arXiv ID: 2303.05876
Title: Triangulations of cosmological polytopes
Authors: Martina Juhnke-Kubitzke, Liam Solus, Lorenzo Venturello
Published: 2023-03-10T12:02:18Z
Link: http://arxiv.org/abs/2303.05876v1
# Triangulations of cosmological polytopes
###### Abstract.
A cosmological polytope is defined for a given Feynman diagram, and its canonical form may be used to compute the contribution of the Feynman diagram to the wavefunction of certain cosmological models. Given a subdivision of a polytope, its canonical form is obtained as a sum of the canonical forms of the facets of the subdivision. In this paper, we identify such formulas for the canonical form via algebraic techniques. It is shown that the toric ideal of every cosmological polytope admits a Gröbner basis with a squarefree initial ideal, yielding a regular unimodular triangulation of the polytope. In specific instances, including trees and cycles, we recover graphical characterizations of the facets of such triangulations that may be used to compute the desired canonical form. For paths and cycles, these characterizations admit simple enumeration. Hence, we obtain formulas for the normalized volume of these polytopes, extending previous observations of Kühne and Monin.
## 1. Introduction
Arkani-Hamed, Benincasa and Postnikov [2] introduced the cosmological polytope \(\mathcal{C}_{G}\) of an undirected, connected graph \(G=(V,E)\), where \(V\) is the finite set of _vertices (or nodes)_ of \(G\) and \(E\) is its finite collection of _edges_; i.e., pairs \(ij\) for some \(i,j\in V\). When we would like to emphasize that \(V\) and \(E\) are, respectively, the vertex and edge set of \(G\), we may write \(V(G)\) and \(E(G)\), respectively. We will use \(ij\) to denote an undirected edge between \(i\) and \(j\), and \((i,j)\) to denote a directed edge \(i\to j\) when edge directions are needed.
We work in the finite real-Euclidean space \(\mathbb{R}^{|V|+|E|}\) with standard basis vectors \(x_{i}\) and \(x_{e}\) for all \(i\in V\), \(e\in E\). The _cosmological polytope_\(\mathcal{C}_{G}\) of \(G\) is
\[\mathcal{C}_{G}=\operatorname{conv}\{x_{i}+x_{j}-x_{e},x_{i}-x_{j}+x_{e},-x_{i }+x_{j}+x_{e}\ :\ e=ij\in E\}.\]
It is only required that the graph \(G\) is connected and undirected with a finite set of vertices and edges. For instance, \(G\) need not be simple. In [11], the authors work with a slight generalization of the definition of \(\mathcal{C}_{G}\) that allows for \(G\) to be disconnected. For the purposes of this paper, however, we will consider only connected \(G\).
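As a concrete illustration of this definition, the following short Python sketch lists the vertex coordinates of \(\mathcal{C}_{G}\) in \(\mathbb{R}^{|V|+|E|}\) for a small graph; the function name and the labeling of the coordinates are ours and are only meant to mirror the displayed definition.

```python
def cosmological_polytope_vertices(vertices, edges):
    """Vertices of C_G in R^{|V|+|E|}; coordinates are indexed first by the
    vertices of G and then by its edges, following the displayed definition."""
    coords = list(vertices) + list(edges)
    index = {c: k for k, c in enumerate(coords)}

    def point(values):
        p = [0] * len(coords)
        for c, val in values.items():
            p[index[c]] = val
        return tuple(p)

    pts = set()
    for e in edges:
        i, j = e
        pts.add(point({i: 1, j: 1, e: -1}))    # x_i + x_j - x_e
        pts.add(point({i: 1, j: -1, e: 1}))    # x_i - x_j + x_e
        pts.add(point({i: -1, j: 1, e: 1}))    # -x_i + x_j + x_e
    return sorted(pts)

# The graph with two vertices joined by a single edge: C_G is a triangle in R^3.
print(cosmological_polytope_vertices([1, 2], [(1, 2)]))
```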
In the physical context, the graph \(G\) can be interpreted as a Feynman diagram, in which case the cosmological polytope provides a geometric model for the computation of the contribution of the Feynman diagram represented by \(G\) to the so-called _wavefunction of the universe_[2]. Recent works study the physics of scattering amplitudes via a generalization of convex polytopes called _positive geometries_[1]. This connection arises via a unique differential form of the positive geometry that has only logarithmic singularities along its boundary. This form is termed its _canonical form_.
**Theorem B**.: _For specific choices of a term order we obtain a facet description of a regular unimodular triangulation of the cosmological polytope \(\mathcal{C}_{G}\), when \(G\) is:_
* _a path (Theorem_ 3.2_);_
* _a cycle (Theorem_ 4.1_);_
* _a tree (Theorem_ 5.11_)._
In the case of paths and cycles, these characterizations are of a relatively simple form that allows for enumeration. We thereby obtain formulas for the normalized volume of \(\mathcal{C}_{G}\) in these two cases. For paths we recover the formula identified in [11], while for the cycle the normalized volume of \(\mathcal{C}_{G}\) was previously unknown. Indeed, our methods enable us to show the following simple formula:
**Theorem C**.: _(Theorem 4.2) The cosmological polytope \(\mathcal{C}_{C_{n}}\) of the \(n\)-cycle \(C_{n}\) has normalized volume_
\[\operatorname{Vol}(\mathcal{C}_{C_{n}})=4^{n}-2^{n}.\]
While the normalized volume of these polytopes provides us with information on the number of summands in the formula (1) for computing \(\Omega_{\mathcal{C}_{G}}\), the explicit description of the facets that we obtain for trees and cycles given in Theorems 3.2, 4.1, and 5.11 allows for the exact computation of this canonical form. Theorem A suggests that such characterizations should be feasible for more general graphs via further analysis of the Gröbner bases identified in this paper.
## 2. Gröbner Bases for the toric ideal of \(\mathcal{C}_{G}\)
In this section, we describe a family of Gröbner bases for the toric ideal of a cosmological polytope with the property that the corresponding initial ideals are squarefree. We start with some definitions. For any undirected graph \(G\) with vertex set \(V\) and edge set \(E\), we define a polynomial ring in \(|V|+4|E|\) variables, each corresponding to a lattice point of \(\mathcal{C}_{G}\). More precisely, we introduce three families of variables:
* A variable \(z_{k}\), for every \(k\in V\cup E\). We refer to these as _\(z\)-variables_.
* Variables \(y_{ije}\) and \(y_{jie}\), for every edge \(e=ij\in E\). We refer to these as _\(y\)-variables_.
* A variable \(t_{e}\) for every edge \(e\in E\). We refer to these as _\(t\)-variables_.
Let \(R_{G}\) be the polynomial ring in these \(|V|+4|E|\) many variables, with coefficients in a field \(K\), and consider the surjective homomorphism of \(K\)-algebras defined by
\[\begin{split}\varphi_{G}:R_{G}&\to K[\mathbf{w}^{p}\;:\;p\in\mathcal{C}_{G}\cap\mathbb{Z}^{V\cup E}],\\ z_{k}&\mapsto w_{k},\\ y_{ije}&\mapsto w_{i}w_{j}^{-1}w_{e},\\ y_{jie}&\mapsto w_{i}^{-1}w_{j}w_{e},\\ t_{e}&\mapsto w_{i}w_{j}w_{e}^{-1}.\end{split}\]
The ideal \(I_{\mathcal{C}_{G}}\coloneqq\ker(\varphi_{G})\) is the _toric ideal_ of \(\mathcal{C}_{G}\). Observe that variables in \(R_{G}\) correspond to lattice points of \(\mathcal{C}_{G}\). When the graph \(G\) is understood from the context, we may simply write \(\varphi\) for \(\varphi_{G}\). We now define some distinguished binomials in \(I_{\mathcal{C}_{G}}\), which will be elements of a Gröbner basis for this ideal.
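The map \(\varphi_{G}\) can be made concrete by recording, for each ring variable, the exponent vector of its image in the \(w\)-variables. The following Python sketch (with our own labeling of the variables) checks, for a single-edge graph, that the two monomials of the binomial \(y_{12e}y_{21e}-z_{e}^{2}\) have the same image, so that this binomial lies in the toric ideal.

```python
def phi_images(graph_edges):
    """Exponent vectors (Laurent monomials in the w-variables) of the images of
    the ring variables under phi_G, for a graph given as a list of edges."""
    img = {}
    for (i, j) in graph_edges:
        e = (i, j)                                  # label of the edge variable w_e
        img[('z', i)] = {i: 1}                      # z_i -> w_i
        img[('z', j)] = {j: 1}                      # z_j -> w_j
        img[('z', e)] = {e: 1}                      # z_e -> w_e
        img[('y', i, j, e)] = {i: 1, j: -1, e: 1}   # y_{ije} -> w_i w_j^{-1} w_e
        img[('y', j, i, e)] = {i: -1, j: 1, e: 1}   # y_{jie} -> w_i^{-1} w_j w_e
        img[('t', e)] = {i: 1, j: 1, e: -1}         # t_e -> w_i w_j w_e^{-1}
    return img

def image_of_monomial(img, variables):
    """Image under phi_G of a monomial, given as a list of variables with repetition."""
    total = {}
    for v in variables:
        for w, exp in img[v].items():
            total[w] = total.get(w, 0) + exp
    return {w: exp for w, exp in total.items() if exp != 0}

# Single edge e = 12: both monomials of y_{12e} y_{21e} - z_e^2 map to w_e^2,
# so this binomial lies in the kernel of phi_G, i.e. in the toric ideal.
img = phi_images([(1, 2)])
e = (1, 2)
print(image_of_monomial(img, [('y', 1, 2, e), ('y', 2, 1, e)]))  # {(1, 2): 2}
print(image_of_monomial(img, [('z', e), ('z', e)]))              # {(1, 2): 2}
```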
**Definition 2.1**.: We define two types of pairs of directed subgraphs of \(G\).
1. Let \(P\) be a path in \(G\), with \(E(P)=\{i_{1}i_{2},i_{2}i_{3},\ldots,i_{k-1}i_{k}\}\). For any partition \((P_{1},P_{2})\) of \(E(P)\) into two nonempty blocks we consider \(E_{1}=\{i_{j}\to i_{j+1}\ :\ i_{j}i_{j+1}\in E(P_{1})\}\), and \(E_{2}=\{i_{j+1}\to i_{j}\ :\ i_{j}i_{j+1}\in E(P_{2})\}\). The pair \((E_{1},E_{2})\) is called a _zig-zag pair_ of \(G\). Moreover, we define the _terminal vertices_ of \((E_{1},E_{2})\) to be \(v_{1}=i_{k}\) and \(v_{2}=i_{1}\).
2. Let \(C\) be a cycle in \(G\), with \(E(C)=\{i_{1}i_{2},i_{2}i_{3},\ldots,i_{k-1}i_{k},i_{k}i_{1}\}\). For any partition \((C_{1},C_{2})\) of \(E(C)\) into two blocks (with one possibly empty) we consider \(E_{1}=\{i_{j}\to i_{j+1}\ :\ i_{j}i_{j+1}\in E(C_{1})\}\), and \(E_{2}=\{i_{j+1}\to i_{j}\ :\ i_{j}i_{j+1}\in E(C_{2})\}\). The pair \((E_{1},E_{2})\) is called a _cyclic pair_ of \(G\).
**Definition 2.2**.: For every zig-zag pair \((E_{1},E_{2})\) we define the _zig-zag binomial_
\[b_{E_{1},E_{2}}=z_{v_{1}}\prod_{e=i\to j\in E_{1}}y_{ije}\prod_{e\in E_{2}}z_{e}-z_{v_{2}}\prod_{e=i\to j\in E_{2}}y_{ije}\prod_{e\in E_{1}}z_{e}.\]
For every cyclic pair \((E_{1},E_{2})\) we define the _cyclic binomial_
\[b_{E_{1},E_{2}}=\prod_{e=i\to j\in E_{1}}y_{ije}\prod_{e\in E_{2}}z_{e}-\prod_{e=i\to j\in E_{2}}y_{ije}\prod_{e\in E_{1}}z_{e}.\]
In the case that either \(E_{1}=\emptyset\) or \(E_{2}=\emptyset\), we call the resulting cyclic binomial a _cycle binomial_. In particular, cycle binomials consist of one monomial containing only \(y\)-variables and one containing only \(z\)-variables.
**Definition 2.3**.: We define the following collection of binomials in \(R_{G}\).
\[B_{G}=\big\{\,y_{ije}y_{jie}-z_{e}^{2},\ y_{ije}t_{e}-z_{i}^{2},\ y_{jie}t_{e}-z_{j}^{2},\ t_{e}z_{e}-z_{i}z_{j},\ y_{ije}z_{j}-z_{i}z_{e},\ y_{jie}z_{i}-z_{j}z_{e}\ :\ e=ij\in E\,\big\}\]
\[\cup\,\big\{\,b_{E_{1},E_{2}}\ :\ (E_{1},E_{2})\text{ a zig-zag or cyclic pair of }G\,\big\}.\]
We refer to the binomials in the first set as the _fundamental binomials_ of \(B_{G}\).
We observe that good term orders exist. For instance, we can consider any lexicographic term order for which \(y\)-variables and \(t\)-variables are larger than any \(z\)-variable. We will now show that, for any undirected, connected graph \(G\), the set \(B_{G}\) is a Gröbner basis for \(I_{\mathcal{C}_{G}}\) with respect to any good term order. To do so, we require a few lemmas.
**Lemma 2.5**.: _Let \(b=\mathbf{m}_{1}-\mathbf{m}_{2}\) be a binomial in \(I_{\mathcal{C}_{G}}\), and assume that no variable divides both \(\mathbf{m}_{1}\) and \(\mathbf{m}_{2}\). If \(t_{e}|\mathbf{m}_{1}\), for some edge \(e=ij\) of \(G\), then \(\mathbf{m}_{1}\) is divisible by the leading term of a fundamental binomial in \(B_{G}\) with respect to any good term order._
Proof.: Assume \(t_{e}|\mathbf{m}_{1}\) and recall that \(\varphi(t_{e})=w_{i}w_{j}w_{e}^{-1}\). Since \(b\in I_{\mathcal{C}_{G}}\), we have that \(\varphi(\mathbf{m}_{1})=\varphi(\mathbf{m}_{2})\). In particular, the variable \(w_{e}\) either appears in the Laurent monomial \(\varphi(\mathbf{m}_{2})\) with a negative exponent, or it appears in \(\varphi(\mathbf{m}_{1}/t_{e})\) with positive exponent. The first case contradicts the fact that \(b\) is irreducible: indeed, since \(t_{e}\) is the only variable for which the variable \(w_{e}\) appears in \(\varphi(t_{e})\) with a negative exponent, this would imply that \(t_{e}|\mathbf{m}_{2}\).
In the second case we have that one of the variables \(v\) for which \(w_{e}\) appears in \(\varphi(v)\) with a positive exponent must divide \(\mathbf{m}_{1}\). These are either \(z_{e}\), \(y_{jie}\) or \(y_{ije}\). This concludes the proof, since the monomials \(t_{e}z_{e}\), \(y_{jie}t_{e}\) and \(y_{ije}t_{e}\) are leading terms of some binomial in \(B_{G}\) with respect to the chosen term order.
We now associate to any binomial \(\mathbf{m}_{1}-\mathbf{m}_{2}\) a pair of directed subgraphs \((\overrightarrow{G_{1}},\overrightarrow{G_{2}})\) of \(G\) in the following way: For any variable \(y_{ije}\) which divides \(\mathbf{m}_{1}\) (respectively \(\mathbf{m}_{2}\)) the graph \(\overrightarrow{G_{1}}\) (respectively \(\overrightarrow{G_{2}}\)) contains the vertices \(i\) and \(j\) and a number of directed edges from \(i\) to \(j\) equal to the degree of \(y_{ije}\) in \(\mathbf{m}_{1}\) (respectively \(\mathbf{m}_{2}\)).
**Definition 2.6**.: For a directed graph \(\overrightarrow{G}\) and a vertex \(i\in V(\overrightarrow{G})\) we define \(\deg_{\overrightarrow{G}}(i)=\operatorname{outdeg}_{\overrightarrow{G}}(i)- \operatorname{indeg}_{\overrightarrow{G}}(i)\), where \(\operatorname{outdeg}_{\overrightarrow{G}}(i)=|\{j\in V(\overrightarrow{G}): (i,j)\in E(\overrightarrow{G})\}|\) and \(\operatorname{indeg}_{\overrightarrow{G}}(i)=|\{j\in V(\overrightarrow{G}): (j,i)\in E(\overrightarrow{G})\}|\). If \(\deg_{\overrightarrow{G}}(i)>0\), we call \(i\) a _positive_ vertex of \(\overrightarrow{G}\). If \(\deg_{\overrightarrow{G}}(i)<0\), we call \(i\) a _negative_ vertex of \(\overrightarrow{G}\).
**Lemma 2.7**.: _Let \(b=\mathbf{m}_{1}-\mathbf{m}_{2}\) be an irreducible binomial in \(I_{\mathcal{C}_{G}}\), and let \((\overrightarrow{G_{1}},\overrightarrow{G_{2}})\) be the associated pair of directed graphs. Assume that no leading term of a fundamental binomial in \(B_{G}\) with respect to a good term order divides \(\mathbf{m}_{1}\) or \(\mathbf{m}_{2}\). Then:_
1. _If_ \(\deg_{\overrightarrow{G_{1}}}(i)<0\)_, then_ \(i\in V(\overrightarrow{G_{1}})\cap V(\overrightarrow{G_{2}})\)_. Moreover, if_ \(i\in V(\overrightarrow{G_{1}})\cap V(\overrightarrow{G_{2}})\)_, then_ \(\deg_{\overrightarrow{G_{1}}}(i)=\deg_{\overrightarrow{G_{2}}}(i)\)_._
2. _If_ \(i\in V(\overrightarrow{G_{1}})\setminus V(\overrightarrow{G_{2}})\) _and_ \(\deg_{\overrightarrow{G_{1}}}(i)>0\) _(_\(i\in V(\overrightarrow{G_{2}})\setminus V(\overrightarrow{G_{1}})\) _and_ \(\deg_{\overrightarrow{G_{2}}}(i)>0\)_, respectively), then_ \(z_{i}|\mathbf{m}_{2}\)__\((z_{i}|\mathbf{m}_{1}\)_, respectively)._
3. _If_ \(e\in E(\overrightarrow{G_{1}})\setminus E(\overrightarrow{G_{2}})\) _(_\(e\in E(\overrightarrow{G_{2}})\setminus E(\overrightarrow{G_{1}})\) _respectively), then_ \(z_{e}|\mathbf{m}_{2}\)__\((z_{e}|\mathbf{m}_{1}\) _respectively)._
Proof.: (1) If \(\deg_{\overrightarrow{G_{1}}}(i)<0\), then the degree of \(w_{i}\) in \(\varphi(\mathbf{m}_{1})\) is negative. Since \(b\in I_{\mathcal{C}_{G}}\), the degree of \(w_{i}\) in \(\varphi(\mathbf{m}_{2})\) is also negative. As the only variables \(v\) such that \(w_{i}\) has negative exponent in \(\varphi(v)\) are of the form \(y_{jie}\) for some vertex \(j\) and edge \(e\), the
claim follows. Since \(i\in V(\overrightarrow{G_{1}})\) there is at least one edge incident to \(i\) in \(\overrightarrow{G_{1}}\). We have then that \(\mathbf{m}_{1}\) is divisible by a variable \(y_{ije}\) or \(y_{jie}\), for some vertex \(j\) and edge \(e\). In particular, we conclude that \(z_{i}\) does not divide \(\mathbf{m}_{1}\) as we assumed that \(y_{ije}z_{i}\) does not divide \(\mathbf{m}_{1}\). By symmetry \(z_{i}\) does not divide \(\mathbf{m}_{2}\). It follows that \(\deg_{\overrightarrow{G_{1}}}(i)\) equals the degree of the variable \(w_{i}\) in \(\varphi(\mathbf{m}_{1})\) and that \(\deg_{\overrightarrow{G_{2}}}(i)\) equals the degree of the variable \(w_{i}\) in \(\varphi(\mathbf{m}_{2})\). Since \(b\in I_{\mathcal{C}_{G}}\), we conclude that \(\deg_{\overrightarrow{G_{1}}}(i)=\deg_{\overrightarrow{G_{2}}}(i)\).
(2) As in the previous case, the number \(\deg_{\overrightarrow{G_{1}}}(i)\) equals the degrees of the variable \(w_{i}\) in \(\varphi(\mathbf{m}_{1})\) and \(\varphi(\mathbf{m}_{2})\). Since \(i\notin V(\overrightarrow{G_{2}})\), the only variable in \(\mathbf{m}_{2}\) which contributes to a positive degree in \(\varphi(\mathbf{m}_{2})\) is \(z_{i}\).
(3) Again, since \(b\in I_{\mathcal{C}_{G}}\), the degrees of \(w_{e}\) in \(\varphi(\mathbf{m}_{1})\) and \(\varphi(\mathbf{m}_{2})\) coincide. By Lemma 2.5 the variable \(t_{e}\) divides neither \(\mathbf{m}_{1}\) nor \(\mathbf{m}_{2}\), and this is the only variable \(v\) such that \(w_{e}\) has a negative degree in \(\varphi(v)\). Hence \(w_{e}\) has positive degree in both \(\varphi(\mathbf{m}_{1})\) and \(\varphi(\mathbf{m}_{2})\). Since \(e\notin E(\overrightarrow{G_{2}})\), the only variable which contributes to a positive degree of \(w_{e}\) in \(\varphi(\mathbf{m}_{2})\) is \(z_{e}\).
The following lemma collects some simple properties of directed acyclic graphs that will be of use.
**Lemma 2.8**.: _Let \(H\) be a directed acyclic graph, with at least one edge and no isolated vertices. Then \(H\) has at least a positive and a negative vertex. Moreover, for every positive vertex \(i\in V(H)\) there exists a negative vertex \(j\in V(H)\) such that \(H\) contains a directed path from \(i\) to \(j\), and for every negative vertex \(j\in V(H)\) there exists a positive vertex \(i\in V(H)\) such that \(H\) contains a directed path from \(i\) to \(j\)._
Proof.: Every directed acyclic graph with at least one edge has at least one source and at least one sink. Since sources are positive vertices and sinks are negative vertices, the first claim holds. The second claim follows from the fact that every vertex in the directed acyclic graph has at least one descendant that is a sink and every vertex has at least one source node as an ancestor.
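For illustration, the following small Python sketch computes the signed degrees \(\deg_{\overrightarrow{G}}\) of a directed acyclic graph and, starting from a positive vertex, follows out-edges to a sink, which is a negative vertex; the example graph and the function names are ours.

```python
def signed_degrees(edges):
    """deg(v) = outdeg(v) - indeg(v) for a directed graph given by its edge list."""
    deg = {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) - 1
    return deg

def path_to_negative(edges, start):
    """From a positive vertex of a directed acyclic graph, follow out-edges
    until a sink is reached; sinks have negative signed degree."""
    out = {}
    for u, v in edges:
        out.setdefault(u, []).append(v)
    path = [start]
    while out.get(path[-1]):
        path.append(out[path[-1]][0])   # acyclicity guarantees termination
    return path

edges = [(1, 2), (2, 3), (4, 3), (2, 5)]
print(signed_degrees(edges))        # {1: 1, 2: 1, 3: -2, 4: 1, 5: -1}
print(path_to_negative(edges, 1))   # [1, 2, 3]; vertex 3 is a sink, hence negative
```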
We are now ready to prove the main result of the section.
**Theorem 2.9**.: _The set \(B_{G}\) is a Gröbner basis of \(I_{\mathcal{C}_{G}}\) with respect to every good term order._
Proof.: Let \(b=\mathbf{m}_{1}-\mathbf{m}_{2}\) be a binomial in \(I_{\mathcal{C}_{G}}\), and assume that no variable divides both \(\mathbf{m}_{1}\) and \(\mathbf{m}_{2}\). We prove that there exists a binomial \(f\in B_{G}\) such that \(\operatorname{lt}(f)|\mathbf{m}_{1}\) or \(\operatorname{lt}(f)|\mathbf{m}_{2}\). This shows that any binomial in \(I_{\mathcal{C}_{G}}\) can be reduced by an element of \(B_{G}\). Since each reduction step produces another binomial and the sequence of reductions terminates, it must terminate with the zero polynomial. In particular, all \(S\)-polynomials obtained from a generating set of binomials of \(I_{\mathcal{C}_{G}}\) reduce to zero, which implies that \(B_{G}\) is a Gröbner basis.
If the leading term of a fundamental binomial in \(B_{G}\) divides either \(\mathbf{m}_{1}\) or \(\mathbf{m}_{2}\), then we conclude.
Assume that no leading term of a fundamental binomial in \(B_{G}\) divides either \(\mathbf{m}_{1}\) or \(\mathbf{m}_{2}\). In particular, by Lemma 2.5, no variable of the form \(t_{e}\) divides either \(\mathbf{m}_{1}\) or \(\mathbf{m}_{2}\). Consider the pair \((\overrightarrow{G_{1}},\overrightarrow{G_{2}})\) of directed subgraphs of \(G\) associated with \(\mathbf{m}_{1}\) and \(\mathbf{m}_{2}\).
If \(\overrightarrow{G_{1}}\) (\(\overrightarrow{G_{2}}\) respectively) has a directed cycle \(C\), then by construction \(\mathbf{m}_{1}\) (\(\mathbf{m}_{2}\) respectively) is divisible by the monomial \(\prod_{\vec{e}=(i,j)\in E(C)}y_{ije}\) which is the leading term of a cycle binomial by definition of good term order and so we conclude.
Assume that both \(\overrightarrow{G_{1}}\) and \(\overrightarrow{G_{2}}\) are directed acyclic. Since \(b\) is irreducible, \(\overrightarrow{G_{1}}\) and \(\overrightarrow{G_{2}}\) do not have any common directed edge, as those would correspond to variables which divide both \(\mathbf{m}_{1}\) and \(\mathbf{m}_{2}\).
Suppose there is a positive vertex \(i\) in \(\overrightarrow{G_{1}}\) such that \(i\in V(\overrightarrow{G_{1}})\setminus V(\overrightarrow{G_{2}})\). Observe that by Lemma 2.7 (2) this implies that \(z_{i}|\mathbf{m}_{2}\). We let \(i_{1}=i\) and \(j_{1}\) be a negative vertex of \(\overrightarrow{G_{1}}\) such that there is a directed path from \(i_{1}\) to \(j_{1}\). By Lemma 2.7 (1), \(j_{1}\) is a negative vertex of \(\overrightarrow{G_{2}}\) as well. By Lemma 2.8, there exists a positive vertex \(i_{2}\) of \(\overrightarrow{G_{2}}\) such that there is a directed path in \(\overrightarrow{G_{2}}\) from \(i_{2}\) to \(j_{1}\). If \(i_{2}\in V(\overrightarrow{G_{1}})\), by Lemma 2.7 (1), we have \(\deg_{\overrightarrow{G_{1}}}(i_{2})=\deg_{\overrightarrow{G_{2}}}(i_{2})>0\). We can then iterate this procedure until one of the following possibilities occurs:
Case 1: \(i_{k}\notin V(\overrightarrow{G_{1}})\). In this case, let \(E_{1}\) be the union of the directed edges of the directed paths from \(i_{t}\) to \(j_{t}\), for \(t=1,\ldots,k-1\) and \(E_{2}\) be the union of the directed edges of the directed paths from \(i_{t+1}\) to \(j_{t}\), for \(t=1,\ldots,k-1\). Hence \((E_{1},E_{2})\) is a zig-zag pair. By definition of the graphs \((\overrightarrow{G_{1}},\overrightarrow{G_{2}})\) we have that \(\prod_{e=(i,j)\in E_{1}}y_{ije}\) divides \(\mathbf{m}_{1}\) and \(\prod_{e=(i,j)\in E_{2}}y_{ije}\) divides \(\mathbf{m}_{2}\). Moreover, by Lemma 2.7 (2), we have that \(z_{i_{k}}|\mathbf{m}_{1}\) and that \(z_{i_{1}}|\mathbf{m}_{2}\). Finally, by Lemma 2.7 (3), \(\prod_{e=(i,j)\in E_{2}}z_{e}\) divides \(\mathbf{m}_{1}\) and \(\prod_{e=(i,j)\in E_{1}}z_{e}\) divides \(\mathbf{m}_{2}\). In particular, \(\mathbf{m}_{1}\) and \(\mathbf{m}_{2}\) are divisible by the two monomials of \(b_{E_{1},E_{2}}\), the binomial corresponding to the zig-zag pair \((E_{1},E_{2})\).
Case 2: \(i_{k}=i_{\ell}\), for some \(\ell<k\). In this case, let \(E_{1}\) be the union of the directed edges of the directed paths from \(i_{t}\) to \(j_{t}\), for \(t=\ell,\ldots,k-1\) and \(E_{2}\) be the union of the directed edges of the directed paths from \(i_{t+1}\) to \(j_{t}\), for \(t=\ell,\ldots,k-1\) together with the directed edges from \(i_{k}\) to \(j_{\ell}\). The pair \((E_{1},E_{2})\) is a cyclic pair. Again by definition of the graphs \((\overrightarrow{G_{1}},\overrightarrow{G_{2}})\), we have that \(\prod_{e=(i,j)\in E_{1}}y_{ije}\) divides \(\mathbf{m}_{1}\) and \(\prod_{e=(i,j)\in E_{2}}y_{ije}\) divides \(\mathbf{m}_{2}\). Moreover, by Lemma 2.7 (3), \(\prod_{e=(i,j)\in E_{2}}z_{e}\) divides \(\mathbf{m}_{1}\) and \(\prod_{e=(i,j)\in E_{1}}z_{e}\) divides \(\mathbf{m}_{2}\). In particular, \(\mathbf{m}_{1}\) and \(\mathbf{m}_{2}\) are divisible by the two monomials of \(b_{E_{1},E_{2}}\), the binomial corresponding to the cyclic pair \((E_{1},E_{2})\). This finishes Case 2.
If there is a positive vertex \(i\) in \(\overrightarrow{G_{2}}\) such that \(i\in V(\overrightarrow{G_{2}})\setminus V(\overrightarrow{G_{1}})\), we can conclude by the same argument as above.
Suppose now that for all vertices \(i\) with \(\deg_{\overrightarrow{G_{1}}}(i)>0\) we have that \(i\in V(\overrightarrow{G_{2}})\) and for all vertices \(i\) with \(\deg_{\overrightarrow{G_{2}}}(i)>0\) we have that \(i\in V(\overrightarrow{G_{1}})\). We initialize \(i_{1}\) to be any of the vertices with \(\deg_{\overrightarrow{G_{1}}}(i)>0\) and, as in the previous case, we start constructing disjoint directed paths from \(i_{t}\) to \(j_{t}\) in \(\overrightarrow{G_{1}}\) and from \(i_{t+1}\) to \(j_{t}\) in \(\overrightarrow{G_{2}}\). Since the graphs \(\overrightarrow{G_{1}}\) and \(\overrightarrow{G_{2}}\) are finite there exists \(k\) such that \(i_{k}=i_{\ell}\) for some \(\ell<k\). Let \(E_{1}\) be the union of the directed edges of the directed paths from \(i_{t}\) to \(j_{t}\), for \(t=\ell,\ldots,k-1\) and \(E_{2}\) be the union of the directed edges of the directed paths from \(i_{t+1}\) to \(j_{t}\), for \(t=\ell,\ldots,k-1\) together with the directed edges from \(i_{k}\) to \(j_{\ell}\). The pair \((E_{1},E_{2})\) is a cyclic pair. Following verbatim Case 2 we obtain that \(\mathbf{m}_{1}\) and \(\mathbf{m}_{2}\) are divisible by
the two monomials of \(b_{E_{1},E_{2}}\), the binomial corresponding to the cyclic pair \((E_{1},E_{2})\). This completes the proof.
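The single-edge graph gives a small sanity check of Theorem 2.9 that can be carried out with a computer algebra system. The sketch below uses SymPy to compute a Gröbner basis of the ideal generated by our reconstruction of the fundamental binomials of Definition 2.3, using a lexicographic order of the kind employed for paths later in the paper; every leading term is expected to be squarefree.

```python
from sympy import symbols, groebner, LT

# Single-edge graph (V = {1, 2}, one edge e), with the lexicographic order
# y12 > y21 > ze > t > z1 > z2.  The six generators below are our
# reconstruction of the fundamental binomials of Definition 2.3.
y12, y21, ze, t, z1, z2 = symbols('y12 y21 ze t z1 z2')
gens = (y12, y21, ze, t, z1, z2)
B = [y12*y21 - ze**2, y12*t - z1**2, y21*t - z2**2,
     t*ze - z1*z2, y12*z2 - z1*ze, y21*z1 - z2*ze]

G = groebner(B, *gens, order='lex')
for g in G:
    print(g, '  leading term:', LT(g, *gens, order='lex'))
# Every leading term printed above is expected to be squarefree.
```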
### Regular unimodular triangulations
Initial ideals of the toric ideal \(I_{A}\) of an integral point configuration \(A\) are in correspondence with regular triangulations of the convex hull of \(A\) into lattice simplices (using no additional vertices). More precisely, the radical of any initial ideal of \(I_{A}\) is the _Stanley-Reisner_ ideal of a regular triangulation of \(\operatorname{conv}(A)\), i.e., the squarefree monomial ideal generated by all monomials corresponding to non-faces of the triangulation (see [15, Theorem 8.3] or [6, Section 9.4]). Moreover, all regular triangulations of \(\operatorname{conv}(A)\) can be obtained in this way. Triangulations corresponding to squarefree initial ideals are _unimodular_, meaning that every maximal simplex has normalized volume one; in particular, the number of facets of the triangulation equals the normalized volume of \(\operatorname{conv}(A)\). Since by Theorem 2.9 the set \(B_{G}\) is a Gröbner basis of \(I_{\mathcal{C}_{G}}\) with respect to any good term order with only squarefree initial terms, we obtain the following corollary.
**Corollary 2.10**.: _Let \(G\) be any graph. The cosmological polytope \(\mathcal{C}_{G}\) has a regular unimodular triangulation._
Corollary 2.10 provides the existence of the desired subdivisions of \(\mathcal{C}_{G}\) for any \(G\). While the result is constructive, the presentation of the resulting triangulations is in the form of their minimal non-faces. In order to apply the formula in (1) to compute the canonical forms \(\Omega_{\mathcal{C}_{G}}\), we require a description of the triangulations in terms of their facets. In the coming sections, we give such characterizations for families of \(G\). To derive these results we will use some observations that can be seen to hold for all regular unimodular triangulations derived from good term orders for any graph \(G\). In this subsection, we collect these results and the relevant notation that will be used throughout the remaining sections.
We start by introducing some notation. In the following, let us assume that we have a graph \(G=(V,E)\) and a good term order. By Theorem 2.9 a Gröbner basis with squarefree initial ideal is given by the fundamental binomials, the zig-zag binomials and the cyclic binomials. Since the cosmological polytope \(\mathcal{C}_{G}\) has dimension \(|V|+|E|-1\), the corresponding regular unimodular triangulation has facets given by all \((|V|+|E|)\)-subsets of the variables \(y_{ije},y_{jie},t_{e},z_{e},z_{i}\) that do not contain any leading term of the binomials in this Gröbner basis.
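For the single-edge graph, the facets of the resulting triangulation can be listed directly from the forbidden pairs given in the next paragraph: the sketch below (pure Python, with our labels for the six lattice points and our reconstructed list of leading terms) enumerates all 3-subsets containing no forbidden pair and counts them; the expected count is 4, the normalized volume of the cosmological polytope of a single edge.

```python
from itertools import combinations

# Lattice points of C_G for a single edge e = 12, labeled by their ring variables:
# t = x_1 + x_2 - x_e,  y12 = x_1 - x_2 + x_e,  y21 = -x_1 + x_2 + x_e,
# z1 = x_1,  z2 = x_2,  ze = x_e.
labels = ['t', 'y12', 'y21', 'z1', 'z2', 'ze']

# Forbidden pairs: leading terms of the (reconstructed) fundamental binomials.
forbidden = [{'y12', 'y21'}, {'y12', 't'}, {'y21', 't'},
             {'t', 'ze'}, {'y12', 'z2'}, {'y21', 'z1'}]

facets = [set(S) for S in combinations(labels, 3)
          if not any(pair <= set(S) for pair in forbidden)]
print(facets)       # the maximal simplices of the triangulation
print(len(facets))  # expected: 4, the normalized volume of this polytope
```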
The fundamental binomials imply that certain 2-subsets of variables cannot be contained in the facets. For each edge \(e=ij\in E\), the 2-subsets to be avoided are the leading terms \(\{y_{ije},y_{jie}\}\), \(\{y_{ije},t_{e}\}\), \(\{y_{jie},t_{e}\}\), \(\{t_{e},z_{e}\}\), \(\{y_{ije},z_{j}\}\), and \(\{y_{jie},z_{i}\}\).
To represent the facets of the triangulation, we introduce a symbol corresponding to each variable: Let \(i\in V\) and \(e=ij\in E\):
* the variable \(z_{i}\) is represented by the symbol \(\circ\). The vertex \(i\) is instead represented by \(\bullet\) if \(z_{i}\) is not present.
* the variable \(z_{e}\) is represented by an undirected edge between \(i\) and \(j\),
* the variable \(t_{e}\) is represented by a second, distinguished edge type between \(i\) and \(j\) (drawn as a wavy edge), and
* the variable \(y_{ije}\) is represented by a directed edge from \(i\) to \(j\).
Given a subset \(S\) of the variables, we write \(G_{S}\) for the graph on the node set \(V\) obtained by drawing the symbols corresponding to the variables in \(S\).
\(y_{jie}t_{e}-z_{j}^{2}\) and \(t_{e}z_{e}-z_{i}z_{j}\). However, since by assumption, none of \(y_{ije}\), \(y_{jie}\) and \(z_{e}\) is contained in \(S\), it follows that \(S\cup\{t_{e}\}\) is also a face of \(\mathcal{T}\). This contradicts the fact that \(S\) is a facet.
In the coming sections we apply these results to derive explicit characterizations of the facets of regular unimodular triangulations of \(\mathcal{C}_{G}\) arising from good term orders on \(R_{G}\) for special instances of \(G\). In Section 3, we characterize the facets of this triangulation for a specific good term order when \(G\) is the path graph. In Section 4, we show that the techniques in Section 3 can be extended to yield an analogous characterization of the facets of a triangulation for the cycle. Finally, in Section 5, we extend the characterization of the facets of the triangulation for paths to general trees.
## 3. The Cosmological Polytope of the Path
In this section, we give an explicit description of the regular unimodular triangulation corresponding to a Gröbner basis with respect to a good term order of the toric ideal for the cosmological polytope of the _n-path_, \(I_{n}\); that is, the graph with vertex set \(V=[n+1]\) and edge set \(E=\{ii+1\ :\ i\in[n]\}\).
A combinatorial description of the facets of this polytope is given that allows for enumeration of the facets. The resulting formula for the normalized volume of \(\mathcal{C}_{I_{n}}\) agrees with the formula identified in [11]. The combinatorial description of the facets may also be used to compute the canonical form of the polytope in a novel way, which may suggest new physical theories for the computation of wavefunctions associated to such Feynman diagrams.
In the following, we use the variable order
\[y_{12}>y_{23}>\cdots>y_{nn+1}>y_{n+1n}>\cdots>y_{32}>y_{21}>z_{12}>\cdots>z_{nn+1}>t_{12}>\cdots>t_{nn+1}>z_{1}>\cdots>z_{n+1}, \tag{3}\]
where for the edge \(e=ii+1\), we write \(y_{ii+1}\) and \(y_{i+1i}\) for the variables \(y_{ii+1e}\) and \(y_{i+1ie}\), respectively. It can be checked that the lexicographic term order on the monomials in \(R_{I_{n}}\), with respect to this ordering of the variables, is a good term order according to Definition 2.4. Since the cosmological polytope \(\mathcal{C}_{G}\) of a graph \(G=(V,E)\) has dimension \(|V|+|E|-1\), the corresponding regular unimodular triangulation has facets given by all \((2n+1)\)-subsets of the variables \(y_{ije},y_{jie},t_{e},z_{e},z_{i}\) that do not contain any leading term of the binomials in this Gröbner basis. Our goal is to characterize these subsets \(S\) in terms of the structure of their graphs \(G_{S}\) defined in Subsection 2.1.
By Proposition 2.12, we know that \(G_{S}\) is connected whenever \(S\) is a facet. We also know from Lemma 2.11 that all edges in \(G_{S}\) are either single or double edges and all double edges are of the form
\[\{(i,i+1),\,ii+1\}\qquad\text{or}\qquad\{(i+1,i),\,ii+1\};\]
that is, each double edge consists of the undirected edge \(ii+1\) together with exactly one of the two directed edges \((i,i+1)\) or \((i+1,i)\). Moreover, the leading terms of the zig-zag binomials imply that \(G_{S}\) cannot contain
any subgraphs of the following form: Let \(\pi=\{ii+1,i+1i+2,\ldots,j-1j\}\) with \(1\leq i<j\leq n+1\) be a subpath of \(I_{n}\). Given a partition \((E_{1},E_{2})\) of the edges of \(\pi\) with \(E_{1}\neq\emptyset\), we call the graph \(G_{R}\) for the set of symbols
\[R=\{y_{\ell\ell+1}\ :\ \ell\ell+1\in E_{1}\}\cup\{z_{\ell\ell+1}\ :\ \ell\ell+1\in E _{2}\}\cup\{z_{j}\}\]
a _partially directed path to the right (ending in \(\circ\))_. For example, if \(S\) is a facet it cannot contain the subset of symbols \(R\) yielding the following graph:
The following lemma collects some additional useful properties of \(G_{S}\) when \(S\) is a facet.
**Lemma 3.1**.: _Let \(S\) be a subset of the variables generating the ring \(R_{I_{n}}\). If \(S\) is a facet of the triangulation and \(\mathfrak{Z}_{S}=\{i_{1}<i_{2}<\cdots<i_{n+1-k}\},\) it follows that_
1. \(G_{S}\) _contains exactly_ \(k\) _double edges,_
2. _the number of double edges in the induced subgraph of_ \(G_{S}\) _on_ \([i_{1}]\) _is_ \(i_{1}-1\)_,_
3. _the number of double edges in the induced subgraph of_ \(G_{S}\) _on_ \(\{i_{j},\ldots,i_{j+1}\}\) _is_ \(i_{j+1}-i_{j}-1\)_, for all_ \(j\in[n-k]\)_, and_
4. _the number of double edges in the induced subgraph of_ \(G_{S}\) _on_ \(\{i_{n-k+1},\ldots,n+1\}\) _is_ \(n+1-i_{n-k+1}\)_._
Proof.: Since \(S\) is a facet we know that \(|S|=2n+1\). We also know from Proposition 2.12 that the support graph of \(G_{S}\) is connected and equal to \(I_{n}\). Hence, \(G_{S}\) contains at least one edge for each of the \(n\) edges of \(I_{n}\). Moreover, by Lemma 2.11 we know that \(G_{S}\) only contains single and double edges. Since \(|Z_{S}|=n+1-k\) and \(|S|=2n+1\), it follows that \(G_{S}\) contains exactly \(k\) double edges.
Consider now the induced subgraph of \(G_{S}\) on the nodes between \(i\) and \(j\), where \(i<j\), \(z_{i},z_{j}\in S\) and \(z_{\ell}\notin S\) for all \(i<\ell<j\). We claim that there are at most \(j-i-1\) double edges in this subgraph. To see this, suppose there are \(j-i\) double edges instead. It follows that all edges in this subgraph are double and of the form specified in Lemma 2.11. Since the subgraph cannot include the fundamental obstruction \(\circ\leftarrow\), it follows that the first pair of double edges is of the form \(\{(i,i+1),ii+1\}\); that is, its directed edge points to the right, away from the white node \(i\).
Since all remaining edges in the subgraph must also be double edges, and since these sets of doubles must each include the undirected edge \(\ell\ell+1\) (for \(i\leq\ell\leq j-1\)), it follows that the subgraph contains a partially directed path to the right ending in \(\circ\), which is a forbidden subgraph by the leading term of some zig-zag binomial. Hence, we have a contradiction.
It then follows from the pigeonhole principle that each subgraph of \(G_{S}\) given by a pair of nodes \(i<j\) with \(z_{i},z_{j}\in S\) but \(z_{\ell}\notin S\) for all \(i<\ell<j\), or by a node \(i\) with \(z_{i}\in S\) but \(z_{\ell}\notin S\) for all \(\ell<i\), or by a node \(i\) with \(z_{i}\in S\) but \(z_{\ell}\notin S\) for all \(i<\ell\), contains exactly as many double edges as it does black nodes. This proves claims (2)-(4).
Based on Lemma 3.1, it can be helpful to consider facets according to their intersection with the set \(Z=\{z_{i}\ :\ i\in[n+1]\}\). If a facet \(S\) is such that \(Z_{S}=S\cap Z=\{z_{i_{1}},\ldots,z_{i_{k}}\}\), where \(i_{1}<i_{2}<\cdots<i_{k}\), we can partition the graph \(G_{S}\) into the induced subgraphs on node sets \(\{1,\ldots,i_{1}\}\), \(\{i_{k},\ldots,n+1\}\) and \(\{i_{j},i_{j}+1,\ldots,i_{j+1}\}\) for all \(j\in[k-1]\), and consider the possible placements of the appropriate number of edges in each induced subgraph so as to ensure that \(|S|=2n+1\). A rule for producing all such graphs in this way will yield a combinatorial description of the facets of the triangulation. The next theorem gives such a characterization of the graphs that correspond to facets of the triangulation. In the following we use \(\leftrightarrow\) to denote that we are free to choose between either arrow (either \(\leftarrow\) or \(\rightarrow\)).
**Theorem 3.2**.: _Let \(S\) be a subset of the generators of the ring \(R_{I_{n}}\) and let \(Z_{S}=\{z_{i_{1}},\ldots,z_{i_{k}}\}\) where \(i_{1}<\cdots<i_{k}\). Then \(S\) is a facet of the triangulation of \(\mathcal{C}_{I_{n}}\) corresponding to the lexicographic order induced by (3) if and only if all three of the following hold:_
1. _The induced subgraph of_ \(G_{S}\) _on the nodes_ \([i_{1}]\) _consists entirely of double edges, each of the form_ \(\{(i+1,i),ii+1\}\)_; that is, all edges are double with a_ \(\leftarrow\)_._
2. _For all_ \(j\in[k-1]\)_, the induced subgraph of_ \(G_{S}\) _on_ \(\{i_{j},i_{j}+1,\ldots,i_{j+1}\}\) _contains exactly one single edge, all remaining edges are double, and it is of one of the following four types:_
   * _the single edge lies strictly to the right of_ \(i_{j}\) _(so that its least vertex is a black node) and represents the variable_ \(t_{e}\)_; the double edge between_ \(i_{j}\) _and_ \(i_{j}+1\) _is of the form_ \(\{(i_{j},i_{j}+1),i_{j}i_{j}+1\}\)_, the double edges strictly between_ \(i_{j}+1\) _and the single edge may be of either form (_\(\leftrightarrow\)_), and all double edges to the right of the single edge are of the form_ \(\{(i+1,i),ii+1\}\)_;_
   * _as in the previous type, except that the single edge is a directed edge_ \(\leftarrow\)_;_
   * _the single edge is the leftmost edge_ \(i_{j}i_{j}+1\) _and represents the variable_ \(t_{e}\)_, and all double edges are of the form_ \(\{(i+1,i),ii+1\}\)_;_
   * _the single edge is the leftmost edge_ \(i_{j}i_{j}+1\) _and is the undirected edge (representing_ \(z_{e}\)_), and all double edges are of the form_ \(\{(i+1,i),ii+1\}\)_._
3. _The induced subgraph of_ \(G_{S}\) _on_ \(\{i_{k},i_{k}+1,\ldots,n+1\}\) _consists entirely of double edges; the double edge between_ \(i_{k}\) _and_ \(i_{k}+1\) _is of the form_ \(\{(i_{k},i_{k}+1),i_{k}i_{k}+1\}\)_, and all remaining double edges may be of either form (_\(\leftrightarrow\)_)._
Proof.: We first observe that any set \(S\) such that \(G_{S}\) satisfies the listed properties is a facet. Notice first that, for any choice of the edges in each of the possible subgraphs, \(G_{S}\) does not contain a subgraph excluded by the fundamental binomials. Furthermore, any partially directed path to the right is either interrupted by a single edge that is a \(t\)-edge or a \(\leftarrow\), or it terminates in a black node. Hence, such a \(G_{S}\) also does not contain any subgraph forbidden by the leading terms of the zig-zag binomials. Since there is exactly one double edge for every black node, it also follows that \(|S|=2n+1\). Since the dimension of \(\mathcal{C}_{I_{n}}\) is \(2n\), it follows that \(S\) is a facet of the triangulation.
Suppose now that \(S\) is a facet of the triangulation, and consider its associated graph \(G_{S}\). Since \(S\) is a facet, we know \(|S|=2n+1\), and by Lemma 3.1 we also know that \(G_{S}\) is connected and any of the induced subgraphs on node sets \(\{1,\ldots,i_{1}\}\), \(\{i_{k},\ldots,n+1\}\) and \(\{i_{j},i_{j}+1,\ldots,i_{j+1}\}\) for \(j\in[k-1]\) contains as many black nodes as it does double edges. It therefore suffices to show that these subgraphs of \(G_{S}\) are of one of the possible forms specified in the above list.
Consider first the induced subgraph of \(G_{S}\) on node set \([i_{1}]\). Since \(S\) is a facet, by Lemma 3.1, we know that every edge in this subgraph is a double edge, and hence of the form \(\{(i,i+1),ii+1\}\) or \(\{(i+1,i),ii+1\}\). Since \(S\) cannot contain the leading term of any fundamental binomial, it does not contain both \(z_{i_{1}}\) and \(y_{i_{1}-1i_{1}}\). Hence, this subgraph must contain the double edge \(\{(i_{1},i_{1}-1),i_{1}-1i_{1}\}\). Similarly, since \(S\) cannot contain the leading term of any zig-zag binomial, this subgraph cannot contain any partially directed paths to the right. It follows that all double edges in this subgraph are of the form \(\{(i+1,i),ii+1\}\). Hence, \(G_{S}\) fulfills the first criterion in the above list.
Similarly, for the induced graph of \(G_{S}\) on node set \(\{i_{j},i_{j}+1,\ldots,i_{j+1}\}\), we know that the graph must be connected and contain exactly \(i_{j+1}-i_{j}-1\) double edges by Lemma 3.1. Hence, there is exactly one single edge in the graph. Suppose that this edge is the leftmost edge (i.e., between \(i_{j}\) and \(i_{j}+1\)). In this case, the edge may represent either \(t_{e}\) or \(z_{e}\), but it cannot be a directed edge. It cannot be a directed edge \(\leftarrow\) (pointing into the white node \(i_{j}\)), since then the leading term of a fundamental binomial would be contained in \(S\). It cannot be a directed edge \(\rightarrow\) either, since all remaining edges in the subgraph must be double (and hence include an undirected edge), so \(S\) would contain the leading term of a zig-zag binomial, which is a contradiction. In a similar fashion, all double edges must be of the form \(\{(i+1,i),ii+1\}\); otherwise \(S\) would contain the leading term of a zig-zag binomial.
Suppose now that the single edge in the subgraph is between \(i_{j}+t\) and \(i_{j}+t+1\) for some \(t\geq 1\). By the same argument as in the previous case, all remaining edges must be double edges and all double edges to the right of \(i_{j}+t+1\) must be of the form \(\{(i+1,i),ii+1\}\). We must also have that the double edge between \(i_{j}\) and \(i_{j}+1\) is of the form \(\{(i_{j},i_{j}+1),i_{j}i_{j}+1\}\), since otherwise \(S\) would contain the leading term of a fundamental binomial. However, all double edges between \(i_{j}+s\) and \(i_{j}+s+1\) for \(1\leq s<t\) can be of either form \(\{(i+1,i),ii+1\}\) or \(\{(i,i+1),ii+1\}\), since the single edge will interrupt any partially directed path to the right. Observe further that the single edge must either represent \(t_{e}\) or be a directed edge \(\leftarrow\), since any other option would combine with the undirected edges and the directed edge between \(i_{j}\) and \(i_{j}+1\) to yield a partially directed path to the right terminating in a \(\circ\). It follows that if \(S\) is a facet,
the corresponding induced subgraphs of \(G_{S}\) on the intervals \(\{i_{j},i_{j}+1,\ldots,i_{j+1}\}\) for all \(j\in[k-1]\) are of the form in item (2) in the above list.
Finally, for the induced subgraph of \(G_{S}\) on node set \(\{i_{k},\ldots,n+1\}\), we know from Lemma 3.1 that all edges are double edges and hence of the form \(\{(i+1,i),ii+1\}\) or \(\{(i,i+1),ii+1\}\). To avoid a subgraph forbidden by a fundamental binomial, we must also have that the double edge between \(i_{k}\) and \(i_{k}+1\) is of the form \(\{(i,i+1),ii+1\}\). However, since the path does not contain any \(\circ\) to the right of node \(i_{k}\), we are free to choose the direction of the arrow in all remaining double edges. Hence, this subgraph is of the form given in item (3) in the above list, which completes the proof.
**Example 3.3**.: According to Theorem 3.2, the facets of the triangulation of \(\mathcal{C}_{I_{2}}\) are given by the following sixteen graphs:
Each of these graphs encodes the collection of vertices of the corresponding facet in the triangulation of \(\mathcal{C}_{I_{2}}\), from which we can recover the facet-defining equations of the simplex and thereby compute the canonical form \(\Omega_{I_{2}}\).
Since the triangulation is unimodular, it follows that the normalized volume of \(\mathcal{C}_{I_{n}}\) is given by the number of graphs \(G_{S}\) that satisfy the properties listed in Theorem 3.2. Using the decomposition of these properties into subgraphs, we can recover the formula for the normalized volume of \(\mathcal{C}_{I_{n}}\) given in [11].
**Corollary 3.4**.: _The normalized volume of \(\mathcal{C}_{I_{n}}\) is \(4^{n}\)._
Proof.: We first deduce a formula for the normalized volume of \(\mathcal{C}_{I_{n}}\) by enumerating the facets of the triangulation using Theorem 3.2. Then we show that this formula reduces to \(4^{n}\).
To enumerate the facets via Theorem 3.2, we first pick a subset \(\{i_{1},\ldots,i_{k}\}\) of \([n+1]\) where we assume \(i_{1}<\cdots<i_{k}\). Let \(S\) be a facet with \(Z_{S}=\{z_{i_{1}},\ldots,z_{i_{k}}\}\). There is only one possible induced subgraph on the node set \([i_{1}]\). The number of possible induced subgraphs of \(G_{S}\) on the node set \(\{i_{k},i_{k}+1,\ldots,n+1\}\) is the following
\[\begin{cases}1,&\text{ if }i_{k}=n+1,\\ 2^{n-i_{k}},&\text{ if }i_{k}<n+1.\end{cases}\]
Given an interval \(\{i_{j},i_{j}+1,\ldots,i_{j+1}\}\), the possible induced subgraphs on this interval of \(G_{S}\) must contain exactly one single edge between \(i_{j}+s\) and \(i_{j}+s+1\) for some \(s\in\{0,1,\ldots,i_{j+1}-i_{j}-1\}\). By Theorem 3.2 we always have two choices for this edge. Further, by the same theorem, when \(s=0\), there are exactly two possible subgraphs
of \(G_{S}\) on this interval. For \(s>0\), there are \(2^{s}\) choices (including the choice of edge type for the single edge). Hence, there are a total of
\[\begin{split}2+\sum_{s=1}^{i_{j+1}-i_{j}-1}2^{s}&=1+\sum_{s=0}^{i_{j+1}-i_{j}-1}2^{s}\\ &=1+(2^{i_{j+1}-i_{j}}-1)\\ &=2^{i_{j+1}-i_{j}}\end{split}\]
possible subgraphs for this interval. The number of facets \(S\) of the triangulation with \(Z_{S}=\{i_{1},\ldots,i_{k}\}\) is then equal to
\[\begin{cases}1\cdot\left(\prod_{j=1}^{k-1}2^{i_{j+1}-i_{j}}\right)\cdot 1,&\text{ if }i_{k}=n+1,\\ 1\cdot\left(\prod_{j=1}^{k-1}2^{i_{j+1}-i_{j}}\right)\cdot 2^{n-i_{k}},&\text{ if }i_{k}<n+1 \end{cases}=\begin{cases}2^{i_{k}-i_{1}},&\text{ if }i_{k}=n+1,\\ 2^{n-i_{1}},&\text{ if }i_{k}<n+1.\end{cases}\]
Summing over all nonempty subsets of \([n+1]\) (splitting into the cases \(i_{k}<n+1\), \(i_{k}=n+1\) with \(k\geq 2\), and \(Z_{S}=\{z_{n+1}\}\)) yields
\[\begin{split}\sum_{\varnothing\neq Z\in 2^{[n]}}2^{n-\min(Z)}+\sum_{\varnothing\neq Z\in 2^{[n]}}2^{n+1-\min(Z)}+1&=\sum_{\ell=1}^{n}2^{n-\ell}\cdot 2^{n-\ell}+\sum_{\ell=1}^{n}2^{n-\ell}\cdot 2^{n+1-\ell}+1\\ &=\sum_{\ell=0}^{n-1}4^{\ell}+2\sum_{\ell=0}^{n-1}4^{\ell}+1\\ &=3\sum_{\ell=0}^{n-1}4^{\ell}+1\\ &=3\cdot\frac{4^{n}-1}{4-1}+1=4^{n},\end{split}\]
which completes the proof.
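The count \(4^{n}\) can also be checked by brute force for small \(n\): the following Python sketch enumerates all \((2n+1)\)-subsets of the variables for the path \(I_{n}\) and discards those containing a forbidden pair or a partially directed path to the right ending in a \(\circ\), following our reading of the leading terms used in this section; the variable labels and the encoding are ours.

```python
from itertools import combinations

def path_facet_count(n):
    """Brute-force count of the (2n+1)-subsets avoiding the forbidden
    configurations used in this section for the path I_n."""
    yf = {i: ('yf', i) for i in range(1, n + 1)}   # y_{i,i+1}
    yb = {i: ('yb', i) for i in range(1, n + 1)}   # y_{i+1,i}
    te = {i: ('t', i) for i in range(1, n + 1)}    # t_e
    ze = {i: ('ze', i) for i in range(1, n + 1)}   # z_e
    zv = {v: ('zv', v) for v in range(1, n + 2)}   # z_i
    variables = (list(yf.values()) + list(yb.values()) + list(te.values())
                 + list(ze.values()) + list(zv.values()))

    # Leading terms of the fundamental binomials: forbidden pairs for each edge.
    forbidden = []
    for i in range(1, n + 1):
        forbidden += [{yf[i], yb[i]}, {yf[i], te[i]}, {yb[i], te[i]},
                      {te[i], ze[i]}, {yf[i], zv[i + 1]}, {yb[i], zv[i]}]

    def has_rightward_obstruction(S):
        # A right-directed edge at position a, a white node at b > a, and every
        # edge strictly in between carrying its z_e- or its forward y-variable.
        for a in range(1, n + 1):
            if yf[a] not in S:
                continue
            for b in range(a + 1, n + 2):
                if zv[b] in S and all(ze[c] in S or yf[c] in S
                                      for c in range(a + 1, b)):
                    return True
        return False

    count = 0
    for T in combinations(variables, 2 * n + 1):
        S = set(T)
        if any(p <= S for p in forbidden) or has_rightward_obstruction(S):
            continue
        count += 1
    return count

for n in (1, 2, 3):
    print(n, path_facet_count(n))  # expected: 4, 16, 64
```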
## 4. The Cosmological Polytope of the Cycle
We now consider the cosmological polytope \(\mathcal{C}_{C_{n}}\) associated with the \(n\)-cycle \(C_{n}\); i.e., the graph with vertex set \(V=[n]\) and edge set \(E=\{ii+1\ :\ i\in[n]\}\), where \(i+1\) is considered modulo \(n\). Via a mild extension of the observations made in Section 3, we can characterize the facets of a regular unimodular triangulation of \(\mathcal{C}_{C_{n}}\) arising from a good term order. This yields a method for computing the canonical form \(\Omega_{C_{n}}\). Furthermore, we can enumerate these facets, yielding a closed formula for the normalized volume of \(\mathcal{C}_{C_{n}}\), which was previously unknown.
We use the notation introduced in Section 3. In particular, we represent sets of variables \(S\) in \(R_{C_{n}}\) with the graphs \(G_{S}\). For the edge \(e=ii+1\) we also write \(y_{ii+1}\) and \(y_{i+1i}\) for the corresponding \(y\)-variables. Just as in Section 3, we consider the triangulation \(\mathcal{T}\) of \(\mathcal{C}_{C_{n}}\) induced by a lexicographic term order with respect to the following ordering of the variables
\[y_{12}>y_{23}>\cdots>y_{n-1n}>y_{n1}>y_{1n}>y_{nn-1}>\cdots>y_{21}>z_{12}>\cdots>z_{n-1n}>z_{n1}>t_{12}>\cdots>t_{n-1n}>t_{n1}>z_{1}>\cdots>z_{n}. \tag{4}\]
This term order is seen to be a good term order (see Definition 2.4).
With respect to such a term order, the leading terms of zig-zag binomials correspond again to partially directed paths ending in a \(\circ\), just as in Section 3. Note that this holds more generally: for the cosmological polytope of any graph, along any induced path (whose internal vertices have degree 2), and with respect to any good term order for which the variables corresponding to one direction of the path are greater than those corresponding to the other direction, the leading terms of the zig-zag binomials are represented by partially directed paths ending in a \(\circ\).
We must now also avoid the subgraphs corresponding to leading terms of cyclic binomials. For cycle binomials, this implies that we must avoid subgraphs that are directed cycles (both clockwise and counter-clockwise), such as
For cyclic binomials that are not cycle binomials, since \(y_{ii+1}>y_{j+1j}\) for every \(i,j\in[n]\) (with addition taken modulo \(n\)), the leading terms will always correspond to partially directed cycles oriented clockwise, such as
Hence, we must avoid subgraphs that are partially directed cycles with a clockwise orientation. The following theorem provides a characterization of the facets of this triangulation in terms of these forbidden subgraphs.
**Theorem 4.1**.: _Let \(S\) be a subset of the generators of the ring \(R_{C_{n}}\) and let \(Z_{S}=\{z_{i_{1}},\ldots,z_{i_{k}}\}\) where \(i_{1}<\cdots<i_{k}\). Then \(S\) is a facet of the triangulation of \(\mathcal{C}_{C_{n}}\) corresponding to the lexicographic order induced by (4) if and only if all of the following hold:_
1. \(Z_{S}\neq\varnothing\)_,_
2. _the induced subgraph of_ \(G_{S}\) _on_ \(\{i_{t},i_{t}+1,\ldots,i_{t+1}\}\) _is of the form described in Theorem_ 3.2 _(2) for all_ \(t\in[k-1]\)_, and_
3. _the induced subgraph of_ \(G_{S}\) _on_ \(\{i_{k},i_{k}+1\mod n,\ldots,i_{1}\}\) _is of the form described in Theorem_ 3.2 _(2) where the right-most node is_ \(i_{1}\)_._
Proof.: Suppose that \(S\) is a facet of \(\mathcal{T}\). Then \(|S|=2n\). If \(Z_{S}=\varnothing\), then by the forbidden subgraphs arising from the fundamental binomials, we know that every edge in \(G_{S}\) is a double edge consisting of an undirected edge together with a directed edge. This, however, would imply that \(G_{S}\) contains a subgraph corresponding to the leading term of a cyclic binomial, which is a contradiction. Hence, \(Z_{S}\neq\varnothing\). The fact that conditions (2) and (3) hold follows from the specified variable ordering and the arguments given in the proof of Theorem 3.2.
Similarly, the converse follows from the arguments given in the proof of Theorem 3.2, with the additional observation that the specified paths between any two white nodes each contain a single edge, and these single edges prevent the existence of clockwise partially directed and directed cycles.
Similar to the results in Section 3, we can use the characterization in Theorem 4.1 to enumerate the facets of the triangulation and derive a closed formula for the normalized volume of \(\mathcal{C}_{C_{n}}\).
**Theorem 4.2**.: _The cosmological polytope of the \(n\)-cycle \(C_{n}\) has normalized volume_
\[\operatorname{Vol}(\mathcal{C}_{C_{n}})=4^{n}-2^{n}.\]
Proof.: Let \(S\) be a facet of the triangulation \(\mathcal{T}\) described above and \(Z_{S}=\{z_{i_{1}},\ldots,z_{i_{k}}\}\) where \(i_{1}<\cdots<i_{k}\). Then the induced subgraph of \(G_{S}\) on \(\{i_{\ell},\ldots,i_{\ell+1\bmod k}\}\) for \(1\leq\ell\leq k\) is of the form described in Theorem 3.2 (2). By the proof of Corollary 3.4 there are \(2^{d_{\ell}}\) possible subgraphs for this interval, where \(d_{\ell}\) denotes the number of edges of the interval. Since \(\sum_{\ell=1}^{k}d_{\ell}=n\), this gives \(\prod_{\ell=1}^{k}2^{d_{\ell}}=2^{n}\) possible graphs \(G_{S}\) with a prescribed set of white vertices. Varying the latter over all non-empty subsets of the vertices, we get a total of \((2^{n}-1)\cdot 2^{n}=4^{n}-2^{n}\) possible graphs \(G_{S}\). It remains to verify that none of these graphs contains a clockwise partially oriented cycle or a completely oriented cycle. To see this, it suffices to note that if any induced subgraph on \(\{i_{\ell},\ldots,i_{\ell+1\bmod k}\}\) for \(1\leq\ell\leq k\) is of the first three types in Theorem 3.2 (2), then the unique single edge already prevents the existence of such a cycle. However, if all considered subgraphs are of the fourth type, then none of the variables \(y_{ii+1\bmod n}\) is present. Hence, \(G_{S}\) contains neither a clockwise partially oriented cycle nor a completely oriented cycle. This finishes the proof.
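The counting step in this proof can be reproduced directly: the sketch below (our own encoding) iterates over the nonempty sets of white vertices of \(C_{n}\), multiplies the \(2^{(\text{arc length})}\) choices for each arc between consecutive white vertices, and compares the total with \(4^{n}-2^{n}\).

```python
from itertools import combinations

def cycle_facet_count(n):
    """Reproduce the count in the proof: choose a nonempty set of white
    vertices of C_n and, for each arc between consecutive white vertices,
    one of 2^(arc length) admissible configurations."""
    total = 0
    for k in range(1, n + 1):
        for white in combinations(range(n), k):
            prod = 1
            for idx in range(k):
                arc = (white[(idx + 1) % k] - white[idx]) % n
                if arc == 0:      # a single white vertex: the arc is the whole cycle
                    arc = n
                prod *= 2 ** arc
            total += prod
    return total

for n in (3, 4, 5):
    print(n, cycle_facet_count(n), 4 ** n - 2 ** n)   # the two numbers agree
```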
## 5. The Cosmological Polytope of a Tree
The description of the facets for a regular unimodular triangulation arising from a good term order for the path in Section 3 can be extended to any tree. To do so, we first specify a good term order associated to an arbitrary tree \(T\) on node set \([n+1]\) that generalizes the term order used in Section 3.
Fix a leaf node \(r\) of \(T\) and consider the associated orientation of \(T\), denoted \(\overrightarrow{T}\), in which all edges are directed away from \(r\). Let \(\preceq_{r}\) denote the partial order on \([n+1]\) given by the distance of a node \(i\in[n+1]\) from \(r\); that is, \(i\preceq_{r}j\) whenever the length of the unique directed path from \(r\) to \(i\) in \(\overrightarrow{T}\) is less than or equal to the length of the unique directed path from \(r\) to \(j\) in \(\overrightarrow{T}\). Fix a linear extension \(<_{r}\) of \(\preceq_{r}\) in the following way:
A _floret_ in a rooted directed tree consists of a node \(i\) and all of its _children_; i.e., the nodes \(j\) such that \(i\to j\) is an edge of the tree. Iterating over the distances \(k=1,2,\ldots\) of the nodes in \(\overrightarrow{T}\) from \(r\), we consider all nodes at distance \(k-1\). These nodes have been totally ordered as \(i_{1}<\cdots<i_{t}\). Iterating over \(i_{\ell}\) for \(\ell=1,\ldots,t\), totally order the children of \(i_{\ell}\). Then totally order the children of \(i_{\ell+1}\) such that all children of \(i_{\ell+1}\) are larger than those of \(i_{\ell}\). For an example consider Figure 2(a).
We then totally order the edges of \(\overrightarrow{T}\) such that for \((i,j),(s,t)\in E(\overrightarrow{T})\) we have \((i,j)\prec(s,t)\) if
1. \(i<_{r}s\), or
2. \(i=s\) and \(j<_{r}t\).
Using these orderings, we can then define a total order \(<\) of the variables \(y_{ij}\), \(z_{ij}\), \(t_{ij}\) and \(z_{i}\) such that
1. If \((i,j),(s,t)\in E(\overrightarrow{T})\) and \((i,j)\prec(s,t)\), then * \(y_{ij}>y_{st}\), * \(y_{ts}>y_{ji}\), * \(z_{ij}>z_{st}\), and * \(t_{ij}>t_{st}\),
2. If \((i,j),(s,t)\in E(\overrightarrow{T})\), then * \(y_{ij}>y_{ts}\), * \(y_{ij}>z_{st}\), * \(y_{ji}>z_{st}\), * \(y_{ij}>t_{st}\),
3. If \(i<_{r}j\), then \(z_{i}>z_{j}\).
This variable ordering is seen to generalize the variable ordering (3), and the associated lexicographic term order on the monomials in \(R_{T}\) is a good term order. Hence, by Corollary 2.10, we obtain a regular unimodular triangulation \(\mathcal{T}\) of \(\mathcal{C}_{T}\).
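The construction of the linear extension \(<_{r}\) and of the edge order \(\prec\) can be phrased as a breadth-first traversal that processes florets in order; the following Python sketch is one such realization, in which the children within a floret are ordered by their labels (a choice the text leaves free), and all names are ours.

```python
from collections import deque

def floret_order(tree_adj, r):
    """Linear extension <_r of the depth order and the induced order of the
    edges of T directed away from the leaf r, built floret by floret."""
    order, parent = [r], {r: None}
    queue = deque([r])
    while queue:
        v = queue.popleft()
        # children of v, ordered here by their labels (a choice the text leaves free)
        for c in sorted(u for u in tree_adj[v] if u not in parent):
            parent[c] = v
            order.append(c)
            queue.append(c)
    pos = {v: k for k, v in enumerate(order)}
    edges = sorted(((parent[c], c) for c in order if parent[c] is not None),
                   key=lambda e: (pos[e[0]], pos[e[1]]))
    return order, edges

# The tree with edges 1-2, 2-3, 2-4, 4-5, rooted at the leaf r = 1.
adj = {1: [2], 2: [1, 3, 4], 3: [2], 4: [2, 5], 5: [4]}
print(floret_order(adj, 1))  # ([1, 2, 3, 4, 5], [(1, 2), (2, 3), (2, 4), (4, 5)])
```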
We note the following property of the chosen edge ordering of \(T\).
**Lemma 5.1**.: _Suppose that \(i<_{r}j\), and let \(\pi=\{i_{1}i_{2},\ldots,i_{k-1}i_{k}\}\) be the unique path in \(T\) between \(i_{1}=i\) and \(i_{k}=j\). Then \((i_{2},i_{1})\prec(i_{k-1},i_{k})\)._
Proof.: Note first that since \(i<_{r}j\), the distance from \(r\) to \(j\) in \(T\) is at least the distance from \(r\) to \(i\) in \(T\). Suppose that these two distances are equal. Let \(\alpha=\min_{<_{r}}\{i_{1},\ldots,i_{k}\}\), and let \(\pi_{1}\) and \(\pi_{2}\) be the unique path between \(i_{1}\) and \(\alpha\) and \(i_{k}\) and \(\alpha\), respectively. We index \(\pi_{1}\) as \(\pi_{1}=\{i_{0,1}i_{1,1},i_{1,1}i_{2,1},\ldots,i_{t-1,1}i_{t,1}\}\) and \(\pi_{2}\) as \(\pi_{2}=\{i_{0,2}i_{1,2},i_{1,2}i_{2,2},\ldots,i_{t-1,2}i_{t,2}\}\) where \(i_{0,1}=i_{0,2}=\alpha\), \(i_{t,1}=i_{1}\) and \(i_{t,2}=i_{k}\). Observe that \(\pi_{1}\) and \(\pi_{2}\) have the same length since \(i\) and \(j\) have the same distance from \(r\).
We claim now that \(i_{j,1}<_{r}i_{j,2}\) for all \(j=1,\ldots,t\). To see this, suppose for the sake of contradiction that there exists a \(j\) for which \(i_{j,1}\succ_{r}i_{j,2}\). Then the floret of \(i_{j,2}\) has its children ordered before those of \(i_{j,1}\). This implies that \(i_{j+1,1}\succ_{r}i_{j+1,2}\). Iterating this argument implies that \(i=i_{t,1}\succ_{r}i_{t,2}=j\), which is a contradiction. Hence, \(i_{j,1}<_{r}i_{j,2}\) for all \(j=1,\ldots,t\). It follows that
\[(i_{2},i_{1})=(i_{t-1,1},i_{t,1})\prec(i_{t-1,2},i_{t,2})=(i_{k-1},i_{k}),\]
as desired.
Now suppose that the distance from \(r\) to \(j\) in \(T\) is strictly larger than the distance from \(r\) to \(i\) in \(T\). Then \(\pi_{1}\) contains only nodes of distance at most \(t\) from \(r\) and \(\pi_{2}\) contains a node at distance \(t+1\) from \(r\), for minimally chosen \(t\). By the chosen edge ordering, the edge in \(\pi_{2}\) from the node at distance \(t\) to the node at distance \(t+1\) is larger than all edges in \(\pi_{1}\). This edge is also seen to be equal to, or smaller than \((i_{k-1},i_{k})\), which completes the proof.
For each edge \(ij\in E(T)\) the fundamental binomials in \(B_{T}\) imply that if \(S\) is a subset of the generators of \(R_{T}\) corresponding to a face of \(\mathcal{T}\), then the graph \(G_{S}\) does not contain any of the subgraphs listed in (2).
Similarly, the zig-zag binomials in \(B_{T}\) imply that the graph \(G_{S}\) must not contain certain subgraphs along paths if \(S\) is a face of \(\mathcal{T}\). These subgraphs generalize the partially directed increasing paths from Section 3 and are defined as follows:
Let \(i_{1},i_{k}\) be vertices of \(T\) such that \(i_{1}\prec_{r}i_{k}\), let \(\pi=\{i_{1}i_{2},i_{2}i_{3},\ldots,i_{k-1}i_{k}\}\) be the unique path in \(T\) between \(i_{1}\) and \(i_{k}\), and let \(\alpha=\min_{\prec_{r}}\{i_{1},\ldots,i_{k}\}\). Further let \(\pi_{1}\) and \(\pi_{2}\) denote the subpaths of \(\pi\) between \(i_{1}\) and \(\alpha\) and \(i_{k}\) and \(\alpha\), respectively. Given the ordering of the variables above, the leading terms of any zig-zag binomial for a zig-zag pair on \(\pi\) have associated graphs being one of the following:
1. Partially directed paths toward \(i_{1}\) ending in \(\circ\) that include a directed edge on \(\pi_{1}\) pointing toward \(i_{1}\).
2. Partially directed paths toward \(i_{k}\) ending in \(\circ\) that include an edge directed toward \(i_{k}\) on \(\pi_{2}\), and
3. Partially directed paths toward \(i_{1}\) ending in \(\circ\) with all edges on \(\pi_{2}\) directed and all edges on \(\pi_{1}\) undirected.
Observe that the paths in (2) include the paths excluded by the leading terms of zig-zag binomials for the path in Section 3 by taking \(i_{1}=\alpha\). To see that these three options contain all possible leading terms of zig-zag binomials, consider a zig-zag pair \((E_{1},E_{2})\) on the path \(\pi\), where \(E_{1}\) are the edges directed toward \(i_{k}\) and \(E_{2}\) are the edges directed toward \(i_{1}\). If either \(E_{1}\) contains edges on \(\pi_{2}\) or \(E_{2}\) contains edges on \(\pi_{1}\), then under the given variable ordering one of the \(y\)-variables represented by these edges is the largest. Hence, under the given term order the leading term of the associated zig-zag pair is represented by a graph of type (1) or (2).
On the other hand, if \(E_{1}\) is all the edges on \(\pi_{1}\) and \(E_{2}\) is all the edges on \(\pi_{2}\), then by Lemma 5.1, the leading term is given by the \(y\)-variables in \(E_{2}\). Hence, such a zig-zag pair is represented by the graphs in (3) listed above.
We call these paths _zig-zag obstructions_. Zig-zag obstructions of the form (i) for \(i=1,2,3\) are called _zig-zag obstructions of type i_.
**Example 5.2**.: In Figure 2 we see four graphs. The first three each contain a zig-zag obstruction of type 1, 2, and 3, respectively, when considered from left-to-right. The second graph, which highlights in red a zig-zag obstruction of type 2, also contains three additional zig-zag obstructions (of the same type). These are given by replacing exactly one of the directed edges with its undirected version, or alternatively by considering the subgraph of the red edges in which we forget the least of the two directed edges. The rightmost graph depicts a graph that contains no zig-zag obstructions.
For the chosen term order, we can further reduce the Grobner basis identified for \(I_{T}\). Consider paths of the form \(\pi=\{i_{1},\ldots,i_{k}\}\) in which \(i_{1}\prec_{r}i_{2}\prec_{r}\cdots\prec_{r}i_{k}\). We define a _simple zig-zag pair of type 1_ as a zig-zag pair \((E_{1},E_{2})\) where \(E_{1}=\{i_{1}\to i_{2}\}\) and \(E_{2}=\{i_{t+1}\to i_{t}\,:\,t\in\{2,\ldots,k-1\}\}\). Consider also paths of the form \(\pi=\{i_{1},i_{2},\ldots,i_{k}\}\) in which \(i_{j}\prec_{r}i_{1}\) for all \(j=2,\ldots,k-1\) but \(i_{1}\prec_{r}i_{k}\). Let \(\alpha=\min_{\prec_{r}}\{i_{1},\ldots,i_{k}\}\), and take \(\pi_{1}\) and \(\pi_{2}\) as before. A _simple zig-zag pair of type 2_ is a zig-zag pair \((E_{1},E_{2})\) on this path where \(E_{1}\) consists of all edges on \(\pi_{1}\) oriented toward \(\alpha\) and \(E_{2}\) consists of all edges on \(\pi_{2}\) oriented toward \(\alpha\).
**Lemma 5.3**.: _Let \(T\) be a tree. The leading term of any zig-zag binomial under the lexicographic order on \(R_{T}\) corresponding to \(\prec\) is divisible by the leading term of a simple zig-zag binomial._
Proof.: Under the given term order, the leading term of the zig-zag binomial for a simple zig-zag pair is graphically represented by a partially directed path from \(i_{1}\) to \(i_{k}\) in which the first edge \(i_{1}\to i_{2}\) is directed toward \(i_{k}\) and all other edges are undirected, plus a symbol for the variable \(z_{i_{1}}\). Given a zig-zag binomial whose leading term is represented by a zig-zag obstruction of type 1, the associated path \(\pi=\{i_{1},\ldots,i_{k}\}\) is such that the subpath \(\pi_{1}\) contains at least one directed edge pointing toward \(i_{1}\). Pick the edge \(i_{s}\gets i_{s+1}\) of this form on \(\pi_{1}\) with \(s\) minimal. The remaining edges between this edge and \(i_{1}\) must be undirected, and hence the subpath on \(\{i_{1},\ldots,i_{s}\}\) is the graphical representation of the leading term of the zig-zag binomial of a simple zig-zag pair of type 1. Hence, the leading term of this zig-zag binomial is divisible by the leading term of a zig-zag binomial of a simple zig-zag pair. The same argument shows that all zig-zag binomials whose leading terms are represented by zig-zag obstructions of type 2 are also divisible by the leading term of some zig-zag binomial for a simple zig-zag pair of type 1.
For zig-zag binomials whose leading terms are represented by zig-zag obstructions of type 3, the subpath \(\{i_{1},\ldots,i_{t}=\alpha,i_{t+1},\ldots,i_{s}\}\), where \(i_{s}\) is the first node on \(\pi_{2}\) larger than \(i_{1}\) under \(\prec_{r}\), is the graphical representation of the leading term of a zig-zag binomial for a simple zig-zag pair of type 2. This follows from Lemma 5.1. Hence, all such zig-zag binomials also have leading terms divisible by the leading term of a zig-zag binomial for a simple zig-zag pair. This completes the proof.
Let \(S\) be the subset of variables in the leading term of a zig-zag binomial for a simple zig-zag pair of type 1. We call the graph \(G_{S}\) a _simple zig-zag obstruction of type 1_. We similarly define _simple zig-zag obstructions of type 2_.
**Example 5.4**.: The zig-zag obstructions in the first and third graphs in Figure 2 are both simple (of type 1 and 2, respectively). On the other hand, the zig-zag obstruction in the second graph is not simple, but contains a simple zig-zag obstruction of type 2 as a subgraph. This obstruction is given by deleting the first of the two directed edges from the graph. Such subgraph inclusions correspond to the divisibility of leading terms as seen in the proof of Lemma 5.3.

Figure 2. The first three graphs are, respectively from left-to-right, examples of zig-zag obstructions of type 1, 2 and 3, with the obstruction depicted in red. The rightmost graph is an example of a graph that contains no zig-zag obstructions. The order \(\prec_{r}\) is the natural order on the vertex set.
Since \(T\) is a tree on vertex set \([n+1]\), the dimension of \(\mathcal{C}_{T}\) is \(|V|+|E|-1=2n\). By Lemma 5.3, the facets of the triangulation \(\mathcal{T}\) of \(\mathcal{C}_{T}\) given by the specified good term order are exactly the subsets \(S\) of the variables generating \(R_{T}\) with \(|S|=2n+1\) for which the graph \(G_{S}\) contains no fundamental obstructions and no simple zig-zag obstructions.
In the following, we say that two nodes \(i,j\) in an undirected graph \(G=(V,E)\) are _connected given a subset_ \(C\subset V\) if there is a path \(\pi=\{i_{1}i_{2},\ldots,i_{k-1}i_{k}\}\) in \(G\) such that \(i_{1}=i\), \(i_{k}=j\) and \(i_{2},\ldots,i_{k-1}\notin C\). We say a subset of vertices \(B\) of \(G\) is _maximally connected given_ \(C\) if all vertices in \(B\) are pairwise connected given \(C\) and there is no pair of vertices \(i\in B\) and \(j\notin B\) such that \(i\) and \(j\) are connected given \(C\). For a tree \(T\) and subset \(S\) of variables in \(R_{T}\), we let \(\overline{G}_{S,1},\ldots,\overline{G}_{S,M}\) denote the induced subgraphs of \(G_{S}\) on the maximally connected subsets of \(T\) given \(\mathfrak{Z}_{S}\). We call the collection of graphs \(\overline{G}_{S,1},\ldots,\overline{G}_{S,M}\) the \(Z_{S}\)_-components_ of \(G_{S}\).
Recall from Proposition 2.12 that the support graph of \(G_{S}\) for \(S\) a facet of the triangulation of \(\mathcal{C}_{T}\) is the tree \(T\). Hence, the support graph of each \(\overline{G}_{S,j}\) is a subtree of \(T\). In the following, we will want to refer to certain subgraphs of \(\overline{G}_{S,j}\) that are induced subgraphs of \(\overline{G}_{S,j}\) on the vertex set of the corresponding induced subgraphs of \(T\). For instance, although a vertex \(i\) in \(\overline{G}_{S,j}\) may have degree greater than \(1\), we will call it a _leaf node_ of \(\overline{G}_{S,j}\) if it is a leaf node in the support graph of \(\overline{G}_{S,j}\). Similarly, we may refer to a subgraph of \(\overline{G}_{S,j}\) as a _path_ in \(\overline{G}_{S,j}\) if it is the induced subgraph of \(\overline{G}_{S,j}\) on the node set of a path in its support graph, despite the fact that it may include multiple edges between the same pair of vertices. This mild abuse of terminology should, however, be clear from context. As a first example, we call the graph \(\overline{G}_{S,j}\)_\(Z_{S}\)-bounded_ if all leaf nodes of \(\overline{G}_{S,j}\) are in \(\mathfrak{Z}_{S}\). Otherwise, we call it _\(Z_{S}\)-unbounded_.
Figure 3. A tree \(T\) and the graph \(G_{S}\) for a subset \(S\) of the variables in the ring \(R_{T}\).
**Example 5.5**.: Consider the tree \(T\) and the graph \(G_{S}\) depicted in Figure 3(a) and Figure 3(b), respectively. For \(G_{S}\), we have that
\[\mathfrak{Z}_{S}=\{6,7,10,11,13,14,21,22,23,24,25\}.\]
The natural order on the vertex set is taken for \(<_{r}\), where \(r=1\), under which we see that \(G_{S}\) contains no fundamental obstructions and no simple zig-zag obstructions. From \(G_{S}\) we also see that \(|S|=2n+1\), where \(n=24\), and so it follows that \(S\) is a facet of the triangulation \(\mathcal{T}\).
The graph \(G_{S}\) has the \(Z_{S}\)-components depicted in Figure 4, in which the two leftmost graphs are \(Z_{S}\)-unbounded and the remaining ones are \(Z_{S}\)-bounded.
To analyze the graphs \(\overline{G}_{S,j}\) it will be helpful to have a notion of separation of vertices by edges. Given a graph \(G\) with vertex set \(V\) and edge set \(E\), we say that a subset of nodes \(A\subset V\) is _separated_ by a subset of edges \(B\subset E\) if for every pair of nodes \(i,j\in A\) every path in \(G\) from \(i\) to \(j\) includes an edge in \(B\). Note that the set \(B\) is a cut-set for which the associated cuts each contain a single vertex in \(A\). The following lemma will be used.
**Lemma 5.6**.: _Let \(T\) be a tree with set of leaf nodes \(A\). If the set \(A\) is separated by \(B\), then \(|B|\geq|A|-1\)._
Proof.: Note that if \(B\) is a set of edges separating the leaf nodes \(A\) of the tree \(T\), then deleting the edges in \(B\) from \(T\) results in a forest in which no two leaves of \(T\) lie in the same connected component, and hence in a forest with at least \(|A|\) connected components. Since removing a single edge from a forest always increases the number of connected components by exactly \(1\), deleting the \(|B|\) edges in \(B\) from \(T\) yields exactly \(|B|+1\) components. Hence \(|B|+1\geq|A|\), which proves the desired lower bound.
**Lemma 5.7**.: _Let \(T\) be a tree with set of leaves \(A\) and let \(B\) be a set of edges in \(T\) that separate \(A\). If \(|B|=|A|-1\), then for each edge \(e\in B\) there exists a unique pair of vertices \(i,j\in A\) such that \(B\) separates \(i\) and \(j\) but \(B\setminus e\) does not._
Proof.: By Lemma 5.6 the set \(B\) is a minimal separating set for \(A\). Hence, removing a single edge \(e=st\in B\) from \(B\) connects at least one pair of leaves of \(T\). Suppose we connect two pairs of leaves, say \(i,j\) and \(k,\ell\). Then, without loss of generality, \(i\) and \(k\) and \(j\) and \(\ell\) are, respectively, on the same side of the single edge \(e\); i.e., they are in the same connected component given by deleting the edge \(e\). Say \(j\) and \(\ell\) are in the component containing \(t\) and \(i\) and \(k\) are in the component containing \(s\).
Figure 4. The \(Z_{S}\)-components of the graph \(G_{S}\) in Figure 3(b).
Since \(T\) is a tree there is a unique path between \(i\) and \(k\), and this path must be the concatenation of the paths between \(i\) and \(s\) and \(k\) and \(s\). Since this path does not contain \(e\), it must contain another edge \(e^{\prime}\in B\). Without loss of generality, suppose \(e^{\prime}\) lies on the path between \(i\) and \(s\). Then \(e^{\prime}\) also lies on the path between \(i\) and \(j\), so \(B\setminus e\) still separates \(i\) and \(j\). This contradicts the assumption that removing \(e\) from the separating set connects \(i\) and \(j\), which completes the proof.
We say that an edge \(e\)_critically separates_ a pair of nodes \(i\) and \(j\) in a graph \(G=(V,E)\) with respect to \(B\subset E\) if \(B\) separates \(i\) and \(j\) but \(B\setminus e\) does not. Lemma 5.7 states that if \(G\) is a tree with set of leaf nodes \(A\) and \(B\) separates \(A\) with \(|B|=|A|-1\), then each edge in \(B\) critically separates a unique pair of leaf nodes in \(G\). For a fixed tree \(T\), we let \(m_{j}\coloneqq|V(\overline{G}_{S,j})\cap\mathfrak{Z}_{S}|\). The following generalizes Lemma 3.1 to arbitrary trees.
**Lemma 5.8**.: _Let \(T\) be a tree on node set \([n+1]\) and let \(S\) be a facet of the triangulation of \(\mathcal{C}_{T}\). If \(\mathfrak{Z}_{S}=\{i_{1}\lessdot_{r}\cdots\lessdot_{r}i_{n+1-k}\}\) it follows that_
1. \(G_{S}\) _contains exactly_ \(k\) _double edges,_
2. _All double edges are of the form_ [one of the admissible double-edge configurations depicted as inline figures in the original]_, and_
3. _each \(Z_{S}\)-component_ \(\overline{G}_{S,j}\) _of_ \(G_{S}\) _contains exactly_ \(m_{j}-1\) _single edges._

Proof.: [...] Suppose that some \(Z_{S}\)-component of \(G_{S}\) contains a path \(\pi\) consisting entirely of double edges between two nodes of \(\mathfrak{Z}_{S}\), and let \(\alpha\) denote the minimal node on \(\pi\) under \(\prec_{r}\). To avoid a simple zig-zag obstruction of type 1,
all directed edges on \(\pi\) must be directed toward \(\alpha\). However, this implies that \(G_{S}\) contains a simple zig-zag obstruction of type 2, which is again a contradiction.
Thus, no such paths of double edges exist, and we conclude that the single edges must separate the nodes in \(V(\overline{G}_{S,j})\cap\mathfrak{Z}_{S}\). Since \(T\) is a tree, we require at least \(m_{j}-1\) such single edges in \(\overline{G}_{S,j}\) by Lemma 5.6. Similarly, if \(\overline{G}_{S,j}\) is \(Z_{S}\)-unbounded, we can consider the induced subgraph of \(\overline{G}_{S,j}\) on all nodes on the unique paths in \(T\) between the vertices in \(V(\overline{G}_{S,j})\cap\mathfrak{Z}_{S}\), and the same argument applies. Hence, there are at least \(m_{j}-1\) single edges in \(\overline{G}_{S,j}\) and at most \(n_{j}-m_{j}\) double edges, where \(n_{j}\) denotes the number of vertices of the support graph of \(\overline{G}_{S,j}\).
We now claim that there are exactly \(m_{j}-1\) single edges in \(\overline{G}_{S,j}\). By our choice of Grobner basis, the graph \(\overline{G}_{S,j}\) also corresponds to a facet of the triangulation of the cosmological polytope of the support graph \(T^{\prime}\) of \(\overline{G}_{S,j}\). Since \(T^{\prime}\) is a tree on \(n_{j}\) vertices, and since \(V(\overline{G}_{S,j})\cap\mathfrak{Z}_{S}\) contains \(m_{j}\) vertices, we know from (1) that \(\overline{G}_{S,j}\) must contain exactly \(n_{j}-m_{j}\) double edges. Equivalently, it must contain exactly \(m_{j}-1\) single edges, which completes the proof.
In the proof of Lemma 5.8 we use the fact that the set of single edges in a \(Z_{S}\)-component \(\overline{G}_{S,j}\) corresponds to a set of edges in the support graph of the component that separates its leaf nodes. In the following, we will simply say that a set of edges in a \(Z_{S}\)-component _separates_ a set of nodes \(A\) if the corresponding edges in the support graph separate \(A\).
By definition, each of the \(Z_{S}\)-components \(\overline{G}_{S,j}\) has a unique minimal node \(r_{S,j}\) under the vertex ordering \(<_{r}\). This node is the root of the induced subgraph of \(\overline{T}\) on the vertex set \(V(\overline{G}_{S,j})\). A root-to-leaf path in this subtree is a path connecting the root node \(r_{S,j}\) to a leaf node \(i\). We can consider the induced subgraph on the node set of such a path in the graph \(\overline{G}_{S,j}\). For simplicity, we refer to such a subpath as a root-to-leaf path in \(\overline{G}_{S,j}\), noting that it may contain multiple edges. When we refer to a leaf-to-root path we imagine reading such a root-to-leaf path backwards from the leaf to the root node.
For each node \(i\in V(\overline{G}_{S,j})\cap(\mathfrak{Z}_{S}\setminus\{r_{S,j}\})\) consider the first single edge encountered along the leaf-to-root path from \(i\) to \(r_{S,j}\) in \(\overline{G}_{S,j}\). By Lemma 5.7, this edge critically separates a unique pair of vertices \(s,t\in V(\overline{G}_{S,j})\cap\mathfrak{Z}_{S}\) (with respect to the set of single edges in \(\overline{G}_{S,j}\)). Moreover, it can be seen by applying a similar argument as in the proof of Lemma 5.7 that one of these two vertices must be \(i\), say \(s=i\). Let \(\pi=\{i_{1}i_{2},\ldots,i_{k-1}i_{k}\}\) denote the unique path in \(T\) between \(i\) and \(t\), and let \(\alpha=\min_{<_{r}}\{i_{1},\ldots,i_{k}\}\). We call the path between \(\alpha\) and \(i\) the _threshold path for \(i\)_. If we take \(i=i_{k}\) and \(\alpha=i_{t}\) for some \(t\in[k]\) in \(\pi\), we see that \(i_{t}\prec_{r}i_{t+1}\prec_{r}\cdots\prec_{r}i_{k}\); that is, the threshold path is a decreasing path from \(i\) to \(\alpha\) under the ordering \(\prec_{r}\). It follows that the threshold path always contains the unique single edge on \(\pi\) separating \(i\) and \(t\) as the leaf-to-root path used to define the threshold path is also decreasing.
If \(i\prec_{r}t\) it follows that the threshold path is the path \(\pi_{1}\), and if \(t\prec_{r}i\) it is \(\pi_{2}\). In the former case, we say the threshold path is _type 1_, and we say it is _type 2_ in the latter case. A threshold path of type 1 is _blocking_ if the portion of the path between \(i\) and the first single edge consists only of undirected edges paired with directed edges pointing away from \(i\) and the single edge is a directed edge pointing away from \(i\) or a \(\sim\sim\). A threshold path of type 2 is _blocking_ if the portion of the path between \(i\) and the first single edge consists only of undirected edges paired with directed edges pointing away from \(i\) and one of the following holds:
1. The single edge is undirected and all directed edges on the threshold path point away from \(i\), or
2. the single edge is \(\sim\sim\), or
3. the single edge is a directed edge pointing away from \(i\) and at least one directed edge on the portion of the path between \(\alpha\) and this single edge points toward \(i\).
**Example 5.9**.: We consider some examples of threshold paths in the rightmost \(Z_{S}\)-component for the graph \(G_{S}\) in Example 5.5. For instance, the threshold path for the vertex \(22\) in this \(Z_{S}\)-component is

The first single edge encountered on the leaf-to-root path from \(22\) to \(r=1\) is the undirected edge between vertices \(13\) and \(16\) depicted here. This edge critically separates the two nodes \(13,22\in\mathfrak{Z}_{S}\) (with respect to the set of all single edges in the \(Z_{S}\)-component), and hence the threshold path for \(22\) is the entire path in the \(Z_{S}\)-component between \(22\) and \(13\). Since \(13<_{r}22\), this is a threshold path of type \(2\). We see that it is blocking since it satisfies (1).
As a second example, consider the leaf node \(25\) in the same \(Z_{S}\)-component. The first edge on its leaf-to-root path is a single edge. This edge critically separates nodes \(22,25\in\mathfrak{Z}_{S}\). The path between these nodes is depicted on the left in the following, and the threshold path for \(25\) is depicted on the right:
Since \(22<_{r}25\), the threshold path for \(25\) is also type \(2\), and we see that it is blocking by (2).
As a third and final example, consider the leaf-to-root path from node \(21\) in the same \(Z_{S}\)-component. As the first edge on this path is a single edge which critically separates nodes \(21\) and \(22\), we have that the path between nodes \(21\) and \(22\) is that depicted on the left, and the threshold path for \(21\) is that depicted on the right:
Since \(21<_{r}22\) this is a threshold path of type \(1\), which is seen to be blocking since the single edge is a \(\sim\sim\).
We will use the notion of blocking paths in our characterization of the facets of the triangulation of \(\mathcal{C}_{T}\). We additionally require one more type of path. Given a
node \(i\) of the graph \(T\) we say that a node \(j\)_covers_\(i\) if \(i<_{r}j\) and no node along the unique path from \(j\) to \(r\) in \(T\) is larger than \(i\). Let \(\pi=\{i_{1}i_{2},\ldots,i_{k-1}i_{k}\}\) be the unique path in \(T\) between \(i\) and \(j\) and let \(\alpha=\min_{<_{r}}\{i_{1},\ldots,i_{k}\}\). A _partially directed branching_ from \(j\) to \(i\) is a partial orientation of \(\pi\) such that all edges along the path between \(i\) and \(\alpha\) are undirected and all edges along the path between \(j\) and \(\alpha\) are directed toward \(\alpha\).
**Example 5.10**.: We consider the \(Z_{S}\)-component depicted in Figure 4(a) for the graph \(G_{S}\) in Figure 3(b). The node \(23\) is covered by nodes \(24\) and \(25\). We see from inspection of the graph that there are no partially directed branchings in this \(Z_{S}\)-component from \(24\) to \(23\) or from \(25\) to \(23\). However, if we were to reverse the direction of the edge between nodes \(16\) and \(18\) we would then have a partially directed branching from \(24\) to \(23\), as depicted in red in Figure 4(b).
The following theorem characterizes the facets of the triangulation of \(\mathcal{C}_{T}\) for \(T\) a tree under the specified good term order.
**Theorem 5.11**.: _Let \(S\) be a subset of generators of \(R_{T}\) where \(T\) is a tree on node set \([n+1]\). Let \(\overline{G}_{S,1},\ldots,\overline{G}_{S,M}\) be the \(Z_{S}\)-components of \(G_{S}\). The set \(S\) is a facet of the triangulation \(\mathcal{T}\) of \(\mathcal{C}_{T}\) corresponding to a lexicographic order induced by the order \(<\) if and only if the following hold:_
1. \(G_{S}\) _is connected,_
2. \(G_{S}\) _contains only single and double edges, where all double edges are of the form_ [one of the admissible double-edge configurations depicted as inline figures in the original]_, and_
3. _for each \(Z_{S}\)-component_ \(\overline{G}_{S,j}\) _of_ \(G_{S}\)_:_
    (a) \(\overline{G}_{S,j}\) _contains exactly_ \(m_{j}-1\) _single edges,_
    (b) _the threshold path for each node_ \(i\in V(\overline{G}_{S,j})\cap(\mathfrak{Z}_{S}\setminus\{r_{S,j}\})\) _is blocking,_
    (c) _there is no partially directed branching from_ \(j\) _to_ \(i\) _for any node_ \(i\in V(\overline{G}_{S,j})\cap\mathfrak{Z}_{S}\) _and any node_ \(j\) _covering_ \(i\)_, and_
    (d) _the edges incident to the root node_ \(r_{S,j}\) _are configured in one of the admissible ways_ [depicted as inline figures in the original]_._
Proof.: Assume first that \(S\) is a facet. By Proposition 2.12 we know that (1) is satisfied. By Lemma 5.8 we know that (2) and (3)(a) are also satisfied.
To see that (3)(b) holds, consider a leaf-to-root path from \(i\in V(\overline{G}_{S,j})\cap(\mathfrak{Z}_{S}\smallsetminus\{r_{S,j}\})\) and the portion of this path up to and including its first single edge. We note that this is a subpath of the threshold path for \(i\). Since \(S\) is a facet, all edges before the single edge are double, and by (2) they are of the specified form above. Since the path is leaf-to-root, reading its vertices from the single edge out toward \(i\) gives an increasing sequence of nodes under the ordering \(<_{r}\). Hence, if any of these double edges contained an arrow pointing toward \(i\), then \(G_{S}\) would contain a simple zig-zag obstruction of type 1, contradicting that \(S\) is a facet.
Consider now the unique pair of vertices in \(V(\overline{G}_{S,j})\cap\mathfrak{Z}_{S}\) critically separated by the single edge on this path, one of which is \(i\) and the other of which we denote by \(t\). Denote this path by \(\pi\), and consider its associated \(\alpha\)-value, and the two paths \(\pi_{1}\) and \(\pi_{2}\) between \(\alpha\) and \(i\) and \(\alpha\) and \(t\). Here, we let \(\pi_{1}\) denote the path between \(\alpha\) and the least of the two vertices \(i\) and \(t\) under \(\prec_{r}\), and \(\pi_{2}\) denote the path between \(\alpha\) and the largest of the two. Note that the single edge is always on the path \(\pi_{1}\) or \(\pi_{2}\) that contains \(i\). Since the given single edge is the unique separator of \(i\) and \(t\), we know that all other edges on this path are double and of the form specified in (2). If \(i<_{r}t\) and the single edge is on \(\pi_{1}\) (the path from the least of the two nodes \(i\) and \(t\) to \(\alpha\)), then \(\pi_{2}\) must consist only of double edges directed toward \(\alpha\). Otherwise, \(G_{S}\) would contain a simple zig-zag obstruction of type 1. Since all edges on \(\pi_{1}\) are also double edges except for the single edge, \(G_{S}\) would contain a simple zig-zag obstruction of type 2 if the single edge were undirected. Since it would contain a simple zig-zag obstruction of type 1 if the edge were directed toward \(i\), the only valid options are a \(\sim\sim\) or an edge directed toward \(\alpha\). Hence, the threshold path for \(i\) is of type 1 and blocking.
On the other hand, if \(t<_{r}i\), then the unique single edge on the path between \(i\) and \(t\) would lie on \(\pi_{2}\) (i.e., the path from the largest of the two nodes \(i\) and \(t\) to \(\alpha\)). Hence, all edges on \(\pi_{1}\) are double and directed toward \(\alpha\) to avoid simple zig-zag obstructions of type 1. We note that the single edge cannot be a directed edge toward \(i\) as this would lead to a simple zig-zag obstruction of type 1 as well. If the edge is undirected, then \(G_{S}\) contains no simple zig-zag obstructions of type 1 only if all directed edges on \(\pi_{2}\) point toward \(\alpha\). If the single edge is \(\sim\sim\), all directed edges between the single edge and \(i\) must point toward \(\alpha\). Finally, if the single edge is directed toward \(\alpha\) there must be at least one directed edge between \(\alpha\) and the single edge pointing toward \(i\). Otherwise, \(G_{S}\) would contain a simple zig-zag obstruction of type 2. Hence, the threshold path must also be blocking in this case. Therefore, (3)(b) holds for \(S\) a facet.
To see that (3)(c) holds, note that if there was such a partially directed branching then \(G_{S}\) would necessarily contain a simple zig-zag obstruction of type 2, a contradiction to \(S\) being a facet.
To see that (3)(d) holds, note that any other choice of edge configuration at \(r_{S,j}\) would imply that \(G_{S}\) contains a fundamental obstruction. Hence, the listed conditions are all satisfied if \(S\) is a facet of the triangulation.
Conversely, suppose that the listed conditions are satisfied by \(S\). Let \(|\mathfrak{Z}_{S}|=n+1-k\). We will show now that (3)(a) implies that \(G_{S}\) contains exactly \(k\) double edges. Since \(G_{S}\) is a connected graph on vertex set \([n+1]\), it contains at least \(n\) edges. If exactly \(k\) of these edges are double then \(|S|=2n+1\), which is the correct size for \(S\) to be a facet. To see this claim, assume (3)(a) holds; i.e., assume that each \(\overline{G}_{S,j}\) contains exactly \(m_{j}-1\) single edges, where we let \(m_{j}=|V(\overline{G}_{S,j})\cap\mathfrak{Z}_{S}|\). We induct on the number \(M\geq 1\) of \(Z_{S}\)-components in \(G_{S}\). The result is seen to hold in the case that \(M=1\), as in this case \(m_{j}-1=(n+1-k)-1\). Suppose now that the result holds for \(G_{S}\) with at most \(M-1\geq 1\) \(Z_{S}\)-components, and consider \(G_{S}\) with \(M\) \(Z_{S}\)-components. Since \(M>1\), there is at least one \(Z_{S}\)-component containing a sink node of \(\overrightarrow{T}\) that does not contain the root node \(r\). Suppose this component is \(\overline{G}_{S,j}\) and that its support graph has \(n_{j}\) edges. Note that \(r_{S,j}\) is necessarily a \(\circ\) node, as \(r_{S,j}\neq r\).
We then have that the subgraph of \(T\) given by deleting all edges of the support graph of \(\overline{G}_{S,j}\) is a tree containing \(n-n_{j}\) edges. Since \(\overline{G}_{S,j}\) does not contain \(r\) and does contain a sink node of \(\overrightarrow{T}\), deleting all vertices in \(V(\overline{G}_{S,j})\setminus\{r_{S,j}\}\) and all edges incident to these vertices from \(G_{S}\) results in a graph \(\widetilde{G}_{\widetilde{S}}\) that contains \(n+1-k-(m_{j}-1)=(n+1-k)-m_{j}+1\) \(\circ\) nodes. By the inductive hypothesis, \(\widetilde{G}_{\widetilde{S}}\) contains \((n+1-k)-m_{j}\) single edges. By assumption, \(\overline{G}_{S,j}\) contains \(m_{j}-1\) single edges. Hence, \(G_{S}\) contains \(n-k\) single edges, or equivalently, \(k\) double edges.
Since \(|S|=2n+1\), which is the correct size for \(S\) to be a facet, it only remains to see that the graph \(G_{S}\) contains no fundamental obstructions and no simple zig-zag obstructions. The fact that \(G_{S}\) contains no fundamental obstructions follows from (2) together with (3)(d) and (3)(b) (when we consider the definition of blocking). The fact that \(G_{S}\) contains no simple zig-zag obstructions follows from (3)(b) and (3)(c), which completes the proof.
## 6. Open problems
We conclude with a few problems of interest left open by the article. As we worked out in the case of cycles and trees, a combinatorial analysis of the Grobner basis presented in Section 2 reveals an explicit facet description for the corresponding triangulation. Moreover, one of the features of having a regular unimodular triangulation is that the computation of the volume can be reduced to counting the facets. It would be interesting to push this understanding further.
**Problem 6.1**.: Obtain a facet description for a regular unimodular triangulation of the cosmological polytope of more general families of graphs. Can the volume of the cosmological polytope of an arbitrary graph be expressed in terms of elementary graph invariants?
From a combinatorial viewpoint one is typically interested in finer invariants than the volume of a lattice polytope. One popular instance is the _\(h^{*}\)-polynomial_; that is, the univariate polynomial with integer coefficients which arises as the numerator of the _Ehrhart series_ of a lattice polytope (see for instance [4, Chapter 3]). As an example, we observed experimentally that the \(h^{*}\)-polynomial of the cosmological polytope of a tree on \(n+1\) vertices equals \(h^{*}(t)=(1+3t)^{n}\).
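Since the normalized volume of a lattice polytope equals \(h^{*}(1)\), this experimentally observed formula would give, for a tree \(T\) on \(n+1\) vertices (so that \(\dim\mathcal{C}_{T}=2n\)),

\[\operatorname{Ehr}_{\mathcal{C}_{T}}(t)=\frac{h^{*}(t)}{(1-t)^{2n+1}}=\frac{(1+3t)^{n}}{(1-t)^{2n+1}},\qquad\operatorname{nvol}(\mathcal{C}_{T})=h^{*}(1)=4^{n},\]

in agreement with the \(4^{n}\) facets of a unimodular triangulation mentioned in the discussion following Problem 6.3 below.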
**Problem 6.2**.: Find formulas for the \(h^{*}\)-polynomial of a cosmological polytope \(\mathcal{C}_{G}\) in terms of graph invariants of \(G\).
Finally, since, as we commented in the introduction, the computation of the canonical form of a polytope can be reduced to computing the canonical forms of the facets of any triangulation, we propose the following problem:
**Problem 6.3**.: Describe triangulations of cosmological polytopes with the minimum number of facets.
Choosing a lexicographic term order for which \(z\)-variables are larger than the other variables, the corresponding initial ideal will contain all squares of \(z\)-variables as generators, since each of them is the leading term of some fundamental binomial. While the facets of the triangulation obtained in this way do not all have minimum volume, their number can be significantly smaller than in the unimodular case. For example, experiments suggest that this idea gives a triangulation of the cosmological polytope of a tree with \(n+1\) vertices which consists of \(2^{n-1}\) facets, compared to the \(4^{n}\) facets of a unimodular triangulation.
### Acknowledgements
The authors would like to thank Paolo Benincasa, Lukas Kuhne and Leonid Monin for helpful discussions. We would also like to thank the _2022 Combinatorial Coworkspace: A Session in Algebraic and Geometric Combinatorics_ at Haus Bergkranz in Kleinwalsertal, Austria where this work began. Liam Solus was supported by the Wallenberg Autonomous Systems and Software Program (WASP) funded by the Knut and Alice Wallenberg Foundation, the Digital Futures Lab at KTH, the Goran Gustafsson Stiftelse Prize for Young Researchers, and Starting Grant No. 2019-05195 from The Swedish Research Council (Vetenskapsradet).
|
2309.01493
|
HAGRID -- High Accuracy GRB Rapid Inference with Deep learning
|
Since their discoveries in 1967, Gamma-Ray Bursts (GRBs) continue to be one
of the most researched objects in astrophysics. Multi-messenger observations
are key to gaining a deeper understanding of these events. In order to
facilitate such measurements, fast and accurate localization of the gamma-ray
prompt emission is required. As traditional localization techniques are often
time consuming or prone to significant systematic errors, here we present a
novel method which can be applied on the POLAR-2 observatory. POLAR-2 is a
dedicated GRB polarimeter, which will be launched towards the China Space
Station (CSS) in 2025. The CSS provides POLAR-2 access to a GPU, which makes it
possible and advantageous to run a Deep Learning model on it. In this work, we
explore the possibility to identify GRBs in real time and to infer their
location and spectra with deep learning models. Using POLAR simulations and
data, a feasibility experiment was performed to implement this method on
POLAR-2. Our results indicate that using this method, in combination with real
time data downlinking capabilities, POLAR-2 will be able to provide accurate
localization alerts within 2 minutes of the GRB onset.
|
Merlin Kole, Gilles Koziol, David Droz
|
2023-09-04T10:01:18Z
|
http://arxiv.org/abs/2309.01493v1
|
# HAGRID -- High Accuracy GRB Rapid Inference with Deep learning
###### Abstract:
Since their discoveries in 1967, Gamma-Ray Bursts (GRBs) continue to be one of the most researched objects in astrophysics. Multi-messenger observations are key to gaining a deeper understanding of these events. In order to facilitate such measurements, fast and accurate localization of the gamma-ray prompt emission is required. As traditional localization techniques are often time consuming or prone to significant systematic errors, here we present a novel method which can be applied on the POLAR-2 observatory. POLAR-2 is a dedicated GRB polarimeter, which will be launched towards the China Space Station (CSS) in 2025. The CSS provides POLAR-2 access to a GPU, which makes it possible and advantageous to run a Deep Learning model on it. In this work, we explore the possibility to identify GRBs in real time and to infer their location and spectra with deep learning models. Using POLAR simulations and data, a feasibility experiment was performed to implement this method on POLAR-2. Our results indicate that using this method, in combination with real time data downlinking capabilities, POLAR-2 will be able to provide accurate localization alerts within 2 minutes of the GRB onset.
## 1 Introduction
Gamma-Ray Bursts (GRBs) are the most violent explosions in the Universe. They consist of a bright prompt emission, mostly visible in gamma-rays, followed by a longer-lasting afterglow visible at lower wavelengths. The prompt emission is so bright that it can relatively easily be observed using wide field of view instruments. For typical GRBs with a fluence of \(1\times 10^{-5}\,\mathrm{erg/cm^{2}}\), an effective area of several tens of \(\mathrm{cm}^{2}\) typically suffices for a detection. The afterglow, however, is significantly fainter, and therefore often requires narrow field of view instruments to achieve the signal-to-background ratio required for observations. Successful afterglow measurements therefore require fast alerts from other observatories, such as wide field of view gamma-ray detectors, with an accurate location in order to catch the afterglow in time. Similarly, instruments such as MAGIC, HESS and CTA rely on such alerts to be able to redirect their telescopes to catch the GeV or TeV emission from such events.
The need for fast alerts underlines the vital role gamma-ray detectors will play in the quickly evolving multi-messenger field. This is also indicated by the large number of instruments currently in preparation with accurate localization as their primary science goal. Examples of this are Moon-BEAM [1] and HERMES [2]. Such detectors require not only large sensitivity and localization capabilities, but also a system to perform fast localization calculations and to send the alerts or data to ground.
Although not designed with this as its primary science goal, the POLAR-2 detector (see the ICRC proceedings [3] for details) is competitive in its localization capabilities with, for example, the previously mentioned missions. The wide field of view detector, detailed in section 2, has the largest effective area of any gamma-ray detector in space since BATSE on CGRO. This, combined with it observing half the sky, allows it to detect a large number of very weak GRBs such as GRB 170817A. In addition, the segmented nature of the detector makes it sensitive to the incoming direction of the GRB. Most importantly though, the POLAR-2 detector will be placed on the China Space Station (CSS), which provides it access to on-board GPU computing facilities as well as real time telemetry to Earth. Given these characteristics, POLAR-2 will be able to detect and localize a large fraction of all GRBs observed in the sky and subsequently submit detailed location and spectral information to ground within minutes.
In order to succeed with this, reliable, fast online analysis software needs to be developed. The goal of this software is to autonomously detect GRBs in the POLAR-2 data, in real time, and subsequently perform localization and spectral analysis on the GRB data. As this software will be run on a GPU, a Deep Learning approach was investigated and tested using real data from the predecessor of POLAR-2, POLAR. Here we will present the various Deep Learning methods employed on the POLAR data, their results and how these can be employed in the future for POLAR-2.
In these proceedings we will first discuss the POLAR-2 detector, particularly focusing on the characteristics which allow it to perform localization measurements. This is followed by a discussion of existing, traditional localization methods and of why deep learning methods can be advantageous. Subsequently, we will present the results from the first version of the software we developed here, called the High Accuracy GRB Rapid Inference with Deep learning (HAGRID) method. We will finish with a discussion on the prospects for POLAR-2 and further studies to be performed in the near future.
## 2 POLAR-2
POLAR-2 is a dedicated GRB polarimeter being developed by a collaboration from Switzerland, Poland, Germany and China. The mission is the successor of the POLAR mission which performed polarization measurements of 14 GRBs in 2016 and 2017. POLAR-2 will be launched towards the CSS in 2025 or 2026. From there it will observe half the sky for a duration of at least 2 years.
The POLAR-2 detector, which is presented in detail in [3], measures the polarization of gamma-rays using a segmented scintillator detector. The detector, shown on the right of figure 1, therefore consists of an array of 6400 scintillator bars with dimensions of \(5.9\times 5.9\times 125\,\mathrm{mm}^{3}\), each of which is read out using a Hamamatsu S13 MPPC. The plastic scintillator bars are optimized for the purpose of polarimetry, meaning that they have a low atomic number to optimize the cross section for Compton scattering (required to measure the polarization) in the 20-800 keV energy range in which POLAR-2 will be sensitive.
In order to measure polarization, an incoming gamma-ray needs to Compton scatter in one plastic scintillator and subsequently interact in a second one. This allows one to measure the azimuthal Compton scattering angle, which is correlated to the intrinsic polarization. Although POLAR-2 is optimized to measure this, the efficiency for such interactions is relatively small. This is because, as opposed to the preferred interaction mechanism, some gamma-rays can undergo photo-absorption only, scatter out of the detector after their first Compton scattering interaction, or scatter several times in the array. As a result, only about 10% of all the incoming photons can be used for gamma-ray polarimetry, making such measurements statistically starved.
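As a reminder of how the azimuthal scattering angle encodes the polarization (this is the standard Compton polarimetry relation, not a result specific to this work), the distribution of azimuthal scattering angles \(\phi\) for a source with polarization fraction \(P\) and polarization angle \(\phi_{0}\) follows a modulation curve of the form

\[N(\phi)\propto 1+\mu\,P\cos\bigl(2(\phi-\phi_{0})\bigr),\]

where \(\mu\) is the modulation factor of the instrument (determined from simulations and calibration) and the phase convention accounts for the fact that photons scatter preferentially perpendicular to the polarization vector. Fitting this harmonic to the measured scattering-angle histogram yields the polarization parameters.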
To compensate for this, POLAR-2 will have a large surface area of around \(600\times 600\,\mathrm{mm}^{2}\). Its effective area for localization studies exceeds \(2000\,\mathrm{cm}^{2}\). When only taking into account photons which can be used for polarimetry, the effective area is around \(1000\,\mathrm{cm}^{2}\). This is compared with that of POLAR on the left of figure 1. In addition, in order to observe as many GRBs as possible the instrument has a field of view of half the sky. This large field of view, combined with the large effective area and the improved signal to background compared to POLAR, allows POLAR-2 to observe GRBs with fluences as low as \(10^{-8}\,\mathrm{erg}/\mathrm{cm}^{2}\).
Furthermore, the segmented nature of the detector makes it sensitive to the incoming direction of GRBs. The principle is similar to that used by Fermi-GBM, where the dependence of the effective area of the various sub-detectors on the incoming direction of the GRB can be used to infer the location of the GRB in the sky. This method was tested for POLAR in the past and presented in [4], where traditional localization methods were used to find that GRBs could be localized to within several degrees. The larger effective area of POLAR-2, combined with the larger number of sub-detectors (6400 compared to 1600 for POLAR), will likely increase this precision.
Finally, POLAR-2 will be placed on the CSS in 2025. As such, its data can be analyzed in near real time using a GPU (an NVIDIA TX2). The results of the GPU analysis can subsequently be submitted to ground using either the Beidou satellite system or the CSS real time telemetry. If the POLAR-2 data can be analyzed autonomously, this will allow alerts to be sent to ground within 2 minutes of the onset of a GRB, thereby allowing instruments such as CTA, MAGIC and HESS to repoint to the GRB location to capture the high energy emission.
## 3 Localization Methods
The POLAR and POLAR-2 GRB localization method makes use of the dependence of the effective area of the various scintillator bars on the incoming direction of the photons. For example, while a GRB occurring at zenith will result in an equal effective area for all 6400 bars, a GRB coming from the side will result in the scintillator bars on that side having a significantly larger effective area than those on the opposite side, as the latter are shadowed by other materials. This idea is illustrated in figure 2. This method is similar to that employed, for example, by Fermi-GBM or BATSE.
A very simple analysis method would be to compare the relative number of observed photons in each detector element with those predicted using Monte Carlo (MC) simulations. Comparing the observed relative rates vs those predicted through MC simulations for all possible incoming angles allows one to identify the most likely incoming angle of the GRB. A clear drawback of this method is that this relative number of counts not only depends on the incoming direction of the GRB, but also on the spectral shape of the GRB. To somewhat mitigate this, one can expand the array of MC simulation results by performing these simulations for 3 different typical GRB spectral types (hard, medium and soft), thereby producing a 3d matrix (with dimensions of spectral shape, \(\theta\) and \(\phi\)) against which the observed data can be compared using, for example, a \(\chi^{2}\) fit. This method, which is fast and relatively easy to perform, is employed on Fermi-GBM data [5] as well as on the POLAR data in [4].
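A minimal sketch of this kind of template comparison is given below (purely illustrative: the grid sizes, templates and counts are random placeholders rather than POLAR or Fermi-GBM values):

```python
import numpy as np

def chi2_localize(observed, templates):
    """observed: counts per detector element, shape (n_det,).
    templates: MC-predicted rates, shape (n_spec, n_theta, n_phi, n_det).
    Returns the (spectral type, theta, phi) grid indices minimizing chi^2
    after normalizing both the observation and the templates to unit sum."""
    obs = observed / observed.sum()
    tmpl = templates / templates.sum(axis=-1, keepdims=True)
    # Approximate variance of the normalized observed rates (Poisson counts).
    var = np.clip(observed, 1, None) / observed.sum() ** 2
    chi2 = ((tmpl - obs) ** 2 / var).sum(axis=-1)
    return np.unravel_index(np.argmin(chi2), chi2.shape)

# Toy example on a coarse grid: 3 spectral templates, 18 x 36 sky bins,
# 64 detector elements, with the "true" direction at grid point (1, 4, 10).
rng = np.random.default_rng(0)
templates = rng.uniform(0.5, 1.5, size=(3, 18, 36, 64))
observed = rng.poisson(100 * templates[1, 4, 10])
print(chi2_localize(observed, templates))
```

Because the template grid can be precomputed from simulations, the search itself reduces to a single fast array operation.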
A clear downside of this method is that the real spectral shape of the observed GRB will not perfectly match that of one of the 3 simulated GRBs, thereby inducing systematic errors in the final location result [6]. A more complex and time consuming version of this method uses the best fit location to perform a spectral fit, the result of which is subsequently used to perform a second localization fit using a simulated array produced for a spectral shape similar to the best fitted spectrum. Although this method provides a more accurate result, the issue with the systematic errors remains.

Figure 1: Left: the effective area versus initial photon energy. Green shows the effective area of POLAR, and red that of POLAR when simply increasing its size by a factor of 4. Blue shows the POLAR-2 effective area. It can be seen that POLAR-2 is significantly more sensitive at low energies thanks to the SiPM technology. Right: an exploded view of the POLAR-2 high energy polarimeter. We can see the 100 modules in black and the electronics in green.
A more sophisticated method which overcomes this issue is the BALROG method [6]. As opposed to the method described above, the BALROG analysis performs a joint localization and spectral fit on the data. By fitting both at the same time, the best-fit spectrum and location are found together, thereby mitigating issues with systematic effects. The downside of this method is that it is significantly more computationally demanding. Given advances in computing power this is not a problem for on-ground analysis, making the method preferable there; for real time analysis on a space station, however, the computational cost is a significant downside.
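The following sketch illustrates the structure of such a joint fit (purely illustrative: the power-law spectral model, the `response` callable and all parameter names are placeholders and do not correspond to the actual BALROG or POLAR-2 implementations):

```python
import numpy as np
from scipy.optimize import minimize

def joint_fit(observed, response, energies, exposure, x0):
    """Jointly fit direction and spectrum by maximizing a Poisson likelihood.
    'response(theta, phi, energies)' stands in for an instrument response
    returning an (n_det, n_energy) matrix of effective areas."""

    def neg_log_like(params):
        theta, phi, alpha, log_amp = params
        # Placeholder spectral model: a simple power law in photon energy.
        spectrum = np.exp(log_amp) * (energies / 100.0) ** alpha
        expected = exposure * response(theta, phi, energies) @ spectrum
        expected = np.clip(expected, 1e-12, None)
        # Poisson negative log-likelihood, dropping data-only constants.
        return float(np.sum(expected - observed * np.log(expected)))

    return minimize(neg_log_like, x0, method="Nelder-Mead")
```

Fitting the direction and spectral parameters simultaneously is what removes the bias of the fixed-template approach, at the price of evaluating the instrument response many times per burst.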
Although the simple \(\chi^{2}\) method can be employed in space, it is far from ideal and does not make use of the equipment available to POLAR-2. Here we therefore present a third option which makes use of Deep Learning. This method can in principle be used to perform fast, accurate analysis where systematic errors are automatically included in the results. The further advantage is that the localization calculation can be performed very quickly, especially on a GPU. The training of the model can be performed on ground using real data during the mission and updated models can be uploaded to the CSS and applied in real time to data.
## 4 HAGRID Results
The first job of the HAGRID software is to autonomously detect GRBs within the POLAR-2 light curve. For this method a Long Short-Term Memory model was trained using both POLAR background data and simulated GRBs produced using the POLAR simulation software. In total over 150'000 artificial GRBs were produced using real background data from POLAR, where the rate vs time was picked based on a polynomial shape. Artificial GRBs (with random spectral shapes and incoming directions) were subsequently placed on top of these background light curves. The energy and interactions of these GRBs were produced using MC simulations while the light curves were produced using part of the CosmoGRB package. The input data for the model to train on consisted of both the rates of the various sub-detectors and the measured energy spectra. This allows the model to not only detect transient events based on increasing rates, as is often done by eye, but to also make use of the changes in the spectral shapes as well as the relative differences in the rates of the various sub-detectors.
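A minimal sketch of such an LSTM-based trigger is shown below (an illustration of the approach only: the layer sizes, feature dimensions and training details are invented for the example and are not the actual HAGRID architecture):

```python
import torch
import torch.nn as nn

class TransientDetector(nn.Module):
    """LSTM trigger sketch: each 1 s time step provides the sub-detector
    rates concatenated with a binned energy spectrum; the output is the
    probability that a GRB is present at that time step."""

    def __init__(self, n_features, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Sequential(nn.Linear(hidden, 1), nn.Sigmoid())

    def forward(self, x):          # x: (batch, time, n_features)
        out, _ = self.lstm(x)      # out: (batch, time, hidden)
        return self.head(out)      # per-time-step GRB probability

# Hypothetical feature layout: 25 sub-detector rates + 39 spectral bins.
model = TransientDetector(n_features=64)
light_curves = torch.randn(8, 120, 64)   # 8 example sequences of 120 s
labels = torch.zeros(8, 120, 1)          # 1 where a GRB is present
loss = nn.BCELoss()(model(light_curves), labels)
loss.backward()
```

Feeding the per-time-step sub-detector rates together with a binned spectrum, rather than a single summed light curve, is what lets the network exploit the spectral and spatial information in addition to the overall rate increase.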
Figure 2: Illustration of the idea behind localization measurements of a GRB using a segmented detector. The number of photon interactions in the various detector elements (illustrated by the darkness of the elements) depends on the incoming direction of the GRB.
After training on the large simulation sample, the produced model was tested on several days of real POLAR data. Although various minor issues were found (mostly related to the POLAR detector being switched on and off around the SAA, which could be mis-identified as a GRB), the model recognized all real GRBs known to be in the studied data. An example of this can be seen in figure 3. In addition, HAGRID identified unknown GRB candidates while the false positive rate (with the exception of SAA-induced events, which can easily be filtered out) remained negligible.
Subsequent to correctly detecting a GRB, HAGRID is designed to perform spectral and localization analysis on the GRB data. As the GRB detection algorithm currently studies the GRB lightcurve in 1 second bins, the localization and spectral studies commence 1 second after the onset of the GRB. If the GRB is longer the analysis will be performed each subsequent second, on the total accumulated data, until the GRB stops.
For the spectral analysis, again a large sample of simulated GRBs (produced using the POLAR simulation software) was used to train the model. In these simulations the incoming angle of the GRB was again randomized, while the spectral shape was modeled as a Band function. The Band function parameters (\(\alpha,\beta\) and \(E_{peak}\)) were drawn from distributions as measured by Fermi-GBM and BATSE.
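For reference, the Band function used for these simulated spectra is the standard smoothly broken power law (quoted here in its conventional parametrization; the normalization \(A\) and the 100 keV pivot are the usual conventions rather than values taken from this work):

\[N(E)=A\begin{cases}\left(\frac{E}{100\,\mathrm{keV}}\right)^{\alpha}\exp\!\left(-\frac{(2+\alpha)E}{E_{peak}}\right), & E<\frac{(\alpha-\beta)E_{peak}}{2+\alpha},\\[1ex]\left(\frac{(\alpha-\beta)E_{peak}}{(2+\alpha)\,100\,\mathrm{keV}}\right)^{\alpha-\beta}e^{\beta-\alpha}\left(\frac{E}{100\,\mathrm{keV}}\right)^{\beta}, & E\geq\frac{(\alpha-\beta)E_{peak}}{2+\alpha},\end{cases}\]

where \(E_{peak}=(2+\alpha)E_{0}\) relates the peak of the \(\nu F_{\nu}\) spectrum to the \(e\)-folding energy \(E_{0}\) that appears in Figure 4.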
For the model, many options were studied using a toy MC analysis. It was found that fully connected neural networks using a mean squared error loss, with the three output parameters treated independently, performed best. The results on the POLAR simulation data are illustrated in figure 4, where the input (target) values of the three spectral parameters are shown on the x-axis and the reconstructed ones on the y-axis. As expected given the energy range of POLAR (50-500 keV), the reconstruction of the \(\alpha\) parameter is successful, while both \(E_{peak}\) and \(\beta\) are more difficult to reproduce. In detailed follow-up studies it was confirmed that the issues in this reconstruction are due to the lack of photons at energies above the \(E_{peak}\) value. As can be seen in figure 4, for low values of \(E_{peak}\) a somewhat better reconstruction is achieved.
Finally, and most importantly, a localization model is applied to the data in parallel to the spectral model. For the localization model a vast array of models was tested both on toy MC and on real POLAR data. It was concluded that a deep 2d convolutional neural network performed best on the data. Furthermore, the model was found to perform best when trained for outputting \(\theta\), \(\cos(\theta)\) and \(\sin(\phi)\). A result of the correlation between the target and reconstructed location angles for a deep convolutional neural network is shown in figure 5. The median localization error on this data set was found to be \(3.6^{\circ}\). It thereby performs better than the traditional methods used on the POLAR data in [4].

Figure 3: GRB 161218A as identified correctly by HAGRID in the POLAR data.
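The quoted median error refers to the great-circle separation between the true and reconstructed directions. For spherical angles this separation can be computed as in the following sketch (a generic utility for evaluating such a localization error, not code from the HAGRID pipeline):

```python
import numpy as np

def angular_separation(theta1, phi1, theta2, phi2):
    """Great-circle angle (radians) between directions given as spherical
    coordinates, with theta the polar angle and phi the azimuth."""
    cos_sep = (np.sin(theta1) * np.sin(theta2) * np.cos(phi1 - phi2)
               + np.cos(theta1) * np.cos(theta2))
    return np.arccos(np.clip(cos_sep, -1.0, 1.0))

# Median localization error over a set of test GRBs, in degrees:
# np.median(np.degrees(angular_separation(theta_true, phi_true,
#                                         theta_pred, phi_pred)))
```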
## 5 Conclusions and Discussion
The preliminary results of the performance of HAGRID show great promise. The model, which is currently only trained using POLAR data as no POLAR-2 data is yet available, already performs well for GRB detection, while spectral and localization studies also show good results. The latter two can however be greatly improved with further studies.
Firstly, it was found that the current training sample of 150'000 artificial GRBs is not yet sufficient for accurate training. A study of the performance of the models versus the number of training GRBs shows that increasing this number will improve both the localization and the spectral performance. This issue can easily be overcome by running more simulations, which will be performed during the coming months.
Furthermore, it was found that the normalization of the data sets, meaning normalization of the various input parameters in order to optimize the training, can be improved. This requires further studies during the coming months.
Finally, the training will start to be performed on the POLAR-2 MC data instead of on the POLAR MC. This will indicate whether the larger effective area of POLAR-2 and its larger number of detector elements can improve the localization, as is expected. Furthermore, the increased energy range of POLAR-2 (up to 800 keV compared to 500 keV for POLAR) will allow for better spectral reconstruction, especially of the \(E_{peak}\) and \(\beta\) parameters.
Figure 4: Correlation between target and predictions of model \(M_{sep}\). This model predicts the \(\alpha\) parameter quite accurately, whereas the \(\beta\) and \(E_{0}\) parameters are harder to predict. Note however that the model always predicts \(\alpha>\beta\).

Overall we currently expect to employ the HAGRID method on the CSS after the launch of POLAR-2. Initially we will need to train the model using real POLAR-2 data and using flight-data-verified MC simulations of GRBs. Once the results are optimized, we anticipate running HAGRID autonomously on POLAR-2 data several months after launch, with the aim of producing alerts with degree-level localization precision within 2 minutes of the GRB onset.
|
2302.04699
|
LUXE: A new experiment to study non-perturbative QED in electron-laser
and photon-laser collisions
|
The LUXE experiment (Laser Und XFEL Experiment) is an experiment in planning
at DESY Hamburg using the electron beam of the European XFEL. LUXE is intended
to study collisions between a high-intensity optical laser pulse and 16.5 GeV
electrons, as well as collisions between the laser pulse and high-energy
secondary photons. This will elucidate quantum electrodynamics (QED) at the
strong-field frontier, where the electromagnetic field of the laser is above
the Schwinger limit. In this regime, QED is non-perturbative. This manifests
itself in the creation of physical electron-positron pairs from the QED vacuum,
similar to Hawking radiation from black holes. LUXE intends to measure the
positron production rate in an unprecedented laser intensity regime. An
overview of the LUXE experimental setup and its challenges and progress is
given in this article, along with a discussion of the expected physics reach in
the context of testing QED in the non-perturbative regime.
|
Yee Chinn Yap
|
2023-02-09T15:34:30Z
|
http://arxiv.org/abs/2302.04699v1
|
# LUXE: A new experiment to study non-perturbative QED in electron-laser and photon-laser collisions
###### Abstract:
The LUXE experiment (Laser Und XFEL Experiment) is an experiment in planning at DESY Hamburg using the electron beam of the European XFEL. LUXE is intended to study collisions between a high-intensity optical laser pulse and 16.5 GeV electrons, as well as collisions between the laser pulse and high-energy secondary photons. This will elucidate quantum electrodynamics (QED) at the strong-field frontier, where the electromagnetic field of the laser is above the Schwinger limit. In this regime, QED is non-perturbative. This manifests itself in the creation of physical electron-positron pairs from the QED vacuum, similar to Hawking radiation from black holes. LUXE intends to measure the positron production rate in an unprecedented laser intensity regime. An overview of the LUXE experimental setup and its challenges and progress is given in this article, along with a discussion of the expected physics reach in the context of testing QED in the non-perturbative regime.
## 1 Introduction
Laser Und XFEL Experiment (LUXE) [1] is a proposed new experiment at DESY and European XFEL in Hamburg, Germany aiming to study non-perturbative quantum electrodynamics (QED) in electron-laser and photon-laser collisions. The high-power laser and the electron beam from XFEL collide at an angle of 17 degrees at a frequency of 1 Hz, determined by the laser frequency. The electron beam has a frequency of 10 Hz, allowing 9 out of 10 of the electron bunches to be used for background studies. LUXE uses one electron bunch out of 2700 from the XFEL with each bunch containing \(1.5\times 10^{9}\) electrons of 16.5 GeV. Two running modes are planned at LUXE: \(e\)-laser where the electron beam from XFEL is collided directly with the laser and \(\gamma\)-laser where the electron beam is first converted into photon beam via Bremsstrahlung with a target before colliding with the laser.
## 2 Physics
Figure 1 shows the two processes of interest at LUXE: the non-linear Compton scattering process of a photon radiated from the electron in the laser field,
\[e^{-}+n\gamma_{L}\to e^{-}+\gamma, \tag{1}\]
where \(n\) is the number of laser photons \(\gamma_{L}\) participating in the process; and the non-linear Breit-Wheeler pair creation,
\[\gamma+n\gamma_{L}\to e^{+}+e^{-}, \tag{2}\]
from the interaction of a photon in the laser field.
In the Breit-Wheeler process, the incoming photon can either be the Bremsstrahlung source photon in \(\gamma\)-laser mode or produced from the first Compton process in the \(e\)-laser mode. In \(e\)-laser interaction, both processes can occur either in two steps or in a single step (\(e^{-}+n\gamma_{L}\to e^{-}e^{+}e^{-}\)), collectively known as the non-linear trident process. The planned \(\gamma\)-laser mode allows real photon interactions with laser photons to be studied, as opposed to the two-step trident process.
An important parameter that characterises these interactions is \(\xi\), the laser field intensity parameter or the charge-field coupling, defined as
\[\xi=\frac{m_{e}E_{L}}{\omega_{L}E_{cr}}, \tag{3}\]
Figure 1: Schematic diagrams for the non-linear Compton Scattering process and the non-linear Breit-Wheeler process.
where \(m_{e}\) is the electron mass, \(E_{L}\) is the laser field strength, \(\omega_{L}\) is the frequency of the laser and \(E_{cr}\) is the critical field strength, also known as the Schwinger limit defined as \(E_{cr}=m_{e}^{2}c^{3}/e\hbar\).
Another useful parameter is the quantum non-linearity parameter \(\chi\) defined as
\[\chi_{i}=\frac{\epsilon_{i}}{m_{e}}\frac{E_{L}}{E_{cr}}(1+\beta\cos\theta),\qquad i=e,\gamma \tag{4}\]
where \(\epsilon\) is the particle (electron or photon) energy, \(\theta\) the collision angle and \(\beta\) the speed in units of \(c\).
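Combining Eqs. (3) and (4) (with \(\hbar=c=1\) and \(\beta\approx 1\) for the 16.5 GeV electrons) gives \(\chi_{e}\approx\xi\,\epsilon_{e}\omega_{L}(1+\cos\theta)/m_{e}^{2}\). The short numerical check below, which assumes the 800 nm (1.55 eV) laser photons and the 17 degree crossing angle quoted earlier, roughly reproduces the \(\chi_{e}\) values listed in Table 1.

```python
# Rough cross-check of Eqs. (3)-(4): chi_e = (eps_e/m_e)*(E_L/E_cr)*(1+beta*cos(theta)),
# with E_L/E_cr = xi*omega_L/m_e, so chi_e ~ xi * eps_e*omega_L/m_e^2 * (1+cos(theta)) for beta~1.
# Natural units (hbar = c = 1); all energies in eV. Illustrative only.
import math

m_e     = 0.511e6     # electron mass [eV]
eps_e   = 16.5e9      # XFEL electron energy [eV]
omega_L = 1.55        # 800 nm laser photon energy [eV]
theta   = math.radians(17.0)   # crossing angle

def chi_e(xi):
    return xi * eps_e * omega_L / m_e**2 * (1.0 + math.cos(theta))

for xi in (7.9, 23.6):         # phase-0 and phase-1 peak xi values from Table 1
    print(f"xi = {xi:5.1f}  ->  chi_e = {chi_e(xi):.2f}")
# gives chi_e of roughly 1.5 for xi = 7.9 and roughly 4.5 for xi = 23.6
```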
The processes are perturbative at low values of \(\xi\) with the probability for the process involving \(n\) laser photons given by \(\xi^{2n}\) for \(\xi\ll 1\). At larger \(\xi\), one needs to consider contributions from all orders. LUXE aims to make precise measurements of the interactions in a transition from the perturbative to the non-perturbative regime.
Figure 2 shows the QED parameter space of \(\chi\) vs \(\xi\) and the reach of several ongoing or planned experiments including LUXE. LUXE is planned in two phases: the first phase uses a 40 TW laser, while the second phase uses an upgraded laser. LUXE spans a range of \(\xi\) and \(\chi\) parameters, from the perturbative to the non-perturbative region. E144 was an experiment at SLAC in the 1990s that reached \(\chi=0.25\) and \(\xi=0.4\), still within the perturbative regime but already with observable non-linear effects, where the trident process was observed. Other proposed experiments are E320 at SLAC and ELI-NP in Romania, while Astra-Gemini in the UK is ongoing. For a recent review of strong-field QED physics, see Ref. [2].
The abundant photons produced in LUXE, mainly through the non-linear Compton scattering, can be used to search for physics beyond the Standard Model (BSM). A scenario considered is the creation of Axion-like particles (ALPs) produced in the LUXE photon dump via the Primakoff effect. The ALPs would then decay into two photons and can be detected via dedicated detectors. Ref. [3] studies the sensitivity of such a search.
Figure 2: \(\chi\) vs \(\xi\) for a selection of experiments and facilities. For LUXE, three beam energies are shown as isolines, and two laser focus spot sizes are highlighted for the phase-0 (40 TW) laser and one for the phase-1 (350 TW) laser. Reproduced from Ref. [1].
## 3 Measurements
The following measurements as a function of \(\xi\) are planned.
* The position of the Compton edge, determined from the electron and photon energy spectra in \(e\)-laser runs. Figure 3 shows the photon energy spectrum for different \(\xi\) values; the edge is given by the first kink in the spectrum and shifts as a function of \(\xi\) due to the effective mass gained by the electron, \(m_{*}=m_{e}\sqrt{1+\xi^{2}}\).
* The positron rate, which spans many orders of magnitude, as shown in the right panel of Figure 3. The positron production rate is equal to the rate of the Breit-Wheeler process.
* The number of photons radiated per electron in \(e\)-laser mode.
## 4 Laser
A Titanium Sapphire laser based on chirped pulse amplification technology is planned. The photon wavelength is 800 nm, corresponding to 1.55 eV in energy. Different \(\xi\) values can be reached by changing the focus of the laser. Exceptional shot-to-shot stability in the laser intensity is needed in LUXE.
Table 1 outlines a few quantities related to the laser system in phase-0 and phase-1. The higher laser power in phase-1 allows higher values of \(\xi\) and \(\chi\) to be reached. The variation of these parameters is achieved by changing the laser focal spot waist, where the highest intensity is obtained by focussing the laser to 3 \(\mu\)m.
## 5 Experimental setup
Figure 4 shows the layout of the LUXE detectors in the two setups. In the \(e\)-laser setup, a dipole magnet is placed after the interaction point to separate the particles. Electrons and positrons
Figure 3: (Left) Photon emission rate as a function of the photon energy for different values of \(\xi\). (Right) Positron production rate as a function of \(\xi\) for \(e\)-laser and \(\gamma\)-laser runs in phase-0 and phase-1. Reproduced from Ref. [1].
are deflected in opposite directions and measured in dedicated detectors, while the photons travel straight. The expected fluxes of these particles vary, e.g. the electron flux is expected to be around \(10^{9}\) while the number of positrons ranges from \(10^{-3}\) to \(10^{6}\). The electron and positron detection systems use different technologies due to the difference in expected flux. The positron side uses a precision tracker and a calorimeter while the electron side uses a scintillation screen and a Cherenkov detector. The photon detection system is downstream of the electron and positron detection systems, where a target converts the photons into electron-positron pairs before they are measured with scintillation screens, a gamma profiler and a gamma flux monitor. A BSM detector can be placed at the end after the photon dump.
In the \(\gamma\)-laser setup, after the electron beam hits the converter target, a dipole magnet is placed to deflect the electron beam into a dump while the photon beam travels on to interact with the laser. A dipole magnet is again placed after the interaction point to split the electron-positron pairs as done for the \(e\)-laser case. In this setup, the expected electron and positron fluxes are the same and are much lower than in the \(e\)-laser case. Hence, the electron detection system also uses a tracking system similar to the positron detector.
## 6 Conclusions
In summary, LUXE will study strong-field QED in an unprecedented regime using a high-intensity optical laser pulse and high-energy electrons from the XFEL electron beam. The high
\begin{table}
\begin{tabular}{l c c} \hline & Phase-0 & Phase-1 \\ \hline Laser power (TW) & 40 & 350 \\ Peak intensity in focus (\(\times 10^{20}\) W/cm\({}^{2}\)) & \(<1.33\) & \(<12\) \\ Dimensionless peak intensity \(\xi\) & \(<7.9\) & \(<23.6\) \\ Quantum parameter \(\chi_{e}\) for \(\epsilon_{e}=16.5\) GeV & \(<1.5\) & \(<4.45\) \\ Laser focal spot waist (\(\mu\)m) & \multicolumn{2}{c}{\(\geq 3\)} \\ Laser pulse duration (fs) & \multicolumn{2}{c}{30} \\ \hline \end{tabular}
\end{table}
Table 1: Parameters for the laser system as planned for the two phases of the LUXE experiment.
Figure 4: Schematic layouts for the \(e\)-laser and \(\gamma\)-laser setup. Reproduced from Ref. [1].
photon flux in LUXE can also be used in a BSM physics programme with competitive sensitivity to other experiments. LUXE has passed the stage-0 critical approval from the DESY management. The start of data taking is planned for 2026.
## Acknowledgments
We thank the DESY technical staff for continuous assistance and the DESY directorate for their strong support and the hospitality they extend to the non-DESY members of the collaboration. This work has benefited from computing services provided by the German National Analysis Facility (NAF) and the Swedish National Infrastructure for Computing (SNIC).
|
2304.00771
|
Continuous-time Analysis of Anchor Acceleration
|
Recently, the anchor acceleration, an acceleration mechanism distinct from
Nesterov's, has been discovered for minimax optimization and fixed-point
problems, but its mechanism is not understood well, much less so than Nesterov
acceleration. In this work, we analyze continuous-time models of anchor
acceleration. We provide tight, unified analyses for characterizing the
convergence rate as a function of the anchor coefficient $\beta(t)$, thereby
providing insight into the anchor acceleration mechanism and its accelerated
$\mathcal{O}(1/k^2)$-convergence rate. Finally, we present an adaptive method
inspired by the continuous-time analyses and establish its effectiveness
through theoretical analyses and experiments.
|
Jaewook J. Suh, Jisun Park, Ernest K. Ryu
|
2023-04-03T07:49:55Z
|
http://arxiv.org/abs/2304.00771v2
|
# Continuous-time Analysis of Anchor Acceleration
###### Abstract
Recently, the anchor acceleration, an acceleration mechanism distinct from Nesterov's, has been discovered for minimax optimization and fixed-point problems, but its mechanism is not understood well, much less so than Nesterov acceleration. In this work, we analyze continuous-time models of anchor acceleration. We provide tight, unified analyses for characterizing the convergence rate as a function of the anchor coefficient \(\beta(t)\), thereby providing insight into the anchor acceleration mechanism and its accelerated \(\mathcal{O}(1/k^{2})\)-convergence rate. Finally, we present an adaptive method inspired by the continuous-time analyses and establish its effectiveness through theoretical analyses and experiments.
## 1 Introduction
Nesterov acceleration (Nesterov, 1983) is foundational to first-order optimization theory, but the mechanism and its convergence proof are not transparent. One approach to better understand the mechanism is the continuous-time analysis: derive an ODE model of the discrete-time algorithm and analyze the continuous-time dynamics (Su et al., 2014, 2016). This approach provides insight into the accelerated dynamics and has led to a series of follow-up work (Wibisono et al., 2016; Shi et al., 2021; Even et al., 2021).
Recently, a new acceleration mechanism, distinct from Nesterov's, has been discovered. This _anchor acceleration_ for minimax optimization and fixed-point problems (Kim, 2021; Yoon and Ryu, 2021; Park and Ryu, 2022) has been an intense subject of study, but its mechanism is understood much less than Nesterov acceleration. The various analytic techniques developed to understand Nesterov acceleration, including continuous-time analyses, have only been applied in a very limited manner (Ryu et al., 2019).
Contribution.In this work, we present continuous-time analyses of anchor acceleration. The continuous-time model is the differential inclusion
\[\dot{X}\in-\mathbf{A}(X)-\beta(t)(X-X_{0})\]
with initial condition \(X(0)=X_{0}\in\operatorname{dom}\mathbf{A}\), maximal monotone operator \(\mathbf{A}\), and scalar-valued function \(\beta(t)\). The case \(\beta(t)=\frac{1}{t}\) corresponds to the prior anchor-accelerated methods APPM (Kim, 2021), EAG (Yoon and Ryu, 2021), and FEG (Lee and Kim, 2021).
We first establish that the differential inclusion is well-posed, despite the anchor coefficient \(\beta(t)\) blowing up at \(t=0\). We then provide tight, unified analyses for characterizing the convergence rate as a function of the anchor coefficient \(\beta(t)\). This is the first formal and rigorous treatment of this anchored dynamics, and it provides insight into the anchor acceleration mechanism and its accelerated \(\mathcal{O}(1/k^{2})\)-convergence rate. Finally, we present an adaptive method inspired by the continuous-time analyses and establish its effectiveness through theoretical analyses and experiments.
### Preliminaries and notation
We review standard definitions and set up the notation.
Monotone and set-valued operators.We follow the standard definitions of Bauschke and Combettes (2017); Ryu and Yin (2022). For the underlying space, consider \(\mathbb{R}^{n}\) with standard inner product \(\langle\cdot,\cdot\rangle\) and norm \(\lVert\cdot\rVert\). Define domain of \(\mathbf{A}\) as \(\operatorname{dom}\mathbf{A}=\{x\in\mathbb{R}^{n}\mid\mathbf{A}x\neq\emptyset\}\). We say \(\mathbf{A}\) is an operator on \(\mathbb{R}^{n}\) and write \(\mathbf{A}\colon\mathbb{R}^{n}\rightrightarrows\mathbb{R}^{n}\) if \(\mathbf{A}\) maps a point in \(\mathbb{R}^{n}\) to a subset of \(\mathbb{R}^{n}\). We say \(\mathbf{A}\colon\mathbb{R}^{n}\rightrightarrows\mathbb{R}^{n}\) is monotone if
\[\langle\mathbf{A}x-\mathbf{A}y,x-y\rangle\geq 0,\qquad\forall x,y\in\mathbb{R} ^{n},\]
i.e., if \(\langle u-v,x-y\rangle\geq 0\) for all \(u\in\mathbf{A}x\) and \(v\in\mathbf{A}y\). For \(\mu\in(0,\infty)\), say \(\mathbf{A}\colon\mathbb{R}^{n}\rightrightarrows\mathbb{R}^{n}\) is \(\mu\)-strongly monotone if
\[\langle\mathbf{A}x-\mathbf{A}y,x-y\rangle\geq\mu\lVert x-y\rVert^{2},\qquad \forall x,y\in\mathbb{R}^{n}.\]
Write \(\operatorname{Gra}\mathbf{A}=\{(x,u)\mid u\in\mathbf{A}x\}\) for the graph of \(\mathbf{A}\). An operator \(\mathbf{A}\) is maximally monotone if there is no other monotone \(\mathbf{B}\) such that \(\operatorname{Gra}\mathbf{A}\subset\operatorname{Gra}\mathbf{B}\) properly, and is maximally \(\mu\)-strongly monotone if there is no other \(\mu\)-strongly monotone \(\mathbf{B}\) such that \(\operatorname{Gra}\mathbf{A}\subset\operatorname{Gra}\mathbf{B}\) properly.
For \(L\in(0,\infty)\), single-valued operator \(\mathbf{T}\colon\mathbb{R}^{n}\to\mathbb{R}^{n}\) is \(L\)-Lipschitz if
\[\|\mathbf{T}x-\mathbf{T}y\|\leq L\|x-y\|,\qquad\forall x,y\in\mathbb{R}^{n}.\]
Write \(\mathbf{J}_{\mathbf{A}}=(\mathbf{I}+\mathbf{A})^{-1}\) for the resolvent of \(\mathbf{A}\), while \(\mathbf{I}\colon\mathbb{R}^{n}\to\mathbb{R}^{n}\) is the identity operator. When \(\mathbf{A}\) is maximally monotone, it is well known that \(\mathbf{J}_{\mathbf{A}}\) is single-valued with \(\operatorname{dom}\mathbf{J}_{\mathbf{A}}=\mathbb{R}^{n}\).
We say \(x_{\star}\in\mathbb{R}^{n}\) is a zero of \(\mathbf{A}\) if \(0\in\mathbf{A}x_{\star}\). We say \(y_{\star}\) is a fixed-point of \(\mathbf{T}\) if \(\mathbf{T}y_{\star}=y_{\star}\). Write \(\operatorname{Zer}\mathbf{A}\) for the set of all zeros of \(\mathbf{A}\) and \(\operatorname{Fix}\mathbf{T}\) for the set of all fixed-points of \(\mathbf{T}\).
Monotonicity with continuous curves.If \(\mathbf{A}\) is a differentiable monotone operator and \(X\colon[0,\infty)\to\mathbb{R}^{n}\) is a differentiable curve, then taking limit \(h\to 0\) of
\[\frac{1}{h^{2}}\left\langle\mathbf{A}(X(t+h))-\mathbf{A}(X(t)),X(t+h)-X(t) \right\rangle\geq 0\]
leads to
\[\left\langle\frac{d}{dt}\mathbf{A}(X(t)),\dot{X}(t)\right\rangle\geq 0. \tag{1}\]
Similarly if \(\mathbf{A}\) is \(\mu\)-strongly monotone, then
\[\left\langle\frac{d}{dt}\mathbf{A}(X(t)),\dot{X}(t)\right\rangle\geq\mu\left\| \dot{X}(t)\right\|^{2}. \tag{2}\]
### Prior work
Acceleration for smooth convex functions in the discrete setting.There is a rich body of research on acceleration for smooth convex functions. Nesterov (1983) introduced the accelerated gradient method (AGM), which achieves a faster \(\mathcal{O}(1/k^{2})\) rate than the \(\mathcal{O}(1/k)\) rate of gradient descent (Cauchy, 1847) in reducing the function value. The optimized gradient method (OGM) (Kim & Fessler, 2016) improved AGM's rate by a constant factor and is proven to be optimal (Drori, 2017). For the smooth strongly convex setup, the strongly convex AGM (Nesterov, 2004) achieves an accelerated rate, and further improvements were studied (Van Scoy et al., 2018; Park et al., 2022; Taylor & Drori, 2022; Salim et al., 2022). Recently, OGM-G (Kim & Fessler, 2021) was introduced as an accelerated method reducing the squared gradient magnitude for smooth convex minimization.
Acceleration for smooth convex functions in the continuous setting.Continuous-time analysis of Nesterov acceleration has been thoroughly studied as well. Su et al. (2014) introduced an ODE model of AGM, \(\ddot{X}(t)+\frac{r}{t}\dot{X}(t)+\nabla f(X(t))=0\), providing the \(f(X(t))-f_{\star}\in\mathcal{O}\left(1/t^{2}\right)\) rate for \(r\geq 3\). Attouch et al. (2018) improved the constant of the bound for \(r>3\) and proved convergence of the trajectories. Attouch et al. (2019) achieved an \(\mathcal{O}\left(t^{-2r/3}\right)\) rate for \(0<r<3\). Apidopoulos et al. (2018) generalized their results to differential inclusions with non-differentiable convex functions. Furthermore, a wide range of variations of the AGM ODE has been studied (Attouch & Cabot, 2017; Attouch et al., 2018; Aujol et al., 2019; Bot et al., 2020; Attouch & Laszlo, 2021; Attouch et al., 2021; Bot et al., 2021). Also, applications to the monotone inclusion problem were studied by Attouch & Peypouquet (2019); Attouch & Laszlo (2020); Bot & Hulett (2022).
Motivated by the above continuous-time analyses of accelerated methods, tools for analyzing such ODEs have been developed further. Wibisono et al. (2016) and Wilson et al. (2021) adopted Lagrangian mechanics and introduced the first and second Bregman Lagrangians to provide a unified analysis for a generalized family of ODEs, where the latter provided an analysis of strongly convex AGM. Systematic approaches for obtaining Lyapunov functions exploiting Hamiltonian mechanics (Diakonikolas & Jordan, 2021) and a dilated coordinate system (Suh et al., 2022) were proposed, and an analysis of OGM-G was provided by the dilated-coordinate framework. Different forms of continuous-time models, such as the high-resolution ODE (Shi et al., 2021) and the continuized framework (Even et al., 2021), were developed.
On the other hand, another type of acceleration called _anchor acceleration_ has recently gained attention. As Yoon & Ryu (2022) emphasized, many recently discovered accelerated methods for both minimax optimization and fixed-point problems are based on anchor acceleration.
Fixed-point problem.The history of studies on the fixed-point problem dates back to the work of Banach (1922), which established that the Picard iteration with a contractive operator converges. The Krasnosel'skii-Mann iteration (KM) (Krasnosel'skii, 1955; Mann, 1953) was introduced as a generalization of the Picard iteration. Convergence of the KM iteration with general nonexpansive operators was proven by Martinet (1972). For the iteration of Halpern (Halpern, 1967), convergence for a wide choice of parameters was shown by Wittmann (1992).
The squared norm \(\left\|y_{k}-\mathbf{T}y_{k}\right\|^{2}\) of the fixed-point residual is a common error measure for fixed-point problems. The KM iteration was shown to exhibit an \(\mathcal{O}(1/k)\) rate (Cominetti et al., 2014; Liang et al., 2016; Bravo & Cominetti, 2018) and an \(o(1/k)\) rate (Baillon & Bruck, 1992; Matsushita, 2017). For the Halpern iteration, an \(\mathcal{O}(1/(\log k)^{2})\) rate was established by Leustean (2007) and then improved to an \(\mathcal{O}(1/k)\) rate by Kohlenbach (2011). The first accelerated \(\mathcal{O}(1/k^{2})\) rate was achieved by Sabach & Shtern (2017), and the constant was improved by Lieder (2021) by a factor of 16.
It is known that there is an equivalence between solving fixed-point problems and solving monotone inclusion problems (Minty, 1962; Eckstein & Bertsekas, 1992;
Park & Ryu, 2022). The proximal point method (PPM) (Martinet, 1970) achieves an \(\mathcal{O}\left(1/k\right)\) rate in terms of \(\left\|\tilde{\mathbf{A}}\mathbf{x}_{k}\right\|^{2}\) (Gu & Yang, 2020). The accelerated proximal point method (APPM) (Kim, 2021) improved this to an accelerated \(\mathcal{O}\left(1/k^{2}\right)\) rate. Park & Ryu (2022) showed that APPM is an exactly optimal method for this problem and provided an exactly optimal method for \(\mu\)-strongly monotone operators named OS-PPM, which achieves an \(\mathcal{O}\left(1/e^{4\mu k}\right)\) rate. The optimal methods APPM and OS-PPM are based on anchor acceleration (Yoon & Ryu, 2022).
Minimax problems.Minimax optimization problems of the form \(\min_{x}\max_{y}\mathbf{L}(x,y)\) have recently gained attention in the machine learning community. One commonly considered theoretical setting is the smooth convex-concave setup, with the squared gradient norm as the error measure. In terms of \(\left\|\partial\mathbf{L}(x,y)\right\|^{2}\), the classical EG (Solodov & Svaiter, 1999) and OG (Popov, 1980; Rakhlin & Sridharan, 2013; Daskalakis et al., 2018) were shown to achieve an \(\mathcal{O}\left(1/k\right)\) rate (Gorbunov et al., 2022). SGDA (Ryu et al., 2019) achieved an \(\mathcal{O}\left(1/k^{2-2p}\right)\) rate for \(p>1/2\) and introduced the term _anchor_. By introducing a parameter-free Halpern-type method, Diakonikolas (2020) achieved \(\mathcal{O}(\log k/k^{2})\). Recently, EAG (Yoon & Ryu, 2021) first achieved the accelerated \(\mathcal{O}(1/k^{2})\) rate with anchor acceleration, followed by FEG (Lee & Kim, 2021) and the anchored Popov's scheme (Tran-Dinh & Luo, 2021). Fast OGDA (Bot et al., 2022) also achieved an accelerated \(o(1/k^{2})\) rate. When \(\partial\mathbf{L}\) is furthermore strongly monotone with condition number \(\kappa\), SM-EAG+ (Yoon & Ryu, 2022) achieved an accelerated \(\mathcal{O}\left(1/e^{2k\kappa}\right)\) rate.
However, continuous-time analysis of anchor acceleration is, to the best of our knowledge, insufficient. Continuous-time analyses of acceleration for the monotone inclusion problem were studied by Bot et al. (2022); Lin & Jordan (2023), but they did not consider anchor acceleration. Ryu et al. (2019) considered a continuous-time analysis of anchor acceleration, but only for the limited case \(\dot{X}(t)=-\mathbf{A}(X(t))-\frac{\gamma}{t}(X-X_{0})\) with \(\gamma\geq 1\). In this paper, we provide a unified continuous-time analysis of anchor acceleration with a generalized anchor coefficient.
## 2 Derivation of differential inclusion model of anchor acceleration
### Anchor ODE
Suppose \(\mathbf{A}\colon\mathbb{R}^{n}\rightrightarrows\mathbb{R}^{n}\) is a maximal monotone operator and \(\beta:(0,\infty)\rightarrow[0,\infty)\) is a twice differentiable function. Consider differential inclusion
\[\dot{X}(t)\in-\mathbf{A}(X(t))-\beta(t)(X(t)-X_{0}) \tag{3}\]
with initial condition \(X(0)=X_{0}\in\mathrm{dom}\left(\mathbf{A}\right)\). We refer to this as the _anchor ODE_. 1 We say \(X\colon[0,\infty)\rightarrow\mathbb{R}^{n}\) is a solution, if it is absolutely continuous and satisfies (3) for \(t\in(0,\infty)\) almost everywhere.
Footnote 1: Strictly speaking, this is a differential inclusion, not a differential equation, but we nevertheless refer to it as an ODE.
Denote \(S\) as the subset of \([0,\infty)\) on which \(X\) satisfies the differential inclusion. Define
\[\tilde{\mathbf{A}}(X(t))=-\dot{X}(t)-\beta(t)(X(t)-X_{0})\]
for \(t\in S\). Since \(\tilde{\mathbf{A}}(X(t))\in\mathbf{A}(X(t))\) for \(t\in S\), we say \(\tilde{\mathbf{A}}\) is a _selection_ of \(\mathbf{A}\) for \(t\in S\). If \(\left\|\tilde{\mathbf{A}}(X(t))\right\|\) is bounded on all bounded subsets of \(S\), then we can extend \(\tilde{\mathbf{A}}\) to \([0,\infty)\) while retaining certain favorable properties. We discuss the technical details of this extension in Appendix D.1. The statements of Section 3 are stated with this extension.
### Derivation from discrete methods.
We now show that the following instance of the anchor ODE
\[\dot{X}(t)=-\mathbf{A}(X(t))-\frac{1}{t}(X(t)-X_{0}), \tag{4}\]
where \(X(0)=X_{0}\) is the initial condition and \(\mathbf{A}\colon\mathbb{R}^{n}\rightarrow\mathbb{R}^{n}\) is a continuous operator, is a continuous-time model of APPM (Kim, 2021), EAG (Yoon & Ryu, 2021), and FEG (Lee & Kim, 2021), which are accelerated methods for monotone inclusion and minimax problems.
Consider APPM with operator \(h\mathbf{A}\)
\[x^{k} =\mathbf{J}_{h\mathbf{A}}y^{k-1}\] \[y^{k} =\frac{k}{k+1}(2x^{k}-y^{k-1})+\frac{1}{k+1}y^{0} \tag{5}\]
with initial condition \(y^{0}=x^{0}\). Assume \(\mathbf{A}\colon\mathbb{R}^{n}\rightarrow\mathbb{R}^{n}\) is continuous. Using \(y^{k-1}=x^{k}+h\mathbf{A}x^{k}\), we get
\[x^{k+1}+h\mathbf{A}x^{k+1}=\frac{k}{k+1}\left(x^{k}-h\mathbf{A}x^{k}\right)+ \frac{1}{k+1}x^{0}.\]
Multiplying both sides by \(k+1\) and reorganizing, we get
\[h(k+1)\frac{x^{k+1}-x^{k}}{h} +h(k+1)(\mathbf{A}x^{k+1}-\mathbf{A}x^{k})\] \[+(x^{k}-x^{0})+h\left(2k+1\right)\mathbf{A}x^{k}=0.\]
Identifying \(x^{0}=X_{0}\), \(2hk=t\), and \(x^{k}=X(t)\), we have \(\frac{x^{k+1}-x^{k}}{2h}=\dot{X}(t)+\mathcal{O}\left(h\right)\) and
\[t\dot{X} +\frac{t}{2}\left(\mathbf{A}(X(t+h))-\mathbf{A}(X(t))\right)\] \[+(X(t)-X_{0})+t\mathbf{A}(X(t))+\mathcal{O}\left(h\right)=0.\]
Taking limit \(h\to 0\), the second term vanishes and we get the anchor ODE (4). The correspondence with EAG and FEG are provided in Appendix C.4.
The following theorem establishes a rigorous correspondence between APPM and the anchor ODE for general maximal monotone operators.
**Theorem 2.1**.: _Let \(\mathbf{A}\) be a (possibly set-valued) maximal monotone operator and assume \(\mathrm{Zer}\mathbf{A}\neq\emptyset\). Let \(x^{k}\) be the sequence generated by APPM (5) and \(X\) be the solution of the differential inclusion (3) with \(\beta(t)=\frac{1}{t}\). For all fixed \(T>0\),_
\[\lim_{h\to 0+}\max_{0\leq k\leq\frac{T}{2h}}\left\|x^{k}-X(2kh)\right\|=0.\]
We provide the proof in Appendix C.2.
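As a small numerical illustration (not part of the formal development), the sketch below runs APPM (5) for the monotone linear operator \(A=\left(\begin{smallmatrix}0&1\\ -1&0\end{smallmatrix}\right)\), whose resolvent \(\mathbf{J}_{h\mathbf{A}}=(I+hA)^{-1}\) is a plain matrix inverse; the step size and iteration counts are arbitrary choices.

```python
# Numerical sketch of APPM (5) with the monotone linear operator A = [[0,1],[-1,0]],
# for which the resolvent J_{hA} = (I + hA)^{-1} is a 2x2 matrix inverse.
# We track ||A x^k||^2, which should decay like O(1/k^2).
import numpy as np

A = np.array([[0.0, 1.0], [-1.0, 0.0]])
h = 0.1
J = np.linalg.inv(np.eye(2) + h * A)       # resolvent of hA

x0 = np.array([1.0, 0.0])
y = x0.copy()
for k in range(1, 2001):
    x = J @ y                               # x^k = J_{hA} y^{k-1}
    y = k / (k + 1) * (2 * x - y) + 1.0 / (k + 1) * x0
    if k in (10, 100, 1000, 2000):
        res = np.linalg.norm(A @ x) ** 2
        print(f"k = {k:5d}   ||A x^k||^2 = {res:.3e}   k^2 * ||A x^k||^2 = {k * k * res:.3f}")
```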
### Existence of the solution for \(\beta(t)=\frac{\gamma}{t^{p}}\)
To get further insight into the anchor acceleration, we generalize anchor coefficient to \(\beta(t)=\frac{\gamma}{t^{p}}\) for \(p,\gamma>0\). We first establish the uniqueness and existence of the solution.
**Theorem 2.2**.: _Consider (3) with \(\beta(t)=\frac{\gamma}{t^{p}}\), i.e._
\[\dot{X}(t)\in-\mathbf{A}(X(t))-\frac{\gamma}{t^{p}}(X(t)-X_{0}). \tag{6}\]
_for \(p,\gamma>0\). Then the solution of (6) exists and is unique._
We provide the proof in Appendix A.
### Additional properties of anchor ODE
We state a regularity lemma for the differential inclusion (3), which we believe may be of independent interest. In particular, we use this result several times throughout our proofs.
**Lemma 2.3**.: _Let \(X(\cdot)\) and \(Y(\cdot)\) be solutions of the differential inclusion (3) with initial values and anchors \(X_{0}\) and \(Y_{0}\), respectively. Then for all \(t\in[0,\infty)\),_
\[\left\|X(t)-Y(t)\right\|\leq\left\|X_{0}-Y_{0}\right\|.\]
We provide the proof in Appendix A.1.
Boundedness of trajectories is an immediate corollary of Lemma 2.3. Specifically, suppose \(X(\cdot)\) is the solution of differential inclusion (3) with initial value \(X_{0}\). Then for all \(X_{\star}\in\mathrm{Zer}\mathbf{A}\) and \(t\in[0,\infty)\),
\[\left\|X(t)-X_{\star}\right\|\leq\left\|X_{0}-X_{\star}\right\|.\]
This follows from setting \(Y_{0}=X_{\star}\) in Lemma 2.3.
Let \(\beta\) be the anchor coefficient function of (3). Define \(C\colon[0,\infty)\to\mathbb{R}\) as \(C(t)=e^{\int_{v}^{t}\beta(s)ds}\) for some \(v\in[0,\infty]\). Note that \(\dot{C}=C\beta\) and \(C\) is unique up to a scalar multiple. We call \(\mathcal{O}\left(\beta(t)\right)\) the _vanishing speed_ and \(\mathcal{O}\left(\frac{1}{C(t)}\right)\) the _contracting speed_, and we describe their trade-off in the following.
Loosely speaking, the _contracting speed_ describes how fast the anchor term alone contracts the dynamical system. Consider \(\dot{X}(t)=-\beta(t)(X(t)-a)\) for \(a\in\mathbb{R}^{n}\), a system with only the anchor. Then \(X(t)=\frac{C(0)}{C(t)}(X(0)-a)+a\) is the solution, so the flow contracts towards the anchor \(a\) at rate \(\frac{1}{C(t)}\). Intuitively speaking, this contracting behavior leads to stability and convergence. On the other hand, the anchor must eventually vanish, since our goal is to converge to an element in \(\mathrm{Zer}\mathbf{A}\), not the anchor. Thus the _vanishing speed_ must be fast enough to not slow down the convergence of the flow to \(\mathrm{Zer}\mathbf{A}\).
\begin{table}
\begin{tabular}{c|c c c c} \hline \hline Case & \(p=1\), \(\gamma\geq 1\) & \(p=1\), \(\gamma<1\) & \(p<1\) & \(p>1\) \\ \hline \(\left\|\tilde{\mathbf{A}}(X(t))\right\|^{2}\) & \(\mathcal{O}\left(\frac{1}{t^{2}}\right)\) & \(\mathcal{O}\left(\frac{1}{t^{2\gamma}}\right)\) & \(\mathcal{O}\left(\frac{1}{t^{2p}}\right)\) & \(\mathcal{O}\left(1\right)\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Convergence rates of Theorem 3.1.
This observation is captured in Figure 1. Consider the monotone linear operator \(\mathsf{A}=\left(\begin{smallmatrix}0&1\\ -1&0\end{smallmatrix}\right)\) on \(\mathbb{R}^{2}\) and \(\beta(t)=\frac{\gamma}{t^{p}}\) with \(\gamma=1\) and \(p>0\). Note that if there is no anchor, the ODE reduces to \(\dot{X}=-\mathsf{A}(X)\), which does not converge (Goodfellow, 2016, Chapter 8.2). Figure 1 shows that with \(p>1\), the anchor vanishes too early, before the flow has contracted enough to yield a converging flow. With \(p<1\), the flow does converge, but the anchor vanishes too late, slowing down the convergence. With \(p=1\), the convergence is fastest.
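A rough numerical version of this experiment is sketched below: the anchor ODE is integrated with SciPy for \(p\in\{0.5,1,1.5\}\), starting at a small \(t_{0}>0\) to avoid the blow-up of \(\beta\) at \(t=0\); the tolerances, horizon and starting time are arbitrary choices.

```python
# Numerical sketch of the anchor ODE dX/dt = -A X - (gamma/t^p)(X - X0) for the
# monotone linear operator A = [[0,1],[-1,0]] with gamma = 1, illustrating the
# vanishing-/contracting-speed trade-off. We start at a small t0 > 0 to avoid
# the blow-up of beta(t) at t = 0 (an approximation of the exact dynamics).
import numpy as np
from scipy.integrate import solve_ivp

A = np.array([[0.0, 1.0], [-1.0, 0.0]])
X0 = np.array([1.0, 0.0])
gamma, t0, T = 1.0, 1e-3, 50.0

def rhs(t, X, p):
    return -A @ X - (gamma / t**p) * (X - X0)

for p in (0.5, 1.0, 1.5):
    sol = solve_ivp(rhs, (t0, T), X0, args=(p,), method="Radau", rtol=1e-9, atol=1e-12)
    X_T = sol.y[:, -1]
    print(f"p = {p:3.1f}   ||A X(T)|| = {np.linalg.norm(A @ X_T):.3e}")
# p = 1 gives the fastest decay, p = 0.5 decays more slowly, and p = 1.5 stalls
# at a nonzero value, consistent with Table 1.
```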
The following theorem formalizes this insight and produces the results of Table 1.
**Theorem 3.1**.: _Suppose \(\mathsf{A}\) is a maximal monotone operator with \(\mathrm{Zer}\mathsf{A}\neq\emptyset\). Consider (3) with \(\beta(t)=\frac{\gamma}{t^{p}}\). Let \(\tilde{\mathsf{A}}(X(t))\) be the selection of \(\mathsf{A}(X(t))\) as in Section 2.1. Then,_
\[\left\|\tilde{\mathsf{A}}(X(t))\right\|^{2}=\mathcal{O}\left(\frac{1}{C(t)^{ 2}}\right)+\mathcal{O}\left(\beta(t)^{2}\right)+\mathcal{O}\left(\dot{\beta}( t)\right).\]
Note that
\[C(t)=\begin{cases}t^{\gamma}&p=1\\ e^{\frac{\gamma}{1-p}t^{1-p}}&p\neq 1.\end{cases}\]
We expect the convergence rate of Theorem 3.1 to be optimized when the terms are balanced. When \(\beta(t)=\frac{1}{t}\),
\[\frac{1}{C(t)^{2}}=\frac{1}{(e^{\int_{1}^{t}\frac{1}{s}ds})^{2}}=\frac{1}{t^{2}}=\beta(t)^{2}=-\dot{\beta}(t)\]
and all three terms are balanced. Indeed, the choice \(\beta(t)=\frac{1}{t}\) corresponds to the optimal discrete-time choice \(\frac{1}{k+2}\) of APPM or other accelerated methods.
### Proof outline of Theorem 3.1
The proof of Theorem 3.1 follows from Lemma 3.4, which we will introduce later in this section. To derive Lemma 3.4, we introduce a conservation law.
**Proposition 3.2**.: _Suppose \(\tilde{\mathsf{A}}\) is Lipschitz continuous and monotone. For \(t_{0}>0\), define \(E:(0,\infty)\to\mathbb{R}\) as_
\[E=\frac{C(t)^{2}}{2}\left(\left\|\tilde{\mathsf{A}}(X(t))\right\|^{2}+2\beta(t)\left\langle\tilde{\mathsf{A}}(X(t)),X(t)-X_{0}\right\rangle+\left(\beta(t)^{2}+\dot{\beta}(t)\right)\left\|X(t)-X_{0}\right\|^{2}\right)+\int_{t_{0}}^{t}C(s)^{2}\left\langle\frac{d}{ds}\tilde{\mathsf{A}}(X(s)),\dot{X}(s)\right\rangle ds-\int_{t_{0}}^{t}\frac{d}{ds}\left(\frac{C(s)^{2}\dot{\beta}(s)}{2}\right)\left\|X(s)-X_{0}\right\|^{2}ds.\]
_Then \(E\) is a constant function._
The proof of Proposition 3.2 uses dilated coordinate \(W(t)=C(t)(X(t)-X_{0})\) to derive its conservation law in the style of Suh et al. (2022). We provide the details in Appendix D.2. Recall from (1) that \(\left\langle\frac{d}{ds}\tilde{\mathsf{A}}(X(s)),\dot{X}(s)\right\rangle\geq 0\). This leads to \(V(t)=E-\int_{t_{0}}^{t}C(s)^{2}\left\langle\frac{d}{ds}\tilde{\mathsf{A}}(X(s )),\dot{X}(s)\right\rangle ds\) as our Lyapunov function.
**Corollary 3.3**.: _Let \(\mathsf{A}\) be maximal monotone and \(\beta(t)=\frac{\gamma}{t^{p}}\) with \(p>0\), \(\gamma>0\). Let \(\tilde{\mathsf{A}}(X(t))\) be the selection of \(\mathsf{A}(X(t))\) as in Section 2.1. For \(t_{0}\geq 0\), define \(V:[0,\infty)\to\mathbb{R}\) as_
\[V(t)=\frac{C(t)^{2}}{2}\left(\left\|\tilde{\mathsf{A}}(X(t))\right\|^{2}+2\beta(t)\left\langle\tilde{\mathsf{A}}(X(t)),X(t)-X_{0}\right\rangle+\left(\beta(t)^{2}+\dot{\beta}(t)\right)\left\|X(t)-X_{0}\right\|^{2}\right)-\int_{t_{0}}^{t}\frac{d}{ds}\left(\frac{C(s)^{2}\dot{\beta}(s)}{2}\right)\left\|X(s)-X_{0}\right\|^{2}ds\]
_for \(t>0\) and \(V(0)=\lim_{t\to 0+}V(t)\). Then \(V(t)\leq V(0)\) holds for \(t\geq 0\)._
A technical detail is that all terms involving \(\frac{d}{ds}\tilde{\mathsf{A}}(X(s))\) have been excluded in the definition of \(V\) and this is what allows \(\mathsf{A}\) to not be Lipschitz continuous. We provide the details in Appendix D.3.
**Lemma 3.4**.: _Consider the setup of Corollary 3.3. Assume \(\mathrm{Zer}\mathsf{A}\neq\emptyset\). Then for \(t>0\) and \(X_{\star}\in\mathrm{Zer}\mathsf{A}\),_
\[\left\|\tilde{\mathsf{A}}(X(t))\right\|^{2}\leq 4\beta(t)^{2} \left\|X_{0}-X_{\star}\right\|^{2}+\frac{4V(0)}{C(t)^{2}}\] \[\qquad-2\left(\beta(t)^{2}+\dot{\beta}(t)\right)\left\|X(t)-X_{0} \right\|^{2} \tag{7}\] \[\qquad+\frac{2}{C(t)^{2}}\int_{t_{0}}^{t}\frac{d}{ds}\left(C(s)^{ 2}\dot{\beta}(s)\right)\left\|X(s)-X_{0}\right\|^{2}ds.\]
Proof outline of Lemma 3.4.: Define
\[\Phi(t)=\left\|\tilde{\mathsf{A}}(X(t))\right\|^{2}+2\beta(t)\left\langle\tilde{ \mathsf{A}}(X(t)),X(t)-X_{0}\right\rangle.\]
Then,
\[\Phi(t)\geq\left\|\tilde{\mathsf{A}}(X(t))\right\|^{2}+2\beta(t) \left\langle\tilde{\mathsf{A}}(X(t)),X_{\star}-X_{0}\right\rangle\] \[\geq\left\|\tilde{\mathsf{A}}(X(t))\right\|^{2}-2\left(\left\| \frac{1}{2}\tilde{\mathsf{A}}(X(t))\right\|^{2}+\left\|\beta(t)\left(X_{\star} -X_{0}\right)\right\|^{2}\right)\] \[=\frac{1}{2}\left\|\tilde{\mathsf{A}}(X(t))\right\|^{2}-2\beta(t )^{2}\left\|X_{0}-X_{\star}\right\|^{2}. \tag{8}\]
The first inequality holds since \(\tilde{\mathsf{A}}\) is monotone, and the second follows from Young's inequality.
By Corollary 3.3, \(\frac{2V(0)}{C(t)^{2}}-\frac{2V(t)}{C(t)^{2}}+\Phi(t)\geq\Phi(t)\) for \(t>0\). Applying (8) and rearranging, we obtain the desired result. The details are provided in Appendix D.4.
Proof outline of Theorem 3.1.: It remains to show that the last integral term of Lemma 3.4 is \(\mathcal{O}\left(\frac{1}{C(t)^{2}}\right)+\mathcal{O}\left(\beta(t)^{2}\right)+\mathcal{O}\left(\dot{\beta}(t)\right)\). The details are provided in Appendix D.5.
Before we end this section, we observe how our analysis simplifies in the special case \(\beta(t)=1/t\). In this case,
\[V(t)=t^{2}\left\|\tilde{\mathbf{A}}(X(t))\right\|^{2}+2t\left\langle\tilde{ \mathbf{A}}(X(t)),X(t)-X_{0}\right\rangle,\]
and this corresponds to the Lyapunov function of (Ryu et al., 2019, Section 4) for the case \(\gamma=1\). As \(V(0)=0\), the conclusion of Lemma 3.4 becomes
\[\left\|\tilde{\mathbf{A}}(X(t))\right\|^{2}\leq\frac{4}{t^{2}}\left\|X_{0}-X_{ \star}\right\|^{2}=\mathcal{O}\left(\frac{1}{t^{2}}\right),\]
which matches the best rate in Table 1.
### Point convergence
APPM is an instance of the Halpern method (Park and Ryu, 2022, Lemma 3.1), whose iterates converge to the element of \(\mathrm{Zer}\mathbf{A}\) closest to \(X_{0}\) (Halpern, 1967; Wittmann, 1992). The anchor ODE also exhibits this behavior.
**Theorem 3.5**.: _Let \(\mathbf{A}\) be a maximal monotone operator with \(\mathrm{ZerA}\neq\emptyset\) and \(X\) be the solution of (3). If \(\lim_{t\to\infty}\left\|\tilde{\mathbf{A}}(X(t))\right\|=0\) and \(\lim_{t\to\infty}1/C(t)=0\), then, as \(t\to\infty\),_
\[X(t)\to\operatorname*{argmin}_{z\in\mathrm{ZerA}}\left\|z-X_{0}\right\|.\]
We provide the proof in Appendix D.6.
## 4 Tightness of analysis
In this section, we show that the convergence rates of Table 1 are tight by considering the dynamics for the explicit example \(\mathbf{A}=\left(\begin{smallmatrix}0&1\\ -1&0\end{smallmatrix}\right)\). Throughout this section, we write \(\mathbf{A}\) as \(A\) when the operator is linear.
### Explicit solution for linear \(A\)
**Lemma 4.1**.: _Let \(A\colon\mathbb{R}^{n}\to\mathbb{R}^{n}\) be a linear operator and let \(\beta(t)=\frac{\gamma}{t}\). The series_
\[X(t)=\sum_{n=0}^{\infty}\frac{(-tA)^{n}}{\Gamma(n+\gamma+1)}\Gamma(\gamma+1)X_ {0},\]
_where \(\Gamma\) denotes the gamma function, is the solution for (3) with \(\mathbf{A}=A\)._
Note that when \(\gamma=0\), this is the series definition of the matrix exponential and \(X(t)=e^{-tA}X_{0}\). The solution also has an integral form, which extends to general \(\beta(t)\).
**Lemma 4.2**.: _Suppose \(A\colon\mathbb{R}^{n}\to\mathbb{R}^{n}\) is a monotone linear operator. Then_
\[X(t)=\frac{e^{-tA}}{C(t)}\left(\int_{0}^{t}e^{sA}C(s)\beta(s)ds+C(0)I\right)X_ {0} \tag{9}\]
_is the solution for (3) with \(\mathbf{A}=A\)._
See Appendix E.1.1 and Appendix E.1.2 for details.
If \(A=\left(\begin{smallmatrix}0&1\\ -1&0\end{smallmatrix}\right)\), then \(e^{-tA}=\left(\begin{smallmatrix}\cos t&\sin t\\ -\sin t&\cos t\end{smallmatrix}\right)\) is a rotation matrix, a unitary matrix that preserves norms. As we will soon see, \(A=\left(\begin{smallmatrix}0&1\\ -1&0\end{smallmatrix}\right)\) turns out to be the worst-case instance in many cases.
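As a quick sanity check (with an arbitrary choice \(\gamma=2\) and ad hoc truncation and tolerances), the sketch below evaluates the series of Lemma 4.1 for this rotation operator and compares it with a direct numerical integration of the anchor ODE started just after \(t=0\).

```python
# Sketch: evaluate the series solution of Lemma 4.1 for A = [[0,1],[-1,0]] and
# beta(t) = gamma/t, and compare with a direct numerical integration of the
# anchor ODE started just after t = 0. Truncation order and tolerances are ad hoc.
import numpy as np
from math import gamma as Gamma
from scipy.integrate import solve_ivp

A = np.array([[0.0, 1.0], [-1.0, 0.0]])
X0 = np.array([1.0, 0.0])
g = 2.0                                     # anchor strength gamma (arbitrary choice)

def X_series(t, n_terms=60):
    out, M = np.zeros(2), np.eye(2)
    for n in range(n_terms):
        out += Gamma(g + 1.0) / Gamma(n + g + 1.0) * (M @ X0)
        M = M @ (-t * A)                    # accumulate (-tA)^n
    return out

def rhs(t, X):
    return -A @ X - (g / t) * (X - X0)

t1 = 5.0
sol = solve_ivp(rhs, (1e-6, t1), X0, method="Radau", rtol=1e-10, atol=1e-12)
print("series   :", X_series(t1))
print("numerical:", sol.y[:, -1])           # the two should agree to several digits
```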
### The rates in Table 1 are tight
First, we consider \(p>1\) for \(\beta(t)=\frac{\gamma}{t^{p}}\).
**Theorem 4.3**.: _Suppose \(\lim_{t\to\infty}\frac{1}{C(t)}\neq 0\), i.e., suppose \(\beta(t)\in L^{1}[t_{0},\infty)\) for some \(t_{0}>0\). Then there exists an operator \(\mathbf{A}\) such that_
\[\lim_{t\to\infty}\left\|\tilde{\mathbf{A}}(X(t))\right\|\neq 0,\]
_where \(X\) is the solution of (3)._
Note that \(\frac{\gamma}{t^{p}}\in L^{1}[t_{0},\infty)\) when \(p>1\). The proof of Theorem 4.3 considers \(\mathbf{A}=2\pi\xi\left(\begin{smallmatrix}0&1\\ -1&0\end{smallmatrix}\right)\) for \(\xi\in\mathbb{R}\) and uses the Fourier inversion formula. See Appendix E.2 for details.
Next, we consider \(\beta(t)=\frac{\gamma}{t^{p}}\) for cases other than \(p>1\).
**Theorem 4.4**.: _Let \(A=\left(\begin{smallmatrix}0&1\\ -1&0\end{smallmatrix}\right)\), \(\beta(t)=\frac{\gamma}{t^{p}}\), \(0<p\leq 1\), and \(\gamma>0\). Let \(X\) be the solution given by (9) and \(X_{0}\neq 0\). Let_
\[r(t)=\begin{cases}t^{2}&\text{for }p=1,\ \gamma\geq 1,\\ t^{2\gamma}&\text{for }p=1,\ \gamma<1,\\ t^{2p}&\text{for }0<p<1.\end{cases}\]
_Then,_
\[\lim_{t\to\infty}r(t)\left\|A(X(t))\right\|^{2}\neq 0.\]
We provide the proof in Appendix E.3.
## 5 Discretized algorithms
In this section, we provide discrete-time convergence results that match the continuous-time rate of Section 3.
**Theorem 5.1**.: _Let \(\mathbf{A}\) be a maximal monotone operator, \(p>0\), and \(\gamma>0\). Consider_
\[x^{k} =\mathbf{J}_{\mathbf{A}}y^{k-1}\] \[y^{k} =\frac{k^{p}}{k^{p}+\gamma}(2x^{k}-y^{k-1})+\frac{\gamma}{k^{p}+ \gamma}x^{0}\]
_for \(k=1,2,\dots\), with initial condition \(y^{0}=x^{0}\in\mathbb{R}^{n}\). Let \(\tilde{\mathbf{A}}x^{k}=y^{k-1}-x^{k}\) for \(k=1,2,\dots\). Then this method exhibits the rates of convergence in Table 2._
\begin{table}
\begin{tabular}{c|c c c c} \hline \hline Case & \(p=1\), \(\gamma\geq 1\) & \(p=1\), \(\gamma<1\) & \(p<1\) & \(p>1\) \\ \hline \(\left\|\tilde{\mathbf{A}}(x^{k})\right\|^{2}\) & \(\mathcal{O}\left(\frac{1}{k^{2}}\right)\) & \(\mathcal{O}\left(\frac{1}{k^{2\gamma}}\right)\) & \(\mathcal{O}\left(\frac{1}{k^{2p}}\right)\) & \(\mathcal{O}\left(1\right)\) \\ \hline \hline \end{tabular}
\end{table}
Table 2: Rates for the discrete-time method of Theorem 5.1.
Note that the method of Theorem 5.1 reduces to APPM when \(\gamma=1\), \(p=1\).
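A direct implementation of the iteration in Theorem 5.1 is straightforward; the sketch below runs it for the worst-case-style linear operator \(A=\left(\begin{smallmatrix}0&1\\ -1&0\end{smallmatrix}\right)\) with a few \((p,\gamma)\) choices (the iteration count and test operator are arbitrary).

```python
# Sketch of the generalized Halpern-type method of Theorem 5.1 for the monotone
# linear operator A = [[0,1],[-1,0]], with resolvent J_A = (I + A)^{-1}.
# We report ||A~ x^k||^2 = ||y^{k-1} - x^k||^2 for a few (p, gamma) choices.
import numpy as np

A = np.array([[0.0, 1.0], [-1.0, 0.0]])
J = np.linalg.inv(np.eye(2) + A)            # resolvent J_A
x0 = np.array([1.0, 0.0])

def run(p, gam, K=1000):
    y = x0.copy()
    for k in range(1, K + 1):
        x = J @ y
        res = y - x                          # A~ x^k
        y = k**p / (k**p + gam) * (2 * x - y) + gam / (k**p + gam) * x0
    return np.linalg.norm(res) ** 2

for p, gam in [(1.0, 1.0), (1.0, 0.5), (0.5, 1.0), (1.5, 1.0)]:
    print(f"p = {p:3.1f}, gamma = {gam:3.1f}:  ||A~ x^K||^2 = {run(p, gam):.3e}")
# (p, gamma) = (1, 1) recovers APPM and shows the fastest O(1/k^2)-type decay.
```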
Proof outline of Theorem 5.1.: The general strategy is to find discretized counterparts of the corresponding continuous-time analyses. However, directly discretizing the conservation law of Proposition 3.2 was difficult for technical reasons. Instead, we obtain differently scaled but equivalent conservation laws using dilated coordinates and then perform the discretization. The specific dilated coordinates, inspired by Suh et al. (2022), are \(W_{1}(t)=X(t)-X_{0}\) for \(p>1\), \(W_{2}(t)=t^{p}\left(X(t)-X_{0}\right)\) for \(0<p<1\), \(W_{3}(t)=t\left(X(t)-X_{0}\right)\) for \(p=1\), \(\gamma\geq 1\), and \(W_{4}(t)=t^{\gamma}\left(X(t)-X_{0}\right)\) for \(p=1\), \(0<\gamma<1\).
In the discrete-time analyses, the behavior of the leading-order terms is predictable as they match the continuous-time counterpart. The difficult part is, however, controlling the higher-order terms that were not present in the continuous-time analyses. Through our detailed analyses, we bound such higher-order terms and show that they do not affect the convergence rate in the end. We provide the details in Appendix F.3.
## 6 Convergence analysis under strong monotonicity
In this section, we analyze the dynamics of the anchor ODE (3) for \(\mu\)-strongly monotone \(\mathsf{A}\). When \(\beta(t)=\frac{1}{t}\) and \(\mathsf{A}=\left(\begin{smallmatrix}\mu&0\\ 0&\mu\end{smallmatrix}\right)\), Lemma 4.1 tells us that \(\mathsf{A}(X(t))=\frac{1}{t}\left(I-e^{-tA}\right)X_{0}\) and therefore that \(\left\|\mathsf{A}(X(t))\right\|^{2}=\Theta\left(\frac{1}{t^{2}}\right)\), which is a slow rate for the strongly monotone setup. On the other hand, we will see that \(\beta(t)=\frac{2\mu}{e^{2\mu t}-1}\) is a better choice leading to a faster rate in this setup.
Our analysis of this section is also based on a conservation law, but we use a slightly modified version to exploit strong monotonicity.
**Proposition 6.1**.: _Suppose \(\tilde{\mathsf{A}}\) is monotone and Lipschitz continuous. Let \(X\) be the solution of (3) and let \(R:[0,\infty)\rightarrow(0,\infty)\) be a differentiable function. For \(t_{0}>0\), define \(E:(0,\infty)\rightarrow\mathbb{R}\) as_
\[E=\frac{C(t)^{2}R(t)^{2}}{2}\left(\left\|\tilde{\mathsf{A}}(X(t))\right\|^{2}+2\beta(t)\left\langle\tilde{\mathsf{A}}(X(t)),X(t)-X_{0}\right\rangle+\left(\beta(t)^{2}+\dot{\beta}(t)\right)\left\|X(t)-X_{0}\right\|^{2}\right)-\int_{t_{0}}^{t}\frac{d}{ds}\left(\frac{C(s)^{2}R(s)^{2}\dot{\beta}(s)}{2}\right)\left\|X(s)-X_{0}\right\|^{2}ds.\]
_Then \(E\) is a constant function for \(t\in[0,\infty)\)._
Proposition 6.1 generalizes Proposition 3.2, since it corresponds to the special case with \(R(t)\equiv 1\).
Recall from (2) that when \(\mathsf{A}\) is \(\mu\)-strongly monotone we have
\[\left\langle\frac{d}{ds}\mathsf{A}(X(t)),\dot{X}(t)\right\rangle-\mu\left\| \dot{X}(t)\right\|^{2}\geq 0.\]
This motivates the choice \(R(t)=e^{\mu t}\), since \(\frac{\dot{R}(s)}{R(s)}=\mu\). From the calculation provided in Appendix G.2, the choice \(\beta(t)=\frac{2\mu}{e^{2\mu t}-1}\) makes \(\frac{d}{ds}\left(\frac{C(s)^{2}R(s)^{2}\dot{\beta}(s)}{2}\right)=0\). Plugging these choices into Proposition 6.1 and following the arguments of Section 3, we arrive at the following theorem.
**Theorem 6.2**.: _Let \(\mathsf{A}\) be a \(\mu\)-strongly maximal monotone operator with \(\mu>0\) and assume \(\mathrm{Zer}\mathsf{A}\neq\emptyset\). Let \(X\) be a solution of the differential inclusion (3) with \(\beta(t)=\frac{2\mu}{e^{2\mu t}-1}\), i.e._
\[\dot{X}\in-\mathsf{A}(X)-\frac{2\mu}{e^{2\mu t}-1}(X-X_{0}) \tag{10}\]
_for almost all \(t\). Then for \(V:[0,\infty)\rightarrow\mathbb{R}\) defined as_
\[V(t) =\frac{(e^{\mu t}-e^{-\mu t})^{2}}{2}\left\|\tilde{\mathsf{A}}(X( t))\right\|^{2}\] \[+2\mu\left(1-e^{-2\mu t}\right)\left\langle\tilde{\mathsf{A}}(X( t)),X(t)-X_{0}\right\rangle\] \[+2\mu^{2}\left(e^{-2\mu t}-1\right)\left\|X(t)-X_{0}\right\|^{2},\]
\(V(t)\leq V(0)\) _holds. Furthermore,_
\[\|\tilde{\mathsf{A}}(X(t))\|^{2}\leq 4\left(\frac{\mu}{e^{\mu t}-1}\right)^{2} \|X_{0}\!-\!X_{\star}\|^{2}=\mathcal{O}\left(\frac{1}{e^{2\mu t}}\right).\]
In Appendix G.2.3, we show that (10) is a continuous-time model for OS-PPM of Park & Ryu (2022). In Appendix B, we show the existence and uniqueness of the solution.
Since \(\beta(t)=\frac{2\mu}{e^{2\mu t}-1}\in L^{1}[t_{0},\infty)\) for any \(t_{0}>0\), Theorem 4.3 implies that \(\tilde{\mathsf{A}}(X(t))\) may fail to converge to \(0\) when \(\mathsf{A}\) is merely monotone. This tells us that the optimal choice of \(\beta(t)\) should depend on the properties of \(\mathsf{A}\). In the following section, we describe how \(\beta(t)\) can be chosen to adapt to the operator's properties.
## 7 Adaptive anchor acceleration and experiments
In this section, we present an adaptive method for choosing the anchor coefficient \(\beta\), and we theoretically and experimentally show that this choice allows the dynamics to adapt to the operator's properties.
**Theorem 7.1**.: _Suppose \(\tilde{\mathsf{A}}\) is Lipschitz continuous and monotone. Consider the anchor ODE_
\[\dot{X}=-\tilde{\mathsf{A}}(X)+\underbrace{\frac{\left\|\tilde{\mathsf{A}}(X)\right\|^{2}}{2\left\langle\tilde{\mathsf{A}}(X),X-X_{0}\right\rangle}}_{=-\beta(t)}(X-X_{0}) \tag{11}\]
_with initial condition \(X(0)=X_{0}\) and \(\left\|\tilde{\mathbf{A}}(X_{0})\right\|\neq 0\). Suppose the solution exists and \(\dot{X}\) is continuous at \(t=0\). Moreover, suppose \(\beta\colon(0,\infty)\to\mathbb{R}\) is well-defined, i.e., no division by zero occurs in the definition of \(\beta(t)\). Then \(\beta(t)>0\) and_
\[\left\|\tilde{\mathbf{A}}(X)\right\|^{2} \leq 4\beta(t)^{2}\left\|X_{0}-X_{\star}\right\|^{2}\] \[\beta(t)^{2} \leq\frac{1}{t^{2}}\]
_for all \(t>0\) and \(X_{\star}\in\mathrm{Zer}\mathbf{A}\)._
_If \(\tilde{\mathbf{A}}\) is furthermore \(\mu\)-strongly monotone, then_
\[\beta(t)^{2}\leq\left(\frac{\mu/2}{e^{\mu t/2}-1}\right)^{2}.\]
We provide the proof in Appendix H.1. Note that the anchor coefficient in (11) is chosen so that
\[\Phi(t)=\left\|\tilde{\mathbf{A}}(X(t))\right\|^{2}+2\beta(t)\left\langle \tilde{\mathbf{A}}(X(t)),X(t)-X_{0}\right\rangle=0.\]
So the left-hand side of (8) is zero, and an \(\mathcal{O}\left(\beta(t)^{2}\right)\) convergence rate is immediate. An analogous discrete-time result is shown in the following theorem.
**Theorem 7.2**.: _Let \(\mathbf{A}\) be a maximal monotone operator. Let \(x^{0}=y^{0}\in\mathbb{R}^{n}\). Consider_
\[x^{k} =\mathbf{J}_{\mathbf{A}}y^{k-1}\] \[y^{k} =(1-\beta_{k})(2x^{k}-y^{k-1})+\beta_{k}x^{0}\]
_with_
\[\beta_{k}=\begin{cases}\frac{\|\tilde{\mathbf{A}}x^{k}\|^{2}}{- \langle\tilde{\mathbf{A}}x^{k},\,x^{k}-x^{0}\rangle+\|\tilde{\mathbf{A}}x^{k} \|^{2}}&\text{ if }\|\tilde{\mathbf{A}}x^{k}\|^{2}\neq 0\\ 0&\text{ if }\|\tilde{\mathbf{A}}x^{k}\|^{2}=0,\end{cases}\]
_for \(k=1,2,\dots\), where \(\tilde{\mathbf{A}}x^{k}=y^{k-1}-x^{k}\)._
_Then_
\[\beta_{k} \in[0,1)\] \[\left\|\tilde{\mathbf{A}}(x^{k+1})\right\|^{2} \leq 4\beta_{k}^{2}\left\|x^{0}-x^{\star}\right\|^{2}\] \[\beta_{k}^{2} \leq\frac{1}{(k+1)^{2}}\]
_for \(k=1,2,\dots\) and \(x^{\star}\in\mathrm{Zer}\mathbf{A}\)._
The method of Theorem 7.2 is a discrete-time counterpart of the ODE (11). Appendix H.2 provides the correspondence, and the extra term \(\|\tilde{\mathbf{A}}x^{k}\|^{2}\) in the denominator is shown to vanish in the continuous-time limit. Note that Theorem 7.2 does not need \(\mathbf{A}\) to be Lipschitz, as the method is a proximal method that accesses \(\mathbf{A}\) through its resolvent \(\mathbf{J}_{\mathbf{A}}\). We provide the proof of Theorem 7.2 in Appendix H.3.
Analogous to the continuous-time case, a key property of the discrete-time adaptive method is that the counterpart of \(\Phi(t)\) is kept nonpositive. In the proof of Lemma H.3, the fact that \(\beta_{k}<1\) plays a key role in proving this property. The extra term \(\|\tilde{\mathbf{A}}x^{k}\|^{2}\) in the denominator and the fact that \(\langle\tilde{\mathbf{A}}x^{k},\,x^{k}-x^{0}\rangle<0\) when \(\|\tilde{\mathbf{A}}x^{k}\|^{2}\neq 0\) ensure that \(\beta_{k}<1\).
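For concreteness, a minimal implementation of the adaptive method of Theorem 7.2 is sketched below, again on the monotone linear operator \(A=\left(\begin{smallmatrix}0&1\\ -1&0\end{smallmatrix}\right)\); the test operator and iteration count are arbitrary choices.

```python
# Sketch of the adaptive anchor method of Theorem 7.2 for the monotone linear
# operator A = [[0,1],[-1,0]], with resolvent J_A = (I + A)^{-1}.
import numpy as np

A = np.array([[0.0, 1.0], [-1.0, 0.0]])
J = np.linalg.inv(np.eye(2) + A)
x0 = np.array([1.0, 0.0])

y = x0.copy()
for k in range(1, 1001):
    x = J @ y
    r = y - x                                # A~ x^k = y^{k-1} - x^k
    nr2 = r @ r
    if nr2 == 0.0:
        beta_k = 0.0
    else:
        beta_k = nr2 / (-(r @ (x - x0)) + nr2)   # adaptive anchor coefficient of Theorem 7.2
    y = (1.0 - beta_k) * (2 * x - y) + beta_k * x0
    if k in (10, 100, 1000):
        print(f"k = {k:4d}   ||A~ x^k||^2 = {nr2:.3e}   beta_k = {beta_k:.3e}")
```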
### Experiment details
We now show an experiment with the method of Theorem 7.2 applied to a decentralized compressed sensing problem (Shi et al., 2015). We assume that we have the measurements \(b_{i}=A_{(i)}x+e_{i}\), where \(A_{(i)}\) is a measurement matrix available to each local agent \(i\), \(x\) is an unknown shared signal we hope to recover, and \(e_{i}\) is a measurement error. We solve this problem in a decentralized manner in which the local agents keep their measurements private and only communicate with their neighbors.
Figure 2: (Left) Network graph. (Right) Squared operator norm \(\|\tilde{\mathbf{A}}x^{k}\|^{2}\) vs. \(k\). The Halpern curve corresponds to the method in Theorem 5.1 with \(p=1.5\) and \(\gamma=2.0\).
As in Shi et al. (2015), we formulate the problem into an unconstrained \(\ell_{1}\)-regularized least squares problem
\[\underset{x\in\mathbb{R}^{d}}{\text{minimize}} \frac{1}{n}\sum_{i=1}^{n}\left\{\frac{1}{2}\|A_{(i)}x-b_{i}\|^{2}+ h_{i}\|x\|_{1}\right\},\]
and apply PG-EXTRA. We compare vanilla PG-EXTRA with various anchored versions of PG-EXTRA, with \(\beta(t)\) chosen as in Theorem 5.1 and Theorem 7.2. We show the results in Figure 2. Further details of the experiment are provided in Appendix I.
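For reference, the sketch below sets up the building blocks of this experiment, namely the local measurement model \(b_{i}=A_{(i)}x+e_{i}\), the local least-squares gradient, and the soft-thresholding proximal operator of the \(\ell_{1}\) term; it is not a full PG-EXTRA implementation (which additionally requires the network mixing matrices), and all dimensions and noise levels are placeholders.

```python
# Building blocks of the experiment (not full PG-EXTRA, which additionally needs
# the network mixing matrices): the local measurement model b_i = A_i x + e_i and
# the soft-thresholding prox of h_i * ||x||_1 used by proximal gradient-type updates.
# Dimensions and noise level are arbitrary placeholders.
import numpy as np

rng = np.random.default_rng(0)
n_agents, d, m = 5, 50, 20
x_true = np.zeros(d)
x_true[rng.choice(d, 5, replace=False)] = rng.standard_normal(5)   # sparse shared signal

A_loc = [rng.standard_normal((m, d)) / np.sqrt(m) for _ in range(n_agents)]
b_loc = [A @ x_true + 0.01 * rng.standard_normal(m) for A in A_loc]

def soft_threshold(z, tau):
    """Prox of tau*||.||_1: elementwise shrinkage."""
    return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

def local_grad(i, x):
    """Gradient of (1/2)||A_i x - b_i||^2 for agent i."""
    return A_loc[i].T @ (A_loc[i] @ x - b_loc[i])
```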
## 8 Conclusion
This work introduces a continuous-time model of anchor acceleration, the anchor ODE \(\dot{X}\in-\mathsf{A}(X)-\beta(t)(X-X_{0})\). We characterize the convergence rate as a function of \(\beta(t)\) and thereby obtain insight into the anchor acceleration mechanism. Finally, inspired by the continuous-time analyses, we present an adaptive method and establish its effectiveness through theoretical analyses and experiments.
Prior work analyzing continuous-time models of Nesterov acceleration has inspired various follow-up research, such as analyses based on Lagrangian and Hamiltonian mechanics (Wibisono et al., 2016; Wilson et al., 2021; Diakonikolas and Jordan, 2021), the high-resolution ODE model (Shi et al., 2021), and the continuized framework (Even et al., 2021). Carrying out similar analyses for the anchor ODE is an interesting direction for future work.
|
2309.00955
|
Similarity between compact extremely red objects discovered with JWST in
cosmic dawn and blue-excess dust-obscured galaxies known in cosmic noon
|
Spatially compact objects with extremely red color in the rest-frame optical
to near-infrared (0.4--1 ${\rm \mu m}$) and blue color in the rest-frame
ultraviolet (UV; 0.2--0.4 ${\rm \mu m}$) have been discovered at $5 < z < 9$
using the James Webb Space Telescope (JWST). These extremely red objects
(JWST-EROs) exhibit spectral energy distributions (SEDs) that are difficult to
explain using a single component of either star-forming galaxies or quasars,
leading to two-component models in which the blue UV and extremely red optical
are explained using less-dusty and dusty spectra of galaxies or quasars,
respectively. Here, we report the remarkable similarity in SEDs between
JWST-EROs and blue-excess dust-obscured galaxies (BluDOGs) identified at $2 < z
< 3$. BluDOGs are a population of active galactic nuclei (AGNs) with blackhole
masses of $\sim10^{8-9}$ M$_\odot$, which are one order of magnitude larger
than those in some JWST-EROs. The Eddington ratios of BluDOGs are one or
higher, whereas those of JWST-EROs are in the range of 0.1--1. Therefore,
JWST-EROs are less massive, less active, and more common counterparts in
higher-$z$ of BluDOGs in cosmic noon. Conversely, JWST-EROs have a
significantly higher fraction of those with blue-excess than DOGs. We present
the average UV spectra of BluDOGs as a comparison to JWST-EROs and discuss a
coherent evolutionary scenario for dusty AGN populations.
|
Akatoki Noboriguchi, Akio K. Inoue, Tohru Nagao, Yoshiki Toba, Toru Misawa
|
2023-09-02T14:44:34Z
|
http://arxiv.org/abs/2309.00955v2
|
Similarity between compact extremely red objects discovered with JWST in cosmic dawn and blue-excess dust-obscured galaxies known in cosmic noon
###### Abstract
Spatially compact objects with extremely red color in the rest-frame optical to near-infrared (0.4-1 \(\mu\)m) and blue color in the rest-frame ultraviolet (UV; 0.2-0.4 \(\mu\)m) have been discovered at \(5<z<9\) using the James Webb Space Telescope (JWST). These extremely red objects (JWST-EROs) exhibit spectral energy distributions (SEDs) that are difficult to explain using a single component of either star-forming galaxies or quasars, leading to two-component models in which the blue UV and extremely red optical are explained using less-dusty and dusty spectra of galaxies or quasars, respectively. Here, we report the remarkable similarity in SEDs between JWST-EROs and blue-excess dust-obscured galaxies (BluDOGs) identified at \(2<z<3\). BluDOGs are a population of active galactic nuclei (AGNs) with blackhole masses of \(\sim 10^{8-9}\) M\({}_{\odot}\), which are one order of magnitude larger than those in some JWST-EROs. The Eddington ratios of BluDOGs are one or higher, whereas those of JWST-EROs are in the range of 0.1-1. Therefore, JWST-EROs are less massive, less active, and more common counterparts in higher-\(z\) of BluDOGs in cosmic noon. Conversely, JWST-EROs have a significantly higher fraction of those with blue-excess than DOGs. We present the average UV spectra of BluDOGs as a comparison to JWST-EROs and discuss a coherent evolutionary scenario for dusty AGN populations.
Active galactic nuclei (16) -- Galaxy evolution (594)
Akatoki Noboriguchi, Akio K. Inoue, Tohru Nagao, Yoshiki Toba, and Toru Misawa
## 1 Introduction
The James Webb Space Telescope (JWST) has opened up an amazing new window to the very early Universe with a near-infrared (NIR) camera (NIRCam), NIR Spectrograph (NIRSpec), and mid-infrared (MIR) instrument (MIRI). NIR and MIR wavelengths are important for investigating high-\(z\) objects, as emission lines in the rest-frame ultraviolet (UV) and optical are shifted to NIR and MIR. Recently, spatially compact and extremely red objects (EROs) have been discovered in the observational data of the JWST (Kocevski et al., 2023; Akins et al., 2023; Barro et al., 2023; Matthee et al., 2023; Labbe et al., 2023; Furtak et al., 2023, 2023; Kokorev et al., 2023). We refer to these objects as JWST-EROs in this Letter. The JWST-EROs exhibit a red color between 2.77 and 4.44 \(\mu\)m-bands (\((F277W-F444W)_{\rm AB}>1.5\)) and a blue color between 1.50 and 2.00 \(\mu\)m-bands (\((F150W-F200W)_{\rm AB}\sim 0\)) (Barro et al., 2023). Given the spectroscopic redshifts (\(z_{\rm spec}\)) or photometric redshifts (\(z_{\rm photo}\)) of the JWST-EROs, spectral energy distributions (SEDs) are characterized by a peculiar combination of the extremely red color in the rest-frame 0.4-1 \(\mu\)m and blue color in the rest-frame 0.2-0.4 \(\mu\)m. Such SEDs are difficult to explain with a single population of galaxies or quasars, but they can be explained with composites of two components of less-dusty galaxies/quasars and dusty galaxies/quasars (Kocevski et al., 2023; Akins et al., 2023; Barro et al., 2023; Labbe et al., 2023). The spatial compactness of the JWST-EROs suggests potential active galactic nuclei (AGNs) (Akins et al., 2023; Barro et al., 2023; Labbe et al., 2023). Some JWST-EROs exhibit broad emission lines in their spectra, indicating that they are AGNs (Kocevski et al., 2023; Matthee et al., 2023; Furtak et al., 2023; Kokorev et al., 2023).
There is a population of dusty AGNs called dust-obscured galaxies (DOGs), which are thought to be in a transition phase between dusty star formation and dusty AGNs after a gas-rich major merger event (Dey et al., 2008). DOGs are AGNs selected by a color between observed-frame optical and MIR. Toba et al. (2015, 2017) selected DOGs from Subaru Hyper Suprime Cam (HSC; Miyazaki et al., 2018)-Subaru Strategic Program (SSP; Aihara et al., 2018) data and _Wide-field Infrared Survey Explorer_ (_WISE_; Wright et al., 2010) data using the color criterion (\((i-W4)_{\rm AB}\geq 7.0\), where \(i\) and \(W4\) denote the magnitudes of the \(i\)- and \(W4\)-bands) (see also Toba & Nagao, 2016). The extremely red color between optical and MIR is explained by heavy dust reddening in UV/optical and re-emission in MIR from the dusty torus surrounding the nucleus.
Most DOGs exhibit a simple red SED described by a power law (see e.g., Toba et al., 2020, 2020). However, Noboriguchi et al. (2019) found eight DOGs with blue-excess in optical bands (blue-excess DOGs; BluDOGs) using an observed-frame optical slope (\(\alpha_{\rm opt}<0.4\), where \(\alpha_{\rm opt}\) denotes the observed-frame optical spectral index of the power law fitted to the HSC \(g\)-, \(r\)-, \(i\)-, \(z\)-, and \(y\)-band fluxes, \(f_{\nu}\propto\lambda^{\alpha_{\rm opt}}\)). After spectroscopic follow-up observations, Noboriguchi et al. (2022) reported that BluDOGs are broad-line AGNs at \(2<z<3\) and that the origin of the blue-excess is a blue continuum together with large equivalent widths (EWs) of the broad emission lines. In addition, C iv lines exhibit a blue tail, suggesting that BluDOGs have nuclear outflows (Noboriguchi et al., 2022). The Eddington ratio (\(\lambda_{\rm Edd}=L_{\rm bol}/L_{\rm Edd}\), where \(L_{\rm bol}\) and \(L_{\rm Edd}\) denote the bolometric luminosity and the Eddington luminosity, respectively) of BluDOGs is greater than 1, i.e., they are in a super-Eddington phase. Therefore, BluDOGs are likely to be in a transition phase between dusty AGN and optically thin quasar phases (Noboriguchi et al., 2022).
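For illustration, this slope selection amounts to a log-log power-law fit to broad-band photometry. In the minimal Python sketch below, the band wavelengths are only rough HSC effective wavelengths and the flux values are hypothetical, not measurements of any BluDOG.

```python
import numpy as np

# Rough effective wavelengths of the HSC g, r, i, z, y bands (Angstrom);
# illustrative values only, not the exact filter parameters used in the paper.
BAND_WAVELENGTHS = np.array([4800.0, 6200.0, 7700.0, 8900.0, 9800.0])

def optical_slope(flux_nu):
    """Fit f_nu ~ lambda**alpha_opt in log-log space and return alpha_opt."""
    alpha_opt, _ = np.polyfit(np.log10(BAND_WAVELENGTHS), np.log10(flux_nu), deg=1)
    return alpha_opt

# Hypothetical flux densities (arbitrary units) rising toward the blue.
example_flux = np.array([5.0, 4.1, 3.6, 3.3, 3.1])
alpha = optical_slope(example_flux)
print(f"alpha_opt = {alpha:.2f} -> satisfies the BluDOG criterion: {alpha < 0.4}")
```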
In this Letter, we present the remarkable similarity in SEDs between the JWST-EROs at \(z>5\) and BluDOGs at \(2<z<3\). This Letter is structured as follows. We describe the samples of the JWST-EROs and BluDOGs in Section 2. In Section 3, we compare the SEDs and physical parameters between JWST-EROs and BluDOGs. In Section 4, we discuss the number densities and excess UV emission of JWST-EROs and BluDOGs and present a possible evolutionary scenario of dusty AGNs to explain the similarity between JWST-EROs and BluDOGs. Throughout this Letter, the adopted cosmology is a flat universe with \(H_{0}=70\) km s\({}^{-1}\) Mpc\({}^{-1}\), \(\Omega_{M}=0.3\), and \(\Omega_{\Lambda}=0.7\). Unless otherwise stated, all magnitudes refer to the AB system (Oke & Gunn, 1983).
## 2 Sample
In this Letter, we use the JWST-ERO samples taken from Barro et al. (2023) and Matthee et al. (2023) and the BluDOG sample taken from Noboriguchi et al. (2022).
Barro et al. (2023) selected 37 JWST-EROs in CEERS fields (Finkelstein et al., 2023) based on the single color criterion \((F277W-F444W)_{\rm AB}>1.5\). These EROs have an average magnitude of \(F444W=25.9\) AB mag and \(5<z_{\rm photo}<9\). Surprisingly, their color \((F150W-F277W)_{\rm AB}\sim 0\) indicates a flat slope in the rest-frame UV in contrast to their very red rest-frame optical color. In NIRCam images, these EROs are generally unresolved, point-like sources. Barro et al. (2023) have also reported that among 37 JWST-EROs, four objects are found in the MIRI imaging area of CEERS and that these objects are detected as consistent with an extrapolation from \(F444W\) with the red slope. Another set of four EROs is found in the NIRSpec targets of CEERS, and they exhibit clear emission lines, securing robust spectroscopic redshifts. One of them is the broad-line AGN at \(z_{\rm spec}=5.62\) reported in Kocevski et al. (2023). Because Barro et al. (2023) presented individual SEDs of the eight JWST-EROs in addition to two stacked SEDs of the 37 JWST-EROs divided into two groups depending on their photometric redshifts, we adopt these SEDs as typical SEDs of JWST-EROs in this Letter.
Matthee et al. (2023) identified 20 broad-line (\(>1000\) km s\({}^{-1}\)) H\(\alpha\) emitters at \(z\sim 5\) from the wide-field slitless spectroscopy data of the EIGER (Kashino et al., 2023) and FRESCO (Oesch et al., 2023) surveys. These objects are generally spatially compact point-like sources, except in some cases with faint companions. Matthee et al. (2023) concluded that these H\(\alpha\) emitters are AGNs because of their broad emission line and compact morphology. The blackhole masses and bolometric luminosities of the H\(\alpha\) emitters are estimated as log\({}_{10}(M_{\rm BH}/M_{\odot})=6.9\)-8.6 and \(L_{\rm bol}=5.0\)-\(65.8\times 10^{44}\) erg s\({}^{-1}\), respectively, from the
\(\rm H\alpha\) line widths and luminosities (Matthee et al., 2023). Although Matthee et al. (2023) did not adopt any color criteria for the selection, the colors of many \(\rm H\alpha\) emitters are similar to those of JWST-EROs of Barro et al. (2023), i.e., \((F210M-F444W)_{\rm AB}>1.5\) or \((F200W-F356W)_{\rm AB}>0.8\), and \((F182M-F210)_{\rm AB}\sim 0.0\) or \(0.0<(F115W-F200W)_{\rm AB}<1.2\) (see Figure 2 in Matthee et al., 2023).
The BluDOG sample consists of eight objects selected by Noboriguchi et al. (2019). Noboriguchi et al. (2022) conducted spectroscopic observations of the four brightest BluDOGs (\(r_{\rm AB}<23\)) among them. The spectroscopic redshifts are between 2.2 and 3.3. Noboriguchi et al. (2022) also estimated the blackhole masses as \(1.1\times 10^{8}<M_{\rm BH}/M_{\odot}<5.5\times 10^{8}\) based on the C iv emission lines using the calibration formula of Vestergaard and Peterson (2006). The bolometric luminosities of the BluDOGs are estimated using SED fitting, and their inferred Eddington ratios are greater than 1 (\(1.1<\lambda_{\rm Edd}<3.8\); Noboriguchi et al., 2022).
## 3 Results
### Comparison of SEDs of JWST-EROs and BluDOGs
First, we compare the SEDs of JWST-EROs (Barro et al., 2023) and those of BluDOGs (Noboriguchi et al., 2022) in Figure 1. There is a remarkable similarity in SEDs between the two galaxy populations; the SEDs are characterized by an extremely red optical color and a flat blue UV color. This characteristic "bimodal-color" SED in UV and optical is seen both in comparisons of individual SEDs (left panel of Fig. 1) and of average SEDs (right panel). Although the overall SEDs are very similar, JWST-EROs exhibit even redder optical colors than BluDOGs, as seen in the comparison of average SEDs (right panel). This could be caused by photometric excesses found in some cases at the rest-frame 0.6-0.7 \(\mu\)m in individual SEDs (left panel), which can be caused by a strong \(\rm H\alpha\) emission line, as shown in Barro et al. (2023). Another difference is the wavelength of the "spectral break" between UV and optical. In the average SEDs, JWST-EROs exhibit a break around the rest-frame 0.3 \(\mu\)m, whereas BluDOGs exhibit a break around 0.2 \(\mu\)m. However, the break wavelengths are dispersed in individual SEDs. In summary, the SEDs of JWST-EROs and BluDOGs are remarkably similar, strongly suggesting their physical connection.
### UV and Optical Spectral Slopes
Second, we compare the UV and optical spectral slopes of the samples in Figure 2, where \(\beta_{\rm UV}\) and \(\beta_{\rm opt}\) are defined as the spectral indices in \(f_{\lambda}\propto\lambda^{\beta}\) for the rest-frame UV (\(\sim 0.2\)\(\mu\)m) and optical (\(\sim 0.5\)\(\mu\)m), respectively. Matthee et al. (2023) reported the measurements of \(\beta_{\rm UV}\) and \(\beta_{\rm opt}\) of the broad-line \(\rm H\alpha\) emitters by applying power-law fitting to photometric data at observed wavelengths of 1-2 \(\mu\)m and 2-4 \(\mu\)m, respectively. These wavelength ranges correspond to the rest-frame UV and optical. For the eight individual EROs from Barro et al. (2023), we have measured \(\beta_{\rm UV}\) and \(\beta_{\rm opt}\) using the photometric data in the same wavelength ranges as those used in Matthee et al. (2023). In addition, we found \(\beta_{\rm UV}\) and \(\beta_{\rm opt}\) values of one JWST-ERO reported by Furtak et al. (2023). As depicted in Figure 2, these JWST samples share the distribution in the \(\beta_{\rm UV}\)-\(\beta_{\rm opt}\) diagram, indicating the similarity of the overall SEDs. That is, the optical color is very red (\(\beta_{\rm opt}>-1\)), and the UV color is blue (\(\beta_{\rm UV}\sim-2\)), with few exceptions. Therefore, the broad-line \(\rm H\alpha\) emitters from Matthee et al. (2023) are also called JWST-EROs in this Letter.
The BluDOGs are also plotted on the \(\beta_{\rm UV}\)-\(\beta_{\rm opt}\) diagram of Figure 2. The \(\beta_{\rm UV}\) values of BluDOGs are calculated by applying power-law fitting to the \(grizy\)-band fluxes of Subaru/HSC (Noboriguchi et al., 2022).
We also calculate the \(\beta_{\rm UV}\) and \(\beta_{\rm opt}\) for Type-1 QSO and star-forming galaxy (SFG) templates as comparison data. The Type-1 QSO template is taken from the SWIRE template library (Polletta et al., 2007), and the SFG templates are the cases with constant star formations of 10, 100, and 500 Myr (Inoue, 2011). By defining the wavelength ranges of \(\beta_{\rm UV}\) and \(\beta_{\rm opt}\) as the rest-frame 1500-3000 and 3000-6000 A, respectively, we calculate \(\beta_{\rm UV}\) and \(\beta_{\rm opt}\) using power-law fitting. The \(\beta_{\rm UV}\) and \(\beta_{\rm opt}\) of the Type-1 QSO are \(-1.40\) and \(-1.69\), respectively. The SFG templates of the ages = 10, 100, and 500 Myr have (\(\beta_{\rm UV}\), \(\beta_{\rm opt}\)) = (\(-2.54,-2.66\)), (\(-2.42,-1.88\)), and (\(-2.26,-1.13\)), respectively.
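A minimal sketch of this slope measurement, assuming a template spectrum sampled on a rest-frame wavelength grid (the synthetic power-law template below is only a stand-in for the actual QSO/SFG templates), is:

```python
import numpy as np

def spectral_slope(wavelength, f_lambda, lam_min, lam_max):
    """Fit f_lambda ~ lambda**beta over [lam_min, lam_max] (Angstrom); return beta."""
    mask = (wavelength >= lam_min) & (wavelength <= lam_max)
    beta, _ = np.polyfit(np.log10(wavelength[mask]), np.log10(f_lambda[mask]), deg=1)
    return beta

# Stand-in template: a pure power law with beta = -2 plus a little noise.
rng = np.random.default_rng(0)
wave = np.linspace(1200.0, 7000.0, 2000)                       # rest-frame Angstrom
flux = wave**-2.0 * (1.0 + 0.01 * rng.normal(size=wave.size))

beta_uv = spectral_slope(wave, flux, 1500.0, 3000.0)    # rest-frame UV window
beta_opt = spectral_slope(wave, flux, 3000.0, 6000.0)   # rest-frame optical window
print(f"beta_UV = {beta_uv:.2f}, beta_opt = {beta_opt:.2f}")
```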
Figure 2 shows the resultant \(\beta_{\rm UV}\)-\(\beta_{\rm opt}\) diagram. We find that the UV and optical colors of the JWST samples are similar, blue in UV but extremely red in optical, while some objects are very red in UV and a few objects are blue in both UV and optical. The BluDOGs are found near the center of the distribution of the JWST samples, indicating the similarity of the overall SEDs between these galaxy populations. Conversely, SFGs are significantly bluer both in UV and optical, and Type-1 QSO is also significantly bluer in optical but similar in color or slightly redder in UV. Although arbitrary, we select objects with \(\beta_{\rm UV}<-1.5\) (bluer than the Type-1 QSO) and \(\beta_{\rm opt}>-1.0\) (redder than the SFGs) as BluDOG-like objects, and 18 of 29 JWST-EROs satisfy these criteria. These BluDOG-like objects are located in different areas in the \(\beta_{\rm UV}\)-\(\beta_{\rm opt}\) diagram from
the SFGs and Type-1 QSO, which are common objects in \(z>3\). Even if we consider SFGs and Type-1 QSOs with dust reddening, it is difficult to reproduce their red optical and blue UV colors simultaneously, as previously reported (e.g., Akins et al., 2023; Barro et al., 2023; Labbe et al., 2023).
### Distribution in \(M_{\rm BH}\) vs. \(L_{\rm bol}\) Diagram
To compare the physical parameters of JWST-EROs and BluDOGs, we examine their distribution in the diagram of \(M_{\rm BH}\) vs. \(L_{\rm bol}\) (Figure 3). Note that \(M_{\rm BH}\) and \(L_{\rm bol}\) are estimated in different ways for JWST-EROs and BluDOGs explained in Section 2. Given the dusty nature of JWST-EROs, the H\(\alpha\) based estimations of Matthee et al. (2023) are likely to suffer from dust extinction and should be regarded as lower limits. As shown in Figure 3, JWST-EROs and BluDOGs have \(\log_{10}(M_{\rm BH}/M_{\odot})\) of 7-8 and 8-9, respectively, indicating that the JWST-ERO systems are one order of magnitude smaller than the BluDOG systems. The bolometric luminosities exhibit two orders of magnitude difference as \(\log_{10}(L_{\rm bol}/L_{\odot})\) of 11-11.5 for JWST-EROs and 12.5-14.0 for BluDOGs. If dust extinction correction is included in the JWST-ERO estimations, these differences should become smaller. For the Eddington ratio, in which the effect of dust extinction is cancelled, we observe an order of magnitude difference in the Eddington ratios of JWST-EROs and BluDOGs as \(\sim 0.2\) and \(>1\), respectively. The Eddington ratios of JWST-EROs are similar to those of blue (normal) luminous QSOs at \(z_{\rm spec}<5\) (\(\log_{10}(M_{\rm BH}/M_{\odot})=\)8-10, \(\log_{10}(L_{\rm bol}/L_{\odot})=\)12.5-14.0: Shen et al., 2011). Therefore, the BluDOGs are more actively accreting supermassive blackhole (SMBH) systems than JWST-EROs and normal QSOs. Conversely, JWST-EROs appear to be a more common type of AGN than BluDOGs.
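For reference, the Eddington ratio used throughout is \(\lambda_{\rm Edd}=L_{\rm bol}/L_{\rm Edd}\) with \(L_{\rm Edd}\simeq 1.26\times 10^{38}\,(M_{\rm BH}/M_{\odot})\) erg s\({}^{-1}\). The short sketch below evaluates it for round-number masses and luminosities broadly representative of the two populations; the inputs are illustrative, not the measured values of any individual object.

```python
L_SUN = 3.828e33  # erg/s

def eddington_ratio(log_mbh_msun, log_lbol_lsun):
    """lambda_Edd = L_bol / L_Edd, with L_Edd = 1.26e38 * (M_BH / M_sun) erg/s."""
    l_bol = 10.0**log_lbol_lsun * L_SUN    # erg/s
    l_edd = 1.26e38 * 10.0**log_mbh_msun   # erg/s
    return l_bol / l_edd

# Illustrative round numbers only.
print("JWST-ERO-like:", eddington_ratio(7.5, 11.3))  # ~0.2
print("BluDOG-like:  ", eddington_ratio(8.5, 13.3))  # ~2
```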
## 4 Discussion
### Number densities of JWST-EROs and BluDOGs
The number densities of JWST-EROs and broad-line H\(\alpha\) emitters are reported as \(\sim 4\times 10^{-5}\) comoving Mpc\({}^{-3}\)(Barro et al., 2023) and \(\sim 1\times 10^{-5}\) comoving Mpc\({}^{-3}\)(Matthee et al., 2023), respectively. These densities are similar to those of faint X-ray AGNs reported by Giallongo et al. (2015, 2019). BluDOGs are significantly rarer than JWST-EROs: only 8 of 571 DOGs were found
Figure 1: Left panel: Comparison of individual SEDs of eight EROs (red stars) at \(5<z<8\) found with JWST (Barro et al., 2023) and eight BluDOGs (gray circles) at \(2<z<3\)(Noboriguchi et al., 2022). The filled and open symbols represent objects with spec-\(z\) and photo-\(z\), respectively. Right panel: Comparison of average SEDs of JWST-EROs and BluDOGs. The red and orange stars represent the averages of JWST-EROs with \(z_{\rm photo}=5.5\) and 7.5, respectively. The gray filled circles represent the average of BluDOGs with spec-\(z\). Both galaxy populations share the characteristic bimodal colors: blue UV and extremely red optical colors. All flux densities are normalized by that at the rest-frame 0.4 \(\mu\)m.
in a survey area of 105 deg\({}^{2}\) (Noboriguchi et al., 2019), corresponding to a number density of \(\sim 7\times 10^{-9}\) comoving Mpc\({}^{-3}\) if we assume a redshift range of \(2<z<3\). The number density of DOGs is \(\sim 3\times 10^{-7}\) comoving Mpc\({}^{-3}\) (Toba et al., 2015; Noboriguchi et al., 2019). There is a population of dust-obscured AGNs \(\sim 10\) times brighter than DOGs, called HotDOGs (Wu et al., 2012; Eisenhardt et al., 2012; Assef et al., 2015). Its number density seems significantly smaller: only \(\sim 1000\) are expected over the whole sky, corresponding to a number density \(>2\) orders of magnitude smaller than that of DOGs, \(\sim 10^{-9}\) comoving Mpc\({}^{-3}\), if we assume a redshift range of \(1<z<4\) (Eisenhardt et al., 2012; Assef et al., 2015).
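These number densities follow from dividing the source counts by the comoving volume of the survey; a sketch of that arithmetic for the BluDOG case (8 objects over 105 deg\({}^{2}\), assuming \(2<z<3\) and the cosmology adopted in this Letter) is:

```python
import numpy as np
import astropy.units as u
from astropy.cosmology import FlatLambdaCDM

cosmo = FlatLambdaCDM(H0=70, Om0=0.3)  # cosmology adopted in this Letter

def number_density(n_obj, area_deg2, z_min, z_max):
    """Comoving number density for n_obj sources in area_deg2 over z_min < z < z_max."""
    area_fraction = (area_deg2 * u.deg**2).to(u.sr) / (4 * np.pi * u.sr)
    volume = (cosmo.comoving_volume(z_max) - cosmo.comoving_volume(z_min)) * area_fraction
    return (n_obj / volume).to(u.Mpc**-3)

# 8 BluDOGs in 105 deg^2 at 2 < z < 3: roughly 7e-9 Mpc^-3.
print(number_density(8, 105.0, 2.0, 3.0))
```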
The difference in the number densities is reasonable if we consider the difference in the luminosities and SMBH masses of these populations of AGNs. JWST-EROs have \(\sim 10\) times fainter luminosities and smaller SMBH masses than BluDOGs/DOGs. Conversely, HotDOGs are \(\sim 10\) times more luminous and have larger SMBH masses than BluDOGs/DOGs. However, it is difficult to quantitatively compare their number densities as a function of luminosity or SMBH mass further because these populations have different selection methods and redshift ranges. It would be highly interesting to develop a homogeneous selection method for these AGNs, to construct a statistical sample of them across cosmic time, and to discuss their evolution.
### Blue-Excess Fraction
The fraction of objects with excess UV emission varies significantly between dust-obscured AGN populations. We call this the blue-excess fraction. Although Barro et al. (2023) only applied a single color criterion of \(F277W-F444W>1.5\), the 37 selected objects have a flat UV color of \(-0.5<F150W-F200W<0.5\)
Figure 3: Diagram of SMBH mass vs. bolometric luminosity. The gray and magenta plots denote BluDOGs from Noboriguchi et al. (2022) and broad-line H\(\alpha\) emitters from Matthee et al. (2023), respectively. The blue-dashed lines represent a constant Eddington ratio of \(\lambda_{\rm Edd}=0.01\), \(0.1\), \(1.0\), and \(10.0\).
Figure 2: \(\beta_{\rm UV}\) vs. \(\beta_{\rm opt}\) diagram. The magenta, red, and purple stars represent JWST-EROs from Matthee et al. (2023), Barro et al. (2023), and Furtak et al. (2023b), respectively. The filled stars have \(\beta_{\rm UV}<-1.5\) and \(\beta_{\rm opt}>-1.0\), and the open stars have \(\beta_{\rm UV}>-1.5\) or \(\beta_{\rm opt}<-1.0\). The gray filled and open circles represent BluDOGs with spec-\(z\) and photo-\(z\) from Noboriguchi et al. (2022), respectively. The orange circle represents Type-1 QSO from the SWIRE template library (Polletta et al., 2007). The green triangle, square, and pentagon plots denote templates of star-forming galaxies with metallicity of 0.2 Solar value and a constant SFR with durations of 10, 100, and 500 Myr, respectively (Inoue, 2011). The black solid and dashed lines denote \(\beta_{\rm opt}=-1.0\) and \(\beta_{\rm UV}=-1.5\), respectively. The black arrow represents a reddening vector with \(E(B-V)=0.3\) for the Calzetti law (Calzetti et al., 2000).
Thus, their JWST-EROs have a blue-excess fraction of 100%. Matthee et al. (2023) constructed broad-line H\(\alpha\) emitters without any color selection. As depicted in Figure 2, among their 20 H\(\alpha\) emitters, we have identified 17 objects with \(\beta_{\rm opt}>-1\), i.e., objects as red in the rest-frame optical as JWST-EROs of Barro et al. (2023). 11 of 17 objects have blue UV slopes as \(\beta_{\rm UV}<-1.5\), which are as blue in the rest-frame UV as JWST-EROs and BluDOGs. Therefore, the blue-excess fraction is \(\sim 2/3\). Conversely, Noboriguchi et al. (2019) identified only 8 BluDOGs among 571 DOGs, resulting in the blue-excess fraction of \(\sim 1\%\). Assef et al. (2016) reported the blue-excess fraction of \(\sim 8\%\) among HotDOGs with \(W4<7.4\) mag (Vega) even though it is still uncertain because of the complex selection function. They also noted that the fraction can be smaller for their entire HotDOG sample because they have found only 2 blue-excess HotDOGs with \(W4>7.4\) mag (Vega).
The significantly large blue-excess fraction (\(\sim 2/3\)) in the broad-line H\(\alpha\) emitters of Matthee et al. (2023) can be explained by selecting H\(\alpha\) emission. The dustiness of the objects is limited to the level of observability at the optical wavelength. However, the selection by Barro et al. (2023) was only the red optical color, and the 100% blue-excess fraction was striking. Conversely, DOGs/HotDOGs are selected by their extremely red color due to MIR excess emission, and these AGNs can be more dust-rich, resulting in smaller blue-excess fractions. However, because of the different sample selections and blue-excess criteria, comparing the blue-excess fraction is complicated. It is crucial to examine the blue-excess fraction under homogeneous sample selection and the criterion of the blue-excess in the future.
Quantifying the blue-excess fraction enables us to discuss not only the physical origin of the blue-excess emission but also the structure of the AGN core. For example, Assef et al. (2022) reported a high linear polarization degree of 10.8% in the excess UV emission from a HotDOG W0116\(-\)0505, demonstrating that the blue-excess is produced by scattering. That is, the observer's line of sight to the central nucleus is through the dusty torus, the so-called Type-2 line of sight, but the UV radiation from the accretion disk and broad-line regions is scattered by something above the torus and reaches the observer. In this case, the blue-excess fraction is related to the opening angle of the dusty torus and the covering fraction of the scattering media. Alternatively, the dusty medium can completely cover the nucleus, but there are holes through which the blue-excess emission passes. Scattering on the walls of the holes may also occur. In this case, the blue-excess fraction is related to the covering fraction of the holes.
### Average UV Spectrum of BluDOGs
The remarkable similarity between JWST-EROs and BluDOGs suggests the usefulness of an average spectrum of BluDOGs as a possible template UV spectrum for JWST-EROs. We show average rest-UV spectra of BluDOGs based on the four spectra reported by Noboriguchi et al. (2022) in Figure 4. After correcting the BluDOG spectra for dust reddening \(E(B-V)\) and normalizing them using the flux density at 1750 A, we have calculated the average spectrum. When we correct the BluDOG spectra, we have adopted the Calzetti law (Calzetti et al., 2000). As the sample number is restricted to four, the short/long wavelength parts are based only on one or two objects.1 In Figure 4, we also show the quasar average spectra (Vanden Berk et al., 2001) without dust reddening as references.
Footnote 1: Each spectral section is based on the objects listed as follows: \(\lambda_{\rm rest}=1210\)–1360 Å based on J1443, \(\lambda_{\rm rest}=1360\)–1490 Å based on J1202 and J1443, \(\lambda_{\rm rest}=1490\)–1850 Å based on J0907, J1202, J1207, and J1443, \(\lambda_{\rm rest}=1850\)–2080 Å based on J0907, J1202, and J1207, \(\lambda_{\rm rest}=2080\)–2270 Å based on J0907 and J1207, and \(\lambda_{\rm rest}=2270\)–2880 Å based on J0907.
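A minimal sketch of the stacking procedure, assuming the Calzetti et al. (2000) curve (only its short-wavelength branch, valid below 6300 Å, is coded here) and treating the input spectra, \(E(B-V)\) values, and wavelength grid as placeholders, is:

```python
import numpy as np

def calzetti_k(wavelength_angstrom):
    """Calzetti et al. (2000) attenuation curve k(lambda) for 0.12-0.63 micron."""
    lam = np.asarray(wavelength_angstrom) / 1e4  # micron
    return 2.659 * (-2.156 + 1.509 / lam - 0.198 / lam**2 + 0.011 / lam**3) + 4.05

def deredden(wave, flux, ebv):
    """Correct a rest-frame spectrum for dust attenuation, A_lambda = k(lambda) * E(B-V)."""
    return flux * 10.0**(0.4 * ebv * calzetti_k(wave))

def stack_spectra(wave_grid, spectra, ebv_values, norm_wave=1750.0):
    """Deredden, normalize at norm_wave (Angstrom), and average the available spectra."""
    normed = []
    for (wave, flux), ebv in zip(spectra, ebv_values):
        corrected = np.interp(wave_grid, wave, deredden(wave, flux, ebv),
                              left=np.nan, right=np.nan)
        corrected /= np.interp(norm_wave, wave_grid, corrected)
        normed.append(corrected)
    # Wavelength sections covered by fewer objects simply average fewer spectra.
    return np.nanmean(np.vstack(normed), axis=0)
```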
The BluDOG spectrum exhibits extremely strong emission lines compared with the average spectrum of quasars. The rest-frame EWs of the C iv line of BluDOGs range from 100 to 200 A (Noboriguchi et al., 2022), whereas the SDSS quasars in the similar luminosity range show EWs well smaller than 100 A (Shen et al., 2011). The C iv EWs are 2-4\(\sigma\) above the C iv EW distribution of SDSS quasars. Such large EWs are also observed in the blue-excess HotDOG W0116\(-\)0505 reported by Assef et al. (2020). Although the physical origin of the large EWs is still unclear, a possible scenario is different dust attenuation amounts for the accretion disk (i.e. continuum) and the broad-line regions (i.e. emission lines). Namely, the accretion disk is more heavily obscured than the broad-line regions, leading to the fainter continuum and the larger emission line EWs. As an extreme case of this scenario, we may consider that only the continuum suffers from dust reddening but the emission lines do not. Then, if we apply dust correction only for the continuum, the C iv EWs are reduced to \(\approx 60\) A, which is consistent with the average value of the SDSS quasars for the luminosity range. This is indicative, albeit an extreme case.
We may expect the spectrum of the rest-UV blue-excess of JWST-EROs to be similar to that of BluDOGs, which is composed of a moderately reddened continuum from the AGN accretion disk and broad emission lines with extremely large EWs. However, Furtak et al. (2023) and Kokorev et al. (2023) showed the spectra
of two JWST-ERO-like objects without strong metal emission lines in the rest-frame UV. Langeroodi et al. (2023) reported the UV to optical spectra of some of the JWST-ERO sample of Labbe et al. (2023), showing no strong metal emission lines in UV, either, in addition to a report of three brown dwarfs in the Milky Way as contaminants (see also Burgasser et al., 2023). Although a systematic and statistical sample of the UV spectra of JWST-EROs is required for a firm conclusion in the future, these initial results show a difference in the UV metal emission line strength between JWST-EROs and BluDOGs, implying differences in the physical properties of the AGN systems, such as metallicity.
### An Attempt to Locate the Dusty AGN Populations in a Coherent Picture
We have discussed three dusty AGN populations of JWST-EROs, DOGs, and HotDOGs and their blue-excess emissions in the rest-frame UV range. Here, we attempt to consider the relation between these dusty AGN populations. As shown in Figure 3, the SMBH masses in JWST-EROs are \(\log_{10}(M_{\rm BH}/M_{\odot})\sim 7.5\), and those in BluDOGs are \(\log_{10}(M_{\rm BH}/M_{\odot})\sim 8.5\). The SMBH masses of HotDOGs are close to or greater than \(\log_{10}(M_{\rm BH}/M_{\odot})\sim 9\)(Wu et al., 2018). Thus, these dusty AGN populations are different on the SMBH mass scale. However, the overall SEDs of JWST-EROs are similar to those of BluDOGs and blue-excess HotDOGs, indicating physical similarity. Therefore, we propose a unified picture for the three dusty AGN populations in the gas-rich major merger scenario of AGN formation (Hopkins et al., 2006), as shown in Figure 5.
A major merger event triggers intense star formation that is heavily obscured by the dusty interstellar medium (Dusty SF-phase). For example, gas feeding onto the central SMBH can occur because of the radiation drag effect during a strong dusty starburst (Umemura, 2001), which ignites the AGN still obscured by the surrounding dusty medium (Dusty AGN-phase). As a feedback effect by the AGN, the outflow partly clears the dusty medium and a part of the UV emission escapes through the medium, producing the blue-excess emission (Dusty outflow phase). Finally, a significant part of the dusty medium is cleared, and the central AGN can be observed as an unobscured quasar (Quasar phase). In this scenario, dusty AGNs with blue-excess such as BluDOGs and JWST-EROs with \(\beta_{\rm UV}<-1.5\) are observed in the Dusty outflow phase.2
Footnote 2: As described in Section 1, BluDOGs exhibit an evidence of outflows in their spectra (Noboriguchi et al., 2022). It is unclear whether DOGs without blue-excess exhibit outflows or not due to the lack of spectroscopic information. However, if outflows produce less-dusty channels, the blue-excess would be observed through the channels. Therefore, in Figure 5, we simply consider that the blue-excess is only observable in the outflow phase.
As discussed in Section 4.2, the blue-excess fraction can differ significantly between the three dusty AGN populations, although the selection method and criterion of the blue-excess are not yet homogeneously defined. The different blue-excess fraction implies differences in the evolutionary time scale, outflow efficiency, or the geometry of the dusty medium around the nucleus, as discussed in Section 4.2. In Figure 5, we note an equation describing the blue-excess fraction, which is summarized as a combination of the covering fraction of dusty media and the time-scales of the outflow and DOG phases.
Figure 4: Average spectrum of four BluDOGs (Noboriguchi et al., 2022). The blue line represents the dust-corrected average spectrum of BluDOGs (i.e., \(E(B-V)=0.00\)), and the red line represents the composite quasar spectrum (Vanden Berk et al., 2001). The spectra are normalized by the flux density at 1750 Å. Black arrows and text denote the detected major emission lines. The average BluDOG spectra are available online.
If the JWST-EROs are in the Dusty outflow phase, there can be a significant number of galaxies in the Dusty SF and AGN phases without blue-excess in the epoch of cosmic dawn. However, the current estimate of the blue-excess fraction in JWST-EROs is \(\sim 1\), implying that there is not much room for the dusty SF/AGN phases. The JWST-EROs are the AGN population in the very early Universe such as \(z>5\). Possible lower metallicity and a smaller dust-to-gas ratio in the interstellar medium may reduce the covering fraction of the dusty media and/or shorten the time scale of the DOG phase, resulting in a larger blue-excess fraction. However, the time scale of the DOG phase remains uncertain. Yutani et al. (2022) suggested the entire time scale to be 4 Myrs from their SED simulation of DOGs based on the post-processed radiation transfer calculation after hydrodynamic simulations of a major merger event. It will be interesting to examine any metallicity dependence on the time scale in future work.
Less massive AGNs similar to JWST-EROs should exist at cosmic noon, when DOGs/HotDOGs are observed. These less massive AGNs have been missed thus far because of the limited sensitivity of the MIR survey data based on _WISE_ and the limited survey areas of Spitzer/MIPS 24 \(\mu\)m imaging. For example, JWST-EROs are 100 times fainter than BluDOGs in bolometric luminosity, corresponding to a 5 mag difference. JWST MIRI survey data would be useful to search for the cosmic noon counterpart of high-\(z\) JWST-EROs, but the limited survey area would be a bottleneck. For wide-field imaging surveys, the Euclid and Roman Space Telescope will emerge in the near future. However, their wavelength coverage is limited to wavelengths \(<2~{}\mu\)m, which would not be sufficiently long to sample JWST-ERO-like objects. A Japanese future space mission concept, GREX-PLUS (Inoue et al., 2022; GREX-PLUS Science Team et al., 2023), will conduct a 1,000 deg\({}^{2}\) imaging survey with wavelength coverage up to 8 \(\mu\)m. Because the GREX-PLUS survey depths are \(>5\) mag deeper than those of _WISE_ used in Noboriguchi et al. (2019), a significant number of DOGs as faint as JWST-EROs, including optically dark AGNs (e.g., Toba et al., 2020), would be detected.
The authors gratefully acknowledge the anonymous referee for a careful reading of the manuscript and very helpful comments. This work was supported by JSPS KAKENHI Grant Numbers JP23H00131 (A.K.I.), JP23H01215 (T.N.), JP22H01266 (Y.T.), and 21H01126 (T.M.). astropy (Astropy Collaboration et al., 2013, 2018, 2022)
|
2305.18217
|
The Optical Phase Curves of CoRoT-1 b
|
Of the three space telescopes launched so far to survey transiting extrasolar
planets, CoRoT is unique in that it was the only one with spectral resolution,
allowing for an extraordinary opportunity to study the reflective properties of
exoplanets at different wavelengths. In this work, I present a systematic
lightcurve analysis of the white-light and chromatic CoRoT lightcurves of
CoRoT-1 in order to search for the secondary eclipse and orbital phase
variation of the transiting extrasolar planet CoRoT-1 b, as well as to search for
any chromatic difference in the aforementioned effects. I manage to detect a
significant secondary eclipse in the white lightcurve, and detect the eclipse
marginally in all three of the color channels. However, I am only able to
significantly detect the planetary phase variation in the red channel
lightcurve. The retrieved secondary eclipse depth is higher in the blue and
green channels compared to the white and red, suggesting that CoRoT-1 b has a
higher geometric albedo at shorter wavelengths. I also attempt to detect the
secondary eclipse using TESS, but show that the available volume and precision
of the data are not high enough to allow detection of the secondary eclipse.
|
Andrew Li
|
2023-05-25T16:48:32Z
|
http://arxiv.org/abs/2305.18217v1
|
# The Optical Phase Curves of CoRoT-1 b
###### Abstract
Of the three space telescopes launched so far to survey transiting extrasolar planets, CoRoT is unique in that it was the only one with spectral resolution, allowing for an extraordinary opportunity to study the reflective properties of exoplanets at different wavelengths. In this work, I present a systematic lightcurve analysis of the white-light and chromatic CoRoT lightcurves of CoRoT-1 in order to search for the secondary eclipse and orbital phase variation of the transiting extrasolar planet CoRoT-1 b, as well as to search for any chromatic difference in the aforementioned effects. I manage to detect a significant secondary eclipse in the white lightcurve, and detect the eclipse marginally in all three of the color channels. However, I am only able to significantly detect the planetary phase variation in the red channel lightcurve. The retrieved secondary eclipse depth is higher in the blue and green channels compared to the white and red, suggesting that CoRoT-1 b has a higher geometric albedo at shorter wavelengths. I also attempt to detect the secondary eclipse using TESS, but show that the available volume and precision of the data are not high enough to allow detection of the secondary eclipse.
## 1 Introduction
Since the commencement of observations in February 2006, the Convection, Rotation, and planetary Transits satellite (CoRoT) continuously monitored several fields of the night sky for six years in order to search for transiting extrasolar planets, becoming the first space telescope of its kind launched for such a purpose. One of the advantages of such continuous monitoring of transiting exoplanet systems is that it allows for high precision photometry using relatively small optical components, with the high optical precision being achieved by the large volume and time baseline of the data. As a result, the study of the optical phase curves of transiting hot Jupiter-type exoplanetary systems, which require a high photometric precision, has benefited enormously from the advent of CoRoT, and its successor survey telescopes, Kepler and TESS. Specifically, observations of the planetary secondary eclipse and phase curve are only possible with the photometric precision afforded by space telescopes.
Phase curves of exoplanets usually are studied at infrared wavelengths, both because the phase curve has a higher amplitude at infrared wavelengths, and the fact that observing the infrared phase curves of exoplanets allows for the modelling of the planet's atmospheric structure and composition. However, at optical wavelengths, the phase curve instead provides information about the reflective properties of the exoplanet, including the geometric albedo of the planet and the homogeneity or existence of clouds in the planet atmosphere (Shporer and Hu (2015)).
However, one measurement at one wavelength is usually not enough to model or constrain the reflective and thermal properties of an exoplanet. Because of this, numerous telescopes from both space and the ground have been used to study the secondary eclipses and phase curves of hot Jupiters in the infrared, allowing for the detailed modelling of exoplanet thermal emission spectra.
The same cannot be said for visible-light measurements, where detailed multi-wavelength secondary eclipse observations of hot Jupiters in the optical are few and far between, especially at wavelengths shorter than 500 nm. At the moment, detailed secondary eclipse observations of exoplanets shortward of 500 nm have only been conducted for nine\({}^{1}\) exoplanets, with only one of these attempts resulting in a successful detection.
Footnote 1: The author is aware of four additional exoplanets with unpublished secondary eclipse observations shortward of 500 nm, including a previous analysis of CoRoT-1 b.
CoRoT-1 b is an exoplanet in a unique position ripe for chromatic analysis of its secondary eclipses and planetary phase curves. It possesses an orbital configuration that makes the expected eclipse depth relatively large,
and the continuous white-light and chromatic monitoring of the host star by CoRoT means that there is ample data to make a detection of the eclipse and phase curve. Indeed, previous attempts to detect the secondary eclipse have proven that the eclipse depth and phase curve is large and easily detectable with the CoRoT data (Alonso et al. (2009), Snellen et al. (2009)). Taking all of the factors that make this planet conducive to study of its eclipse and phase curve, I endeavor to try and detect these effects in the white-light and chromatic CoRoT lightcurves.
In addition to the CoRoT lightcurves of CoRoT-1, CoRoT-1 and its transiting planet have also been observed by the Transiting Exoplanet Survey Satellite (TESS). In an effort to add another data point to the chromatic eclipse depths of CoRoT-1 b, I also attempt to detect the eclipse in TESS data.
## 2 Methods
I use the public N2 CoRoT data for CoRoT-1 in .tbl format, as provided by the NASA Exoplanet Archive. The white-light and chromatic lightcurves are then extracted from the lightcurve file. To allow a uniform analysis of all of the lightcurve, the section with 32s cadence was split off from the previous section with 512s cadence, and binned into bins of 16 points to match the rest of the data. The resulting lightcurve consists of 7117 datapoints spread across 54.72 days and 36.26 planetary orbits.
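The rebinning of the 32s section amounts to a reshape-and-average; a minimal sketch, assuming plain NumPy arrays already restricted to the 32s segment, is:

```python
import numpy as np

def bin_lightcurve(time, flux, points_per_bin=16):
    """Average consecutive groups of points (16 x 32 s = 512 s) to match the long cadence."""
    n = (len(time) // points_per_bin) * points_per_bin  # drop any incomplete final bin
    binned_time = time[:n].reshape(-1, points_per_bin).mean(axis=1)
    binned_flux = flux[:n].reshape(-1, points_per_bin).mean(axis=1)
    return binned_time, binned_flux
```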
The first and most obvious systematic effect in the CoRoT lightcurves is the presence of numerous flux discontinuities, or "jumps", which are caused by the impact of high-energy particles onto the spacecraft's CCD detectors. I designed a fairly basic but effective way of removing these discontinuities from the lightcurve by taking the average flux of the lightcurve in the 0.5 days before and after each flux jump and subtracting the difference of the two means from the part of the lightcurve that occurs after the jump. The resulting lightcurves were much smoother than the originals.
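A sketch of this jump correction, assuming the jump epochs have already been identified (the detection step itself is not shown), is:

```python
import numpy as np

def correct_jumps(time, flux, jump_times, window=0.5):
    """Shift the post-jump part of the lightcurve by the difference between the
    mean fluxes measured `window` days before and after each jump."""
    flux = flux.copy()
    for t_jump in jump_times:
        before = (time > t_jump - window) & (time < t_jump)
        after = (time >= t_jump) & (time < t_jump + window)
        if before.any() and after.any():
            offset = flux[after].mean() - flux[before].mean()
            flux[time >= t_jump] -= offset
    return flux
```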
For the TESS lightcurves, I use the SPOC 120s cadence Sector 6 and 33 lightcurves as provided by the download function of the Python package lightkurve (Lightkurve Collaboration et al. (2018)). I extracted the PDC lightcurve from these files because, compared to the SAP lightcurve, the instrumental effects impacting the SAP lightcurve have been removed in the PDC lightcurve. The lightcurves from each of the two sectors were then joined and analyzed together. Because the time gap between the two sectors' data is large, I do not expect this action to significantly affect the detrending process or the end results of the analysis. The resulting lightcurve contains 30298 data points, is nearly continuous over 47.61 days, and covers 31.55 planetary orbits, with a large gap in time between the Sector 6 and Sector 33 data.
The TESS and CoRoT lightcurves were then detrended using the Python package wotan (Hippke et al. (2019)). In the case of the CoRoT data, I used a biweight filter with a window size equal to that of the planetary orbital period. This is chosen specifically because, in this way, the detrending algorithm will detrend out all long term trends across longer timescales than the planetary orbital period while keeping any possible planetary phase variation in the detrended lightcurve. In the case of the TESS data, I initially also processed
\begin{table}
\begin{tabular}{c c c} \hline \hline \multicolumn{1}{c}{ Planet} & \(\delta_{occ}\) (ppm) & Reference \\ \hline CoRoT-2 b & - & Snellen et al. (2010) \\ HD 189733 b & \(126^{+36}_{-37}\) & Evans et al. (2013) \\ KELT-9 b & \(-71\pm 84\) & Hooton et al. (2018) \\ TrES-3 b & \(80\pm 90\) & Mallonn et al. (2022) \\ WASP-12 b & \(59\pm 134\) & Sing et al. (2019) \\ WASP-19 b & \(10\pm 280\) & Burton et al. (2012) \\ WASP-43 b & \(<860(3\sigma)\) & Chen et al. (2014b) \\ & \(-70\pm 110\) & Mallonn et al. (2022) \\ WASP-46 b & \(490\pm 300\) & Chen et al. (2014a) \\ WASP-103 b & \(200\pm 160\) & Mallonn et al. (2022) \\ \hline \end{tabular} Note. – Snellen et al. (2010) did not detect the secondary eclipse of CoRoT-2 b in the blue CoRoT channel, but did not provide an upper limit value.
\end{table}
Table 1: Previous Secondary Eclipse Measurements with \(\lambda<500nm\)
\begin{table}
\begin{tabular}{c c c} \hline \hline \multicolumn{1}{c}{ Parameter} & Value & Reference \\ \hline \(a/R_{*}\) & 4.751 & von Essen et al. (2019) \\ \(R_{P}/R_{*}\) & 0.1419 & von Essen et al. (2019) \\ \(P\) & 1.508968772 d & Ivshina and Winn (2022) \\ \(T_{*}\) & 6355 K & Stassun et al. (2019) \\ \(T_{P,day}\) & 2279 K & Deming et al. (2023) \\ \hline \end{tabular} Note. – The cited \(T_{P,day}\) value is the average of the brightness temperatures in the 3.6 and 4.5 micron bands.
\end{table}
Table 2: Adopted Planetary Parameters
the TESS data in the same manner, but discovered that, because of the faintness of the host star, correlated noise dominated the resulting lightcurve and its analysis. As a result, I decided to utilize a smaller window of about 0.7 days. This was too small to detrend over the planetary transits properly, so the transits in the TESS lightcurves were removed before the detrending process was applied. This means that the phase curve will be detrended out of the lightcurve, and that I will no longer be able to search for it in the TESS data. However, the secondary eclipse should still be present, since it is a high frequency feature on a scale much smaller than the detrending window.
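For illustration, the detrending step can be written along the following lines with wotan; the transit masking for the TESS case is only sketched, and the mid-transit epoch and half-duration in phase are placeholders.

```python
import numpy as np
from wotan import flatten

P_ORB = 1.508968772  # planetary orbital period in days (Table 2)

def detrend_corot(time, flux):
    """Biweight detrending with a window of one orbital period, so that long-term
    trends are removed while any phase-curve signal is preserved."""
    return flatten(time, flux, method='biweight', window_length=P_ORB, return_trend=True)

def detrend_tess(time, flux, t0, half_duration_phase=0.03):
    """Mask the transits, then detrend with a short 0.7 day biweight window
    (the phase curve is sacrificed, but the narrow eclipse survives)."""
    phase = ((time - t0) / P_ORB) % 1.0
    out_of_transit = np.abs(phase - np.round(phase)) > half_duration_phase
    flat, trend = flatten(time[out_of_transit], flux[out_of_transit],
                          method='biweight', window_length=0.7, return_trend=True)
    return time[out_of_transit], flat, trend
```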
The resulting TESS and CoRoT lightcurves were then \(\sigma\)-clipped to 3\(\sigma\), which had the effect of clipping out not only any possible outliers, but also the planetary transits in the CoRoT lightcurves as well. After detrending was performed, it became obvious that several low-level flux modulations and residual incompletely removed discontinuities remained. I tested a variety of methods to remove these effects, but, in accordance with the methods of previous analyses such as Snellen et al. (2009), I found that the best way to account for these effects was to just remove them from the lightcurve altogether. To do this, I constructed a "blacklist" of ranges of data points, determined by a visual inspection of the data, and then automatically removed these sections from the sigma-clipped data. In the end, the red and green channel CoRoT lightcurves, as well as the TESS lightcurve, did not have a blacklist, meaning that I did not find any portions of those respective lightcurves that warranted removal.
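The clipping and blacklist steps reduce to boolean masks; a minimal single-pass sketch (a full treatment would iterate the clipping), with the blacklist passed in as (start, end) time intervals like those of Table 3, is:

```python
import numpy as np

def clean_lightcurve(time, flux, blacklist, sigma=3.0):
    """Sigma-clip the flux and drop all points inside the blacklisted time intervals."""
    keep = np.abs(flux - np.nanmedian(flux)) < sigma * np.nanstd(flux)
    for t_start, t_end in blacklist:
        keep &= ~((time >= t_start) & (time <= t_end))
    return time[keep], flux[keep]

# Example blacklist: the first two white-channel intervals of Table 3.
white_blacklist = [(2594.0, 2596.2), (2598.5, 2599.0)]
```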
After the previously mentioned processing steps, the TESS and CoRoT white lightcurves are ready to be analyzed for the presence of planetary phase variations and secondary eclipses. However, the CoRoT color lightcurves need additional processing. This is because the color channel lightcurves, unlike the white lightcurve, are not corrected for systematics caused by pointing drifts of the telescope. Two main systematic effects are visible in the color channel lightcurves: a 103-min high-amplitude variation at the satellite orbital period, and a 24-hour low-amplitude sinusoidal variation. I remove these variations in largely the same way as is done by Snellen et al. (2009). A sinusoidal function with a period of 24 hours was fitted to the data, with the resulting lightcurve being the residuals of the fit. The data was then phase-folded to the 103-min orbital period of the satellite and boxcar-smoothed with a window of 8 datapoints. The resulting curve was then used to detrend the entire lightcurve so that no systematic variation remained. I should note that Snellen et al. (2009) find evidence of a \(\sim\)10 day variation in the data, which they attribute to spot variability and/or stellar rotation. I do not attempt to find and remove this variability, because such a large-period variation is likely to have already been removed by the detrending algorithm, and the estimated effect of such a low-amplitude variation on the final values is ostensibly small.
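A sketch of this two-step correction is given below; the interpolation of the folded correction and the handling of the mean flux level are simplifications of the procedure described above.

```python
import numpy as np
from scipy.optimize import curve_fit

P_SAT = 103.0 / (24.0 * 60.0)  # satellite orbital period in days

def remove_channel_systematics(time, flux):
    """Remove the 24 h sinusoid and the 103 min satellite-orbit variation."""
    # Step 1: fit and subtract a sinusoid with a fixed one-day period (time in days).
    sine = lambda t, amp, phi, c: amp * np.sin(2.0 * np.pi * t + phi) + c
    popt, _ = curve_fit(sine, time, flux, p0=[np.std(flux), 0.0, np.median(flux)])
    resid = flux - sine(time, *popt) + popt[2]

    # Step 2: phase-fold on the satellite period and boxcar-smooth over 8 points.
    phase = (time / P_SAT) % 1.0
    order = np.argsort(phase)
    smooth = np.convolve(resid[order], np.ones(8) / 8.0, mode='same')

    # Subtract the folded systematic, interpolated back to each point in phase.
    correction = np.interp(phase, phase[order], smooth - np.median(smooth))
    return resid - correction
```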
After all detrending was accomplished, the lightcurves were then phase-folded to the planetary orbital period. I then tested fitting two different models to the CoRoT lightcurves - one a pure phase curve and eclipse model, and the other including the eclipse and all three phase curve effects, reflection and thermal emission, ellipsoidal variation, and Doppler beaming. Note that I do not expect to detect the ellipsoidal variation and Doppler beaming in the lightcurves, since the expected amplitude is quite small compared to the photometric precision of the lightcurves, but instead use the ellipsoidal variation and doppler beaming amplitudes as "correlated noise detrending functions". The rationale behind this is that, due to the nature of the color channel lightcurves, it is likely that significant correlated noise remains in the lightcurves. The "ellipsoidal variation" and "Doppler beaming" curves would therefore be able to eliminate the two sources of noise most likely to influence the phase curve amplitude: noise at phases 0.25 and 0.75, and noise causing the first half of the phase curve to be higher than the second half. After the fitting was complete, a visual inspection of the fit was done, and based on this visual inspection I adopted the "reflection-only" model for the white lightcurve and the red and green
\begin{table}
\begin{tabular}{l l l} \hline \hline Color & Interval & Reason for Removal \\ \hline WHITE & \{2594.0, 2596.2\} & flux jump \\ WHITE & \{2598.5, 2599.0\} & flux jump \\ WHITE & \{2622.0, 2623.0\} & undetrended flux variation \\ WHITE & \{2635.5, 2637.7\} & flux jump \\ WHITE & \{2640.0, 2640.8\} & flux jump \\ WHITE & \{2642.5, 2644.4\} & flux jump \\ BLUE & \{2594.0, 2595.0\} & flux jump \\ BLUE & \{2598.5, 2599.0\} & undetrended flux variation \\ BLUE & \{2637.0, 2637.7\} & flux jump \\ BLUE & \{2642.5, 2644.4\} & flux jump \\ \hline \end{tabular}
\end{table}
Table 3: CoRoT-1 b Lightcurve Blacklist
color lightcurves. For the blue-channel lightcurve, the "3-effect" model provided a better fit, so I adopted this model for the blue channel lightcurve instead. In the case of the TESS lightcurve, where I do not expect any contribution from a putative phase curve signal due to the nature of the lightcurve detrending, I simply adopted an eclipse-only fit.
The fitting was done using a DLS (damped least-squares) method, and the error estimation was done using a residual-shuffling bootstrap method similar to the one used in Alonso et al. (2009). I should note that such a method does not account for additional uncertainty due to correlated noise, and as a result I expect that my derived error bars are marginally underestimated. Ideally, analysis of this data should be done using an MCMC (Markov-chain Monte Carlo) parameter and uncertainty estimation method, which is more robust to different sources of uncertainty than the types of bootstrap methods described here, but unfortunately, due to the limited computational resources of the author, I am unable to perform an MCMC analysis at this time. Despite the limitations of my analysis, a visual inspection of the fitted model suggests that the analysis is robust.
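A sketch of the residual-shuffling bootstrap, with scipy's least_squares standing in for the DLS fit and the phase-curve model left as a user-supplied function, is:

```python
import numpy as np
from scipy.optimize import least_squares

def bootstrap_uncertainty(phase, flux, model, p0, n_iter=500, seed=0):
    """Refit the model on lightcurves built from the best-fit model plus shuffled
    residuals; the spread of the refitted parameters estimates their uncertainties."""
    rng = np.random.default_rng(seed)
    fit = least_squares(lambda p: model(phase, *p) - flux, p0)
    best_model = model(phase, *fit.x)
    resid = flux - best_model

    samples = []
    for _ in range(n_iter):
        fake = best_model + rng.permutation(resid)
        refit = least_squares(lambda p, f=fake: model(phase, *p) - f, fit.x)
        samples.append(refit.x)
    return fit.x, np.std(samples, axis=0)
```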
## 3 Results
From my analysis, I clearly detect the secondary eclipse in the white (\(4.44\sigma\)) channel data, and marginally detect the eclipse in the three color channels red (\(2.72\sigma\)), green (\(2.99\sigma\)), and blue (\(2.44\sigma\)). The eclipse is not detected in the TESS data, and I am only able to set a \(1\sigma\) upper limit of 223 ppm for a possible eclipse depth. As for the planetary phase variation, I clearly detect the planetary phase variation only in the red channel (\(3.01\sigma\)), with only marginal detections in the white (\(2.50\sigma\)), green (\(1.23\sigma\)), and blue (\(2.41\sigma\)) channels. As mentioned before, I do not expect to detect the planetary phase variation in the TESS lightcurve due to the nature of the detrending algorithm. According to Parviainen et al. (2013), the flux contamination in the CoRoT lightcurves is negligible, and the TESS PDC lightcurves come pre-adjusted for contamination, so no further correction was required.
To verify that the detected eclipse signal is real, I shifted the eclipse model in steps of 0.01 in phase between phases 0.25 and 0.75, and fitted each model with its own corresponding eclipse depth. The resulting array of values shows a spike in the eclipse depth at phase 0.5 and a lack of periodic variability elsewhere, suggesting that my detection of the eclipse depth is real and not due to correlated noise or undetrended systematics.
## 4 Discussion
### Significance of Results
For each of the four CoRoT color channels, I measure the significance of the differences between the phase curve amplitude and the secondary eclipse depth (or in other terms, the nightside flux) to be \(1.30\sigma\) (white), \(0.23\sigma\) (red), \(1.49\sigma\) (green), and \(0.64\sigma\) (blue). The apparently significant detection of the nightside flux in the white and green channels is probably not indicative of real nightside emission, as such emission would not be detectable at the wavelengths of the white and the green channels. Instead, the difference probably indicates that the apparent phase curve signal has been partially masked by unmodeled systematics and other effects caused by pointing drifts of the CoRoT spacecraft, and that these values of the phase curve amplitudes should not be taken at their face value. Thus, I can only confidently conclude that I have detected the phase variation in the red channel, and marginally in the blue channel, while in the white lightcurve I probably have also detected the phase variation of the planet, but that I cannot determine the true value of the phase amplitude due to residual systematic effects. In the case of the green channel, any possible phase variation has been completely wiped out by the noise in the data.
My derived white-light eclipse depth is consistent with the value found by Alonso et al. (2009), while my red channel eclipse depth is marginally larger than the value found by Snellen et al. (2009), but is still somewhat consistent. I attribute these differences to the difference in data processing.
### Solving the CoRoT Color Channels
Calculations of the wavelength ranges of the channels are required to place these measurements into their proper context. The CoRoT color channels do not correspond to any conventional photometric system, and are actually different for each star. This is because the color channels are formed by a small bi-prism that is placed in front of the Planet-Finder channel, which disperses
\begin{table}
\begin{tabular}{c c c} \hline \hline Color & \(\delta_{\rm occ}\) (ppm) & \(A_{Refl}\) (ppm) \\ \hline WHITE & 192 \(\pm\)43 & 56 \(\pm\)22 \\ RED & 147 \(\pm\)54 & 82 \(\pm\)27 \\ GREEN & 334 \(\pm\)112 & 58 \(\pm\)47 \\ BLUE & 323 \(\pm\)132 & 110 \(\pm\)46 \\ TESS & \(<223\) (\(1\sigma\)) & \\ \hline \end{tabular}
\end{table}
Table 4: CoRoT-1 b Phase Curve Fits
the light slightly. A photometric mask is then applied to the dispersed flux, and based on the relative intensities of the photons and the position of the star on the CCD, the three color channels are created. While the exact ratio of red to green to blue flux is nontrivial to calculate, Rabello Soares et al. (2022) use empirically calculated values to approximate the "average" wavelength properties of the color channels, which I use for the purposes of this analysis. To calculate the wavelength limits of the color channels, I multiply a calculated Planck curve of the star by the CoRoT CCD response function from Auvergne et al. (2009), then took the wavelength cutoff with the bluest 22% of the flux to be the blue channel cutoff, and the wavelength cutoff with the reddest 63% of the flux to be the red channel cutoff. From these calculations, I calculate the blue wavelength cutoff to be 502 nm, and the red wavelength cutoff to be 561 nm, and the corresponding effective wavelengths of the white, blue, green, and red channels to be 623 nm, 456 nm, 533 nm, and 703 nm. This is in agreement with Snellen et al. (2009), who find an effective wavelength of the red channel for CoRoT-1 of 710 nm and a cutoff of 560 nm.
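The cutoff calculation can be sketched as follows; the CCD response here is a flat placeholder rather than the Auvergne et al. (2009) curve, so the printed numbers are purely illustrative and will not reproduce the 502/561 nm values.

```python
import numpy as np

H, C, KB = 6.626e-34, 2.998e8, 1.381e-23  # SI constants

def planck_lambda(wavelength_m, temperature):
    """Blackbody spectral radiance B_lambda(T)."""
    return (2.0 * H * C**2 / wavelength_m**5
            / np.expm1(H * C / (wavelength_m * KB * temperature)))

def channel_cutoffs(t_star, wavelength_nm, response, blue_frac=0.22, red_frac=0.63):
    """Wavelengths enclosing the bluest `blue_frac` and reddest `red_frac` of the flux."""
    weighted = planck_lambda(wavelength_nm * 1e-9, t_star) * response
    cum = np.cumsum(weighted)          # uniform wavelength grid assumed
    cum /= cum[-1]
    blue_cut = np.interp(blue_frac, cum, wavelength_nm)
    red_cut = np.interp(1.0 - red_frac, cum, wavelength_nm)
    return blue_cut, red_cut

wave_nm = np.linspace(400.0, 1000.0, 601)   # rough optical bandpass
flat_response = np.ones_like(wave_nm)       # placeholder for the real CCD response
print(channel_cutoffs(6355.0, wave_nm, flat_response))
```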
It would also be appropriate to conduct an analysis of the effective wavelength of the TESS lightcurve, but given that I do not detect the secondary eclipse in the TESS data, and the fact that most other publications have not adjusted the TESS bandpass to account for the stellar spectrum, I decide not to proceed for consistency with other literature analyses of TESS eclipses. In any case, the potential impact on the geometric albedo upper limit due to this decision is expected to be small due to the relatively wide bandpass of the TESS cameras.
### A Wavelength-Dependent Albedo of CoRoT-1 b?
Provided that the dayside temperature of the planet, determined from infrared observations of the secondary eclipse, is known, one can calculate the geometric albedo (corrected for the presence of thermally emitted light) of the exoplanet under the assumption that the star and the planetary dayside radiate like black bodies using the following equation:
\[A_{g}=\delta_{occ}(\frac{a}{R_{P}})^{2}-\frac{B(\lambda,T_{P,day})}{B(\lambda, T_{*})}(\frac{a}{R_{*}})^{2} \tag{1}\]
I take the derived dayside brightness temperatures from Deming et al. (2023) and average the 3.6 micron
Figure 1: CoRoT white-light phase curve of CoRoT-1 b.
and 4.5 micron temperatures to arrive at an approximate estimate of the planetary dayside temperature to use in calculating the albedo. Thus, the thermally corrected albedos of CoRoT-1 b can be calculated.
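Equation (1) can be evaluated directly with the parameters of Table 2 at a single effective wavelength; the sketch below does so for the white-channel depth of Table 4 (the real calculation should integrate over the bandpass, so the single-wavelength number is only indicative).

```python
import numpy as np

H, C, KB = 6.626e-34, 2.998e8, 1.381e-23  # SI constants

def planck(wavelength_m, temperature):
    return (2.0 * H * C**2 / wavelength_m**5
            / np.expm1(H * C / (wavelength_m * KB * temperature)))

def geometric_albedo(depth_ppm, wavelength_m, a_over_rs=4.751, rp_over_rs=0.1419,
                     t_day=2279.0, t_star=6355.0):
    """Thermally corrected geometric albedo, Eq. (1), at one effective wavelength."""
    a_over_rp = a_over_rs / rp_over_rs
    thermal = planck(wavelength_m, t_day) / planck(wavelength_m, t_star) * a_over_rs**2
    return depth_ppm * 1e-6 * a_over_rp**2 - thermal

# White channel: 192 ppm depth at an effective wavelength of 623 nm -> roughly 0.18.
print(geometric_albedo(192.0, 623e-9))
```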
The geometric albedo of CoRoT-1 b appears to be elevated in the green and blue channels, in stark contrast to most other hot Jupiter-type planets, which typically have very low albedos (Esteves et al. (2015)). This suggests the presence of reflective clouds on the dayside of CoRoT-1 b. In addition to the elevated geometric albedo, the albedo in the green and blue channels is higher than the albedo in the red channel, as well as the overall white channel albedo (which predictably is in between the albedos of the red channel and the green and blue channels), and the 1\(\sigma\) upper limit of the albedo in the TESS bandpass. This suggests some chromatic dependence in the geometric albedo of CoRoT-1
Figure 4: CoRoT blue channel phase curve of CoRoT-1 b.
Figure 3: CoRoT green channel phase curve of CoRoT-1 b.
Figure 2: CoRoT red channel phase curve of CoRoT-1 b.
b, possibly due to Rayleigh scattering, or the inclusion of pigmenting and/or scattering compounds in the clouds. This chromatic difference in the albedo means that, to the human eye, CoRoT-1 b would likely appear to be bluish in color. However, I should caution that the limited significance of the eclipse depths and albedos I derive here means that a cloudless and unreflective atmosphere with a flat reflection spectrum remains plausible, since such a model would be consistent with my results to within 3\(\sigma\). The conclusions I draw here are only tentative, and should not be taken as a confirmation of any phenomena in the atmosphere of CoRoT-1 b.
## 5 Conclusion
In this work, I searched and analyzed white-light and chromatic CoRoT data, as well as TESS data, in an attempt to find the phase variations and secondary eclipse of CoRoT-1 b. I detected the secondary eclipse in each of the CoRoT lightcurves to better than 2.4\(\sigma\), while in the case of the phase variations I can only confidently conclude that I have detected the phase variation in the red channel, and marginally in the blue channel, while in the white and green channels, I probably have also detected the phase variation of the planet, but that I cannot determine the true value of the phase amplitude due to residual systematic effects. In the case of the TESS data, I am unable to detect the secondary eclipse, while my choice of processing method for the TESS lightcurve leaves me unable to study the presence of any possible phase variations. I find tentative evidence that the geometric albedo of CoRoT-1 b appears to be elevated and chromatically dependent, with the albedo being higher in the green and blue CoRoT data than in the white CoRoT data, red CoRoT data, and the TESS data, suggesting that the atmosphere of CoRoT-1 b may contain a reflective cloud deck and/or exhibits Rayleigh scattering. Given the marginal significance of my fitted values, I encourage future study of the CoRoT-1 system to confirm or refute these tentative results.
This research has made use of the NASA Exoplanet Archive, which is operated by the California Institute of Technology, under contract with the National Aeronautics and Space Administration under the Exoplanet Exploration Program. This research has also made use of Lightkurve, a Python package for Kepler and TESS data analysis (Lightkurve Collaboration et al. (2018)).
I would also like to thank Hippke et. al. for the development of the wotan Python package, and D.J. Dorado-Daza for the ACROM set of Python scripts for processing CoRoT data. Many of the custom Python analysis scripts used in this research were derived from these two packages, and this research would not have been possible without each of them.
Exoplanet Archive, CoRoT
astropy (Astropy Collaboration et al., 2013, 2018), wotan (Hippke et al. (2019)), ACROM (Dorado-Daza, 2018)
Figure 5: TESS white-light phase curve of CoRoT-1 b.
Figure 6: CoRoT white-light and chromatic secondary eclipse depths.
\begin{table}
\begin{tabular}{c c c} \hline \hline
Color & \(A_{g}\) & 3\(\sigma\) upper limit \\ \hline
WHITE & 0.182 \(\pm 0.048\) & - \\
RED & 0.097 \(\pm 0.061\) & \(<0.280\) \\
GREEN & 0.363 \(\pm 0.125\) & \(<0.738\) \\
BLUE & 0.359 \(\pm 0.148\) & \(<0.803\) \\
TESS & \(<0.155\) (1\(\sigma\)) & \(<0.655\) \\ \hline
\end{tabular}
\end{table}
Table 5: Geometric Albedos
|
2308.01976
|
Domain specificity and data efficiency in typo tolerant spell checkers:
the case of search in online marketplaces
|
Typographical errors are a major source of frustration for visitors of online
marketplaces. Because of the domain-specific nature of these marketplaces and
the very short queries users tend to search for, traditional spell checking
solutions do not perform well in correcting typos. We present a data
augmentation method to address the lack of annotated typo data and train a
recurrent neural network to learn context-limited domain-specific embeddings.
Those embeddings are deployed in a real-time inferencing API for the Microsoft
AppSource marketplace to find the closest match between a misspelled user query
and the available product names. Our data efficient solution shows that
controlled high quality synthetic data may be a powerful tool especially
considering the current climate of large language models which rely on
prohibitively huge and often uncontrolled datasets.
|
Dayananda Ubrangala, Juhi Sharma, Ravi Prasad Kondapalli, Kiran R, Amit Agarwala, Laurent Boué
|
2023-08-03T18:11:00Z
|
http://arxiv.org/abs/2308.01976v1
|
Domain specificity and data efficiency in typo tolerant spell checkers: the case of search in online marketplaces
###### Abstract
Typographical errors are a major source of frustration for visitors of online marketplaces. Because of the domain-specific nature of these marketplaces and the very short queries users tend to search for, traditional spell checking solutions do not perform well in correcting typos. We present a data augmentation method to address the lack of annotated typo data and train a recurrent neural network to learn context-limited domain-specific embeddings. Those embeddings are deployed in a real-time inferencing API for the Microsoft AppSource marketplace to find the closest match between a misspelled user query and the available product names. Our data efficient solution shows that controlled high quality synthetic data may be a powerful tool especially considering the current climate of large language models which rely on prohibitively huge and often uncontrolled datasets.
search relevance, synthetic data, spell checking, behavioral statistics, NLP
## 1 Introduction
One of the most common problems that users face while searching for information is typos. Typos, or typing errors, can lead to inaccurate search results and create frustration for the users. Even though search engines use complex algorithms to match the user's search terms with relevant web pages, even minor spelling errors can completely alter the search results. As such, the question of typo tolerance in search has become a major concern for both users and search engine providers alike.
We focus on situations where user queries are very domain-specific and tend to be rather short. This is a common scenario for online marketplaces where users typically search by typing in directly the name of the product they are looking for instead of a grammatically well-formed sentence.
We present a method to identify context-limited typos in domain-specific settings. Our solution can be split into three parts. First, we analyze and classify real-world typographical errors made by users on other platforms. These foundational statistics are used to generate synthetic training datasets that are specific to our target corpus of AppSource marketplace product names. Second, we use these datasets to train a multi-layer LSTM model. Using this trained model, we gather embeddings for the entire AppSource product catalog. Third, those corpus-wide embeddings are compared in real-time with the embeddings of the search query input by the user to get the closest match from product corpora.
Considering the lack of annotated typo data, our model is trained entirely on synthetically generated datasets. Through progressively more realistic versions of data augmentation strategies, our final model improves the CTR (clickthrough rate) of search results by more than 4% and decreases the rate of no search results by 8%. This lift in performance is remarkable in that the model is trained on synthetic data only.
Our model has been deployed as a real-time API consumed by the Microsoft AppSource marketplace website. AppSource (formerly known as Office Store) is Microsoft's official marketplace for business applications, add-ins, and
content packs that extend the functionality of Microsoft products such as Microsoft 365, Dynamics 365, Power BI, Azure, and more. It provides a platform for developers and partners to publish and distribute their solutions to a wide range of Microsoft customers. Using AppSource, users can discover and acquire applications and add-ins to enhance their Microsoft productivity and business solutions. These applications range from industry-specific solutions to productivity tools, analytics dashboards, project management tools, customer relationship management (CRM) systems, and more. AppSource offers a curated collection of trusted applications that have undergone a review process by Microsoft to ensure quality, security, and compatibility. Users can explore various categories, search for specific solutions, read detailed descriptions and reviews, and even try out free trial versions of the applications before making a purchase. By leveraging AppSource, businesses can extend the capabilities of Microsoft products and tailor them to their specific needs, enhancing productivity and enabling digital transformation within their organizations. For partners, AppSource represents a highly visible opportunity to showcase and sell their software solutions to a vast customer base, benefiting from the extensive reach and recognition of Microsoft's brand and ecosystem. Overall, App Source has \(\approx 23,000\) apps in its catalog and the catalog grows roughly by \(\approx 200\) apps per month.
With the deployment of our typo-tolerant spell checker, Microsoft AppSource constitutes one of the few places where non-dictionary-based typo correction systems have been deployed in a production system.
## 2 Related work
Usual spell checkers, such as those found in word processing software, rely on a dictionary-based approach to correct spelling errors. These lexicons or unigram language models compare the user's input against a pre-existing dictionary of correctly spelled words and flag words that do not match their built-in dictionaries [1, 2]. Other rule-based systems use word mismatches as defined by traditional natural language processing techniques [3, 4, 5, 6, 7] to flag potential typos. Although these approaches are well founded, they are not effective in domain-specific settings because the dictionaries used by common spell checkers are limited to commonly used words and may not include technical terms or jargon specific to a particular field [8, 9]. Typically, more modern machine learning based solutions rely on context [10, 11] to identify typos and therefore do not lend themselves to online marketplace queries, which are very short and lack surrounding context. In fact, we explicitly evaluate the performance of common spellcheckers as exemplified by techniques established in [12, 13] in Section 5 and quantify their poor performance. As noted in [14], more than 80% of errors differ from the correct word by only a single letter. Furthermore, errors can accurately be classified into just a small number of independent categories [15] and several efforts have been made towards generating datasets based on artificial grammatical mistakes [16, 17, 18, 19]. However, real-world typos do not necessarily follow those grammatical constructs [20, 21] and there arises a need to generate synthetic training datasets based on historical typo statistics from open source datasets [22, 23]. This will be the topic of sections 3 and 4 of this paper. Regarding the model architecture, our work most resembles [24] in the use of recurrent neural networks, although we train the network primarily in a self-supervised manner [25] on synthetic data to learn domain-specific embeddings [26], see Section 5.
## 3 Classification of typographical errors
Although one usually refers to typing mistakes under the umbrella term "typos", it turns out that typographical errors actually come in many different guises. Following previous classification studies [8, 14, 15, 20], we consider the set \(\mathcal{T}\) comprising the one-character typos
\[\mathcal{T}=\left\{\mathcal{T}_{\text{Deletion}},\mathcal{T}_{\text{Insertion}}, \mathcal{T}_{\text{Replication}},\mathcal{T}_{\text{Substitution}}, \mathcal{T}_{\text{Transposition}}\right\} \tag{1}\]
Using the ground-truth string \(s_{\text{gt}}=\texttt{finally}\) as an example, these \(|\mathcal{T}|=5\) error types can be illustrated as follows:
* \(\mathcal{T}_{\text{Deletion}}=\texttt{finaly}\neq s_{\text{gt}}\) ; missing one character.
* \(\mathcal{T}_{\text{Insertion}}=\texttt{fignally}\neq s_{\text{gt}}\) ; additional character.
* \(\mathcal{T}_{\text{Replication}}=\texttt{finnally}\neq s_{\text{gt}}\) ; special case of \(\mathcal{T}_{\text{Insertion}}\) when the added character is the same as its preceding character.
* \(\mathcal{T}_{\text{Substitution}}=\texttt{finslly}\neq s_{\text{gt}}\) ; character replaced by another one.
* \(\mathcal{T}_{\text{Transposition}}=\texttt{fianlly}\neq s_{\text{gt}}\) ; special case of \(\mathcal{T}_{\text{Substitution}}\) when the substituted character takes the place of its neighbor.
We are considering multiple datasets
\[\delta\in\mathcal{D}=\{\text{GitHub}\;[22],\text{Twitter}\;[23],\text{Proprietary },\cdots\} \tag{2}\]
The Twitter Typo Corpus contains \(\approx 40,000\) pairs of words with typographical errors along with the correct word representing a good variety of typographical errors commonly found in informal social media text. With about \(\approx 350,000\) edits collected from code commits, the GitHub Typo Corpus is the largest available public typo dataset. Both of these datasets, along with other proprietary ones we gathered ourselves based on the AppSource search
telemetry, are used to gather historical statistical properties that are used in our data augmentation strategies as explained in detail in the following sections.
The distribution of classes of typographical errors for a dataset \(\delta\) is denoted by the \(\mathcal{T}\)-dimensional vector
\[p_{\mathcal{T}}\langle\delta\rangle=\left[p_{\mathcal{T}_{i}}\langle\delta \rangle\mid\mathcal{T}_{i}\in\mathcal{T}\text{ and }\sum_{\mathcal{T}_{i}}p_{\mathcal{T}_{i}}\langle\delta\rangle=1\right] \tag{3}\]
where \(p_{\mathcal{T}_{i}}\langle\delta\rangle\) refers to the probability of observing error type \(\mathcal{T}_{i}\) in a dataset \(\delta\). An example of the distribution of these classes of errors can be seen in Fig 1 for the GitHub dataset.
The next step consists in classifying each typo in all datasets \(\mathcal{D}\) into a specific instance \(\mathcal{T}_{i}\in\mathcal{T}\). This is achieved by identifying the edits necessary to transform one string (potentially affected by a typo) into the other (the ground truth). This can be done efficiently using standard dynamic-programming techniques for sequence matching [27].
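As a concrete illustration, this edit-based classification can be sketched with Python's standard difflib sequence matcher; this is only a simplified stand-in for the dynamic-programming routine of [27], restricted to one-character typos, and the example strings are illustrative rather than taken from the datasets.

```
from difflib import SequenceMatcher

def classify_typo(typo: str, gt: str) -> str:
    """Classify a one-character typo of the ground-truth string gt."""
    ops = [op for op in SequenceMatcher(None, gt, typo).get_opcodes()
           if op[0] != "equal"]
    if len(ops) == 1:
        tag, i1, i2, j1, j2 = ops[0]
        if tag == "delete":
            return "Deletion"
        if tag == "insert":
            ins = typo[j1:j2]
            # Replication: the extra character duplicates one of its neighbours
            return "Replication" if ins in (gt[i1 - 1:i1], gt[i1:i1 + 1]) else "Insertion"
        if tag == "replace":
            # a swap of adjacent characters may show up as a two-character replace
            if i2 - i1 == 2 and gt[i1:i2] == typo[j1:j2][::-1]:
                return "Transposition"
            return "Substitution"
    # ...or as an insert/delete pair involving the same character
    if len(ops) == 2 and {ops[0][0], ops[1][0]} == {"insert", "delete"}:
        return "Transposition"
    return "Other"

for s in ["finaly", "fignally", "finnally", "finslly", "fianlly"]:
    print(s, "->", classify_typo(s, "finally"))
```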
## 4 Statistics of typographical errors
### Non-locality of the errors
All instances \(\mathcal{T}_{i}\in\mathcal{T}\) of error types may emerge from different underlying mechanisms and, as a result, may be characterized by different statistical properties.
Let us denote by \(\mathcal{K}\) the set of all keys on a keyboard. We define a function \(\mathcal{T}_{w}\) that takes as argument a dataset \(\delta\in\mathcal{D}\), a class of typo \(\mathcal{T}_{i}\in\mathcal{T}\) and a keyboard key \(\kappa\in\mathcal{K}\) and returns a dependently-typed object \(\mathcal{P}_{w}\langle\delta,\mathcal{T}_{i},\kappa\rangle\) such that
\[\mathcal{T}_{w}:\left[\left(\delta\in\mathcal{D}\right)\times\left(\mathcal{T }_{i}\in\mathcal{T}\right)\times\left(\kappa\in\mathcal{K}\right)\right] \rightarrow\mathcal{P}_{w}\langle\delta,\mathcal{T}_{i},\kappa\rangle \tag{4}\]
where \(\mathcal{P}_{w}\langle\delta,\mathcal{T}_{i},\kappa\rangle\) may be either:
* \(\mathcal{P}_{w}\langle\delta,\mathcal{T}_{i},\kappa\rangle\in[0,1]\) if \(\mathcal{T}_{i}\in\{\mathcal{T}_{\text{Deletion}},\mathcal{T}_{\text{Insertion}}\}\). In this case \(\mathcal{P}\langle\delta,\mathcal{T}_{i},\kappa\rangle\) is a constant that encodes the probability of deletion / insertion of the key \(\kappa\).
* \(\mathcal{P}_{w}\langle\delta,\mathcal{T}_{i},\kappa\rangle=\left[p_{1},\cdots,p_{|\mathcal{K}|}\right]\in\left[[0,1]\times\cdots\times[0,1]\right]\) if \(\mathcal{T}_{i}\in\{\mathcal{T}_{\text{Replication}},\mathcal{T}_{\text{ Substitution}},\mathcal{T}_{\text{Transposition}}\}\). In this case \(\mathcal{P}\langle\delta,\mathcal{T}_{i},\kappa\rangle\) is a probability density function such that \(\sum_{\kappa^{\prime}\in\mathcal{K}}p_{\kappa^{\prime}}=1\). It represents the probability of replicating / substituting / transposing the initial key \(\kappa\) by any other key \(\kappa^{\prime}\in\mathcal{K}\).
In practice the function \(\mathcal{T}_{w}\) is implemented efficiently using nested key-value data stores.
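For illustration, such a nested store can be held in plain Python dictionaries; the values below are made-up toy numbers, not the statistics actually estimated from the datasets.

```
import random
from collections import defaultdict

# P_w<delta, T_i, kappa>: dataset -> error type -> key -> scalar or distribution
stats_w = defaultdict(lambda: defaultdict(dict))
stats_w["GitHub"]["Deletion"]["e"] = 0.032                                # toy scalar probability
stats_w["GitHub"]["Substitution"]["o"] = {"a": 0.6, "0": 0.3, "i": 0.1}   # toy distribution

def sample_substitution(delta: str, kappa: str, rng=random) -> str:
    """Draw a replacement key kappa' ~ P_w<delta, Substitution, kappa>."""
    dist = stats_w[delta]["Substitution"][kappa]
    keys, probs = zip(*dist.items())
    return rng.choices(keys, weights=probs, k=1)[0]

print(sample_substitution("GitHub", "o"))
```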
We populate our \(\mathcal{P}_{w}\langle\delta,\mathcal{T}_{i},\kappa\rangle\) statistics completely on real-world examples of typos. For illustration purposes, we show in Fig 2 a typical probability density function of keystroke mistakes estimated from the GitHub dataset. The non-local effects are clearly visible with many keys \(\kappa^{\prime}\) physically far away from \(\kappa\) being attributed higher probabilities of substitutions than those closer to it. Among other causes, this may happen due to language/phonetic effects such as "farward" instead of "forward" or "thaought" instead of "thought"...
At any rate, this observation invalidates the assumptions of keyboard locality implied in the QWERTY distance (and its derivatives) showing that non-local effects are very strong and should not be ignored. Taking these into account thanks to our sophisticated \(\mathcal{T}_{w}\) is what allows us to create a more powerful synthetic data augmentation strategy.
Figure 1: Pie chart representation of the distribution of the different error types \(p_{\mathcal{T}}\langle\delta=\text{GitHub}\rangle\). One can see that deletions are far more prevalent than other types of typing errors.
### Position distribution of the errors
The character position \(r\) at which the typographical errors occur is another random variable characterizing the statistics. We normalize \(r\) by the length of the mistyped string so that strings of any lengths can be compared to each other (\(r=0\) always corresponds to the first character and \(r=1\) coincides with the last character).
Obviously, the statistics of \(r\) may depend on the class of typo and following the notation from the previous section, we denote by \(\mathcal{T}_{r}\) the function
\[\mathcal{T}_{r}:\left[\left(\delta\in\mathcal{D}\right)\times\left(\mathcal{T}_{i}\in\mathcal{T}\right)\right]\rightarrow\mathcal{P}_{r}\langle\delta,\mathcal{T}_{i}\rangle \tag{5}\]
where \(\mathcal{P}_{r}\langle\delta,\mathcal{T}_{i}\rangle\) is a probability distribution that quantifies the likelihood of relative character position \(r\) being affected by an error of type \(\mathcal{T}_{i}\) for dataset \(\delta\).
As we can see in Fig 3 for deletions, \(\mathcal{P}_{r}\langle\delta,\mathcal{T}_{i}\rangle\) does not follow a uniform distribution. The same observation carries over for the other classes of typos \(\in\mathcal{T}\) as well and those statistical properties will be taken into account in our synthetic datasets.
## 5 Typo correction ML formulation
Before we move on to the different data augmentation strategies and their relative performance, we briefly describe the formulation of our typo correction solution.
Common spellcheckers, which are typically built on top of Levenshtein-like distances such as the ones used in many Microsoft products, are not accurate enough for the short and domain-specific queries that specialized online marketplaces such as AppSource face, even if their dictionaries are regularly updated. As an example, we have used the popular open-source package pyspellchecker, which works by comparing permutations within a predefined Levenshtein distance. When trained only on default dictionaries, the spell checker achieves an accuracy of only \(18.3\%\). Even when the dictionary is enhanced with product names from the AppSource catalog, the accuracy reaches only \(59.9\%\), which is well below our baseline model (see Table 1). Considering the poor performance of traditional spellcheckers, we now introduce our formulation of domain-specific typo correction as a multi-class classification problem.
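For reference, the two spellchecker baselines in Table 1 can be reproduced along the following lines with the open-source pyspellchecker package; the catalog entries shown here are placeholders, not actual AppSource product names.

```
from spellchecker import SpellChecker

baseline = SpellChecker()                    # default English dictionary only

specialized = SpellChecker()                 # dictionary enhanced with catalog tokens
catalog = ["Power BI Desktop", "Dynamics 365 Sales", "Contoso Budget Planner"]
specialized.word_frequency.load_words(
    token for name in catalog for token in name.lower().split()
)

def correct_query(spell: SpellChecker, query: str) -> str:
    # correct each token independently; correction() returns None when no candidate is found
    return " ".join(spell.correction(tok) or tok for tok in query.lower().split())

print(correct_query(baseline, "dynamcs 365 sals"))
print(correct_query(specialized, "dynamcs 365 sals"))
```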
### Training: multiclass classification
We start by training a supervised classification model with \(|\mathcal{V}|\approx 23,000\) classes corresponding to the product names in the AppSource marketplace catalog. The details of model architecture are shown in Fig. 4. As the focus of the present study is about characterizing different types of data augmentation strategies and their performance, we limit ourselves to relatively small and simple recurrent networks upon which we can iterate quickly. Once this model has been trained, we use it as a proxy from which we can extract the domain-specific "embedding" 1 representations for the \(|\mathcal{V}|\) product names which we cache into a database.
Footnote 1: By “embedding”, we refer to the feature map at the last layer before the softmax activation as is common terminology in the literature.
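The paper does not state which deep-learning framework is used; the following PyTorch sketch mirrors the layer sizes quoted in Fig. 4 (character sequences of length 69 over an alphabet of 37 symbols, two LSTM layers with 256 hidden units, a 512-dimensional embedding layer, and a softmax over the 23,349 product names).

```
import torch
import torch.nn as nn

S, F, H, EMB, NUM_CLASSES = 69, 37, 256, 512, 23_349   # sizes quoted in Fig. 4

class TypoEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.lstm = nn.LSTM(input_size=F, hidden_size=H, num_layers=2, batch_first=True)
        self.embed = nn.Sequential(nn.Flatten(), nn.Linear(S * H, EMB), nn.ReLU())
        self.head = nn.Linear(EMB, NUM_CLASSES)

    def forward(self, x):              # x: (batch, S, F) one-hot encoded characters
        h, _ = self.lstm(x)            # (batch, S, H)
        emb = self.embed(h)            # the 512-d "embedding" cached for inference
        return self.head(emb), emb     # logits for softmax/cross-entropy + embedding

model = TypoEncoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

x = torch.zeros(128, S, F)             # dummy batch of 128 one-hot encoded names
y = torch.randint(0, NUM_CLASSES, (128,))
logits, _ = model(x)
loss_fn(logits, y).backward()
optimizer.step()
```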
### Inference: nearest neighbor in embedding space
When users type in a query, the embedding representation of this query can be compared to our database embeddings of \(\mathcal{V}\) and the nearest neighbor (as measured by cosine similarity) is returned as the "predicted" class. In the special case where the user query matches exactly an existing product name, the similarity will be exactly 1, as expected, and this similarity score then decreases as typos become more and more different from the product names in \(\mathcal{V}\).
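A minimal sketch of this lookup, assuming the catalog embeddings have already been cached as a NumPy matrix:

```
import numpy as np

def nearest_product(query_emb, catalog_embs, names):
    """Return the product whose cached embedding is closest to the query in cosine similarity."""
    q = query_emb / np.linalg.norm(query_emb)
    c = catalog_embs / np.linalg.norm(catalog_embs, axis=1, keepdims=True)
    sims = c @ q                           # cosine similarity against the whole catalog
    best = int(np.argmax(sims))
    return names[best], float(sims[best])  # similarity is exactly 1.0 for an exact match
```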
### Model performance evaluation
Using historical production web telemetry data, we extracted \(3,303\) of the most common user queries which we identified as being typos with respect to an existing product name in the AppSource marketplace catalog. Then, we manually labeled each one of these typos with the correct product that the user eventually clicked on. This process enabled us to build a validation dataset upon which the accuracy of the model can be evaluated in the inference mode described in section 5.2. Accuracy is simply defined as the number of times the model predicts the correct class, normalized by the number of queries in the validation set.
Figure 3: Illustration of non-uniform effects in keystroke position mistake distribution. As an example, we see that the probability of key deletion increases linearly as a function of letter position: letters are far more likely to be deleted towards the end of words than they are at the beginning.
## 6 Training on completely synthetic data
As discussed in the introduction, we are facing an unusual situation where there is no training data other than the ground-truth vocabulary \(\mathcal{V}\) of product names. Therefore, in order to train the supervised machine learning model specified in section 5.1, we have to resort entirely to creating a synthetic training dataset.
We consider multiple stages of sophistication in creating such synthetic data and demonstrate via careful experiments that the model performance can be significantly improved by gradually introducing more realistic synthetic data.
All augmentation strategies presented below follow the same procedure of generating a dataset by running algorithm (1) on all product names from \(\mathcal{V}\). This creates a list of \(n\times|\mathcal{V}|\) samples where each class (i.e. product name) has \(n\) (potentially duplicated; see below) synthetic samples associated with it. Eventually, this synthetic dataset is used to train the supervised model of Fig. 4. The accuracy of the model is estimated on the manually annotated dataset described in Section 5.3.
Because those synthetic datasets are created directly from product names in \(\mathcal{V}\), they are, by construction, domain-specific to this catalog.
### Random augmentation
The first stage consists of taking a uniform distribution over the error types \(\mathcal{T}\) defined in Eq.(1) and forcing their respective statistics \(\mathcal{P}_{w}\langle\delta,\mathcal{T}_{i},\kappa\rangle\) and \(\mathcal{P}_{r}\langle\delta,\mathcal{T}_{i}\rangle\) to be simple uniform random variables. With this statistical set-up in place, we follow algorithm (1) to generate a synthetic dataset and train the model. This augmentation strategy leads to a performance of \(62.37\%\) (see table 1).
```
Inputs: Ground-truth string \(s_{\text{gt}}\), desired number of synthetic samples \(N\), classes of typos \(\mathcal{T}\), historical statistics \(\mathcal{T}_{w}\) and \(\mathcal{T}_{r}\), dataset \(\delta\)
Output: Synthetic samples \(\mathcal{S}=[s_{1},\cdots,s_{N}]\)
Set keepDuplicate = True
Initialize \(\mathcal{S}\leftarrow[\ ]\) and \(i\gets 1\)
while \(i\leq N\) do
    Pick random \(\mathcal{T}_{i}\) with probability given by Eq. (4).
    Pick random \(r\) with probability given by Eq. (5).
    \(\kappa\leftarrow\) character from \(s_{\text{gt}}\) picked from the 2 steps above.
    Apply appropriate action on \(\kappa\).
    \(s_{i}\leftarrow\) synthetic sample, associated with label \(s_{\text{gt}}\)
    if \(s_{i}\in\mathcal{S}\) and keepDuplicate = False then
        \(i\gets i-1\)
        continue
    end if
    Add \(s_{i}\) to \(\mathcal{S}\)
    \(i\gets i+1\)
end while
```
**Algorithm 1** Synthetic training dataset generation
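A simplified Python rendering of Algorithm 1 is given below. The exact representation of the statistics objects (here plain dictionaries mapping error types, relative positions, and keys to probabilities) is an assumption, and transpositions are realized as a swap with the right-hand neighbour.

```
import random
import string

def synthesize(s_gt, n, p_types, T_w, T_r, keep_duplicates=True, rng=random):
    """Generate n one-character synthetic typos of s_gt (sketch of Algorithm 1)."""
    samples = []
    while len(samples) < n:
        # draw an error type and a relative position from the historical statistics
        t = rng.choices(list(p_types), weights=list(p_types.values()), k=1)[0]
        r = rng.choices(list(T_r[t]), weights=list(T_r[t].values()), k=1)[0]
        pos = min(int(r * len(s_gt)), len(s_gt) - 1)
        kappa = s_gt[pos]
        if t == "Deletion":
            s = s_gt[:pos] + s_gt[pos + 1:]
        elif t == "Replication":
            s = s_gt[:pos + 1] + kappa + s_gt[pos + 1:]
        elif t == "Transposition":
            j = min(pos + 1, len(s_gt) - 1)
            chars = list(s_gt)
            chars[pos], chars[j] = chars[j], chars[pos]
            s = "".join(chars)
        else:
            # Insertion / Substitution: draw the new key from the keystroke statistics T_w
            dist = T_w[t].get(kappa, {c: 1.0 for c in string.ascii_lowercase})
            new = rng.choices(list(dist), weights=list(dist.values()), k=1)[0]
            s = s_gt[:pos] + new + s_gt[pos:] if t == "Insertion" else s_gt[:pos] + new + s_gt[pos + 1:]
        if not keep_duplicates and any(s == prev for prev, _ in samples):
            continue
        samples.append((s, s_gt))   # every synthetic sample keeps s_gt as its class label
    return samples
```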
### QWERTY-distance based augmentation
Here, once again the error types are drawn from a uniform distribution over \(\mathcal{T}\). The only difference from the purely random augmentation of section 6.1 is that the pairs of keys involved in substitutions are now limited to nearby keys on the physical keyboard. In practice, given a key \(\kappa\), we limit the possible substitutions to keys that are at a QWERTY distance of one from \(\kappa\). This is akin to a weighted Levenshtein distance where only the keys immediately surrounding the key \(\kappa\) of interest are assigned equal and non-zero weights. This augmentation strategy leads to a performance of \(62.25\%\) (see table 1).
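The distance-one restriction can be made concrete with a small neighbourhood map; this sketch ignores the physical offsets between keyboard rows and covers lower-case letters only.

```
QWERTY_ROWS = ["qwertyuiop", "asdfghjkl", "zxcvbnm"]

def qwerty_neighbors(key: str) -> set:
    """Keys at (approximate) QWERTY distance one from `key`."""
    out = set()
    for r, row in enumerate(QWERTY_ROWS):
        if key not in row:
            continue
        c = row.index(key)
        for rr in (r - 1, r, r + 1):
            if not 0 <= rr < len(QWERTY_ROWS):
                continue
            for cc in (c - 1, c, c + 1):
                if 0 <= cc < len(QWERTY_ROWS[rr]) and QWERTY_ROWS[rr][cc] != key:
                    out.add(QWERTY_ROWS[rr][cc])
    return out

print(sorted(qwerty_neighbors("g")))   # candidate substitutions for the key 'g'
```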
Figure 4: We consider product names as a sequence of characters of length \(s=69\) (maximum product name length) where each character is represented as a one-hot-encoded vector of size \(f=37\) (total number of distinct characters). Batching together \(n=128\) products, the input data of shape \(\sim(n\times s\times f)\) is fed into two LSTM layers which return a recurrent hidden layer of shape \(\sim(n\times s\times h)\) with \(h=256\). Eventually, this representation is flattened and connected to a dense 512 neuron layer with RELU activation before a final dense layer with softmax activation for classification into \(|\mathcal{V}|=23,349\) classes using the cross-entropy loss. The model was trained with an Adam optimizer with learning rate of 0.001 over 50 epochs. The “embedding” representation of input string is a 512 dimensional vector.
### Real-world statistics
In this case, we use the real-world distribution of error types \(p_{T}\langle\delta\rangle\) discussed in section 3 along with their appropriate observed historical statistics \(\mathcal{P}_{w}\langle\delta,\mathcal{T}_{i},\kappa\rangle\) and \(\mathcal{P}_{r}\langle\delta,\mathcal{T}_{i}\rangle\) described in section 4 to generate the synthetic training dataset.
The model performance is significantly improved for all 3 independent datasets in \(\delta\in\mathcal{D}\) as one can see in table 1 with the best performance of \(65.06\%\).
Note that we kept open the possibility of removing duplicate synthetic samples in algorithm (1) by controlling the value keepDuplicate. Intuitively, one should expect model performance degradation by removing the duplicated samples as their removal would create a bias away from historical statistics. Indeed this is what we observed with the best performance without duplicates reaching only \(63.88\%\).
### Hyperparametrized dataset fusion
The previous strategy was based on drawing the typo statistics from a single dataset \(\delta\) at a time. It may be that some aspects of our unique AppSource marketplace situation are better represented by some datasets than others. In order to potentially take the best out of all the available datasets, we propose to fuse the statistics of the datasets of \(\mathcal{D}\) together by introducing hyperparameters.
Given a set of datasets \(\delta\in\mathcal{D}\), see Eq. (2), we combine them by introducing dataset-dependent hyperparameters \(\lambda_{\delta}\) such that the final combined dataset is a linear mixture
\[\mathcal{D}_{\text{dataset fusion}}=\left\{\lambda_{\delta}\times\delta\ \big{|}\ \sum_{\delta\in\mathcal{D}}\lambda_{\delta}=1\right\} \tag{6}\]
Using grid search for hyperparameter tuning, we observe that this optimization is indeed successful in creating more appropriate training data leading to an eventual model accuracy of \(65.58\%\) (see Fig. 5 and table 1).
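One way to read Eq. (6) operationally is to pick, for every synthetic sample, which dataset's statistics to draw from with probability \(\lambda_{\delta}\); the mixing weights below are the grid-search optimum reported in Fig. 5, and the per-sample sampling interpretation is an assumption of this sketch.

```
import random

def pick_statistics(stats_by_dataset, lambdas, rng=random):
    """Select one dataset's typo statistics with probability lambda_delta (Eq. (6))."""
    names, weights = zip(*lambdas.items())
    delta = rng.choices(names, weights=weights, k=1)[0]
    return stats_by_dataset[delta]

lambdas = {"GitHub": 0.25, "Twitter": 0.70, "Proprietary": 0.05}   # optimum from Fig. 5
```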
### Data efficiency
Finally, we conclude this section by commenting on the data efficiency of our augmentation strategy. It turns out that model performance already saturates and reaches its maximum plateau after only \(\approx 20\) synthetic samples, as demonstrated in Fig. 6. This quick convergence can be related to the average product-name length of \(\approx 24\) characters in the AppSource marketplace catalog.
## 7 Model deployment as a real-time API
The model is exposed to the AppSource marketplace team via a real-time API which receives \(\approx 100,000\) daily requests. Model inference takes around \(\approx 400\) milliseconds, along with another 100 milliseconds for the API call (including load balancing and traffic management).
\begin{table}
\begin{tabular}{|l|c|} \hline
Training dataset & Accuracy in \% \\ \hline \hline
Basic spellchecker (5) & \(18.3\) \\
Specialized spellchecker (5) & \(59.9\) \\ \hline \hline
Random (6.1) & \(62.37\) \\
QWERTY-distance (6.2) & \(62.25\) \\ \hline \hline
Real-World Statistics (6.3) & \\ \hline
\(\delta=\text{GitHub}\) & \(65.06\) \\
\(\delta=\text{Twitter}\) & \(64.03\) \\
\(\delta=\text{Proprietary}\) & \(64.27\) \\ \hline
w/o duplicate samples & \(63.88\) \\ \hline \hline
Dataset fusion (6.4) & \(65.58\) \\ \hline
\end{tabular}
\end{table}
Table 1: Performance comparison for different type of training datasets. One can see that the model performance significantly improves as one is introducing more and more sophisticated data augmentation strategies.
Figure 5: Model performance vs. different mixing ratios of data fusion for \(|\mathcal{D}|=3\) datasets. One can see that the best accuracy is reached for \(\{\lambda_{\text{GitHub}}=0.25,\lambda_{\text{Twitter}}=0.7,\lambda_{\text{Proprietary}}=0.05\}\).
Figure 6: Model performance vs. number of synthetically generated samples showing excellent convergence properties with minimal number of synthetic samples.
This means that the total response time is around \(\approx 500\) milliseconds. Based on telemetry logs since Feb. 2023, we have seen that \(99.9\%\) of the API calls to our real-time API get a response below 500 milliseconds, which meets the SLA (Service Level Agreement) with our downstream stakeholders. Since the AppSource catalog now contains \(\approx 23,000\) apps and grows at \(\approx 200\) apps/month, we estimate that the current solution would continue to meet SLAs for at least another 2 years of projected growth in the number of apps. In the future, we intend to explore vector databases and faster similarity search techniques to handle the performance for even larger values of \(|\mathcal{V}|\).
Even though the primary search engine powering AppSource is Azure Cognitive Search (ACS) [28], this solution frequently fails to return any results and/or any auto-completion for uncommon search queries. When this happens, our API is triggered and returns the closest matched keyword from the catalog \(\mathcal{V}\). This keyword is further passed back to ACS thereby providing incremental benefit on top of the default search engine. After deployment of our model, the CTR (Click Through Rate) improved by 4% (from 35% to 39%) and no-results searches dropped by 8% (from 25% to 17%). Azure Traffic Manager is leveraged for load balancing the search requests across 4 regions: US, Europe, Japan and Australia.
## 8 Conclusion
Solving typos in search is a complex task, particularly in domain-specific settings, because the search terms used in these settings can be highly specialized and technical in nature. Domain-specific search terms are often used by professionals in their respective fields and may include scientific terms, jargon, or acronyms that are not commonly used in everyday language.
We have introduced a domain-specific typo correction model which is completely based on synthetic training data. We have shown that gradually introducing more sophisticated data augmentation strategies led to significantly better model accuracy. We have also demonstrated that the data augmentation strategy is very efficient in terms of data size.
The model has been deployed as a real-time API now powering the AppSource marketplace website, which is a major portal for customers as well as Microsoft partners. On average, 50 products get added every week to the AppSource product catalog, and hence our model is retrained every week to learn about the newly added products. The model has already been credited with significant search improvements in monitored metrics such as CTR and 0-search results.
In the future, we intend to expand our universe of error types to include multiple-letter typos and incorporate phonetic language effects into our synthetic data augmentation scheme [2, 29]. Continuing to gradually increase the sophistication of data augmentation, one could consider fully hyperparametrized statistics that are no longer drawn from historical datasets. In this case, the requirement that error-type distributions form well-defined probability distributions could even be lifted, leading to more flexibility and potentially higher model accuracy. Now that we have established the usefulness of our data augmentation strategy to improve spell-checking performance, we intend to experiment with more sophisticated network architectures (such as transformer-based) that go beyond our initial recurrent networks.
More generally, our work demonstrates that completely synthetic datasets can be successful in real-world applications that require high levels of accuracy. Longer term, we hope that more frequent use of synthetic data will enable rapid experimentation and testing as well as reduce the risks of privacy violations associated with using sensitive real-world data.
## 9 Acknowledgements
We thank our colleagues in CX Data for their feedback and support. In particular, we thank Manish Shukla, Daniel Yehdego, Yasaswi Akkaraju and Naveen Panwar for initiating earlier versions of this model. Additionally, we thank Noam Ferrara, Gal Horowitz and Greg Oks from the marketplace engineering team for integrating our real-time API into the overall search flow architecture.
|
2310.05782
|
Aligning Language Models with Human Preferences via a Bayesian Approach
|
In the quest to advance human-centric natural language generation (NLG)
systems, ensuring alignment between NLG models and human preferences is
crucial. For this alignment, current popular methods leverage a reinforcement
learning (RL) approach with a reward model trained on feedback from humans.
However, inherent disagreements due to the subjective nature of human
preferences pose a significant challenge for training the reward model,
resulting in a deterioration of the NLG performance. To tackle this issue,
previous approaches typically rely on majority voting or averaging to
consolidate multiple inconsistent preferences into a merged one. Although
straightforward to understand and execute, such methods suffer from an
inability to capture the nuanced degrees of disaggregation among humans and may
only represent a specialized subset of individuals, thereby lacking the ability
to quantitatively disclose the universality of human preferences. To address
this challenge, this paper proposes a novel approach, which employs a Bayesian
framework to account for the distribution of disagreements among human
preferences as training a preference model, and names it as d-PM. Besides,
considering the RL strategy's inefficient and complex training process over the
training efficiency, we further propose utilizing the contrastive learning
strategy to train the NLG model with the preference scores derived from the
d-PM model. Extensive experiments on two human-centric NLG tasks, i.e.,
emotional support conversation and integrity "Rule-of-Thumb" generation, show
that our method consistently exceeds previous SOTA models in both automatic and
human evaluations.
|
Jiashuo Wang, Haozhao Wang, Shichao Sun, Wenjie Li
|
2023-10-09T15:15:05Z
|
http://arxiv.org/abs/2310.05782v3
|
# Aligning Language Models with Human Preferences via a Bayesian Approach
###### Abstract
In the quest to advance human-centric natural language generation (NLG) systems, ensuring alignment between NLG models and human preferences is crucial. For this alignment, current popular methods leverage a reinforcement learning (RL) approach with a reward model trained on feedback from humans. However, inherent disagreements due to the subjective nature of human preferences pose a significant challenge for training the reward model, resulting in a deterioration of the NLG performance. To tackle this issue, previous approaches typically rely on majority voting or averaging to consolidate multiple inconsistent preferences into a merged one. Although straightforward to understand and execute, such methods suffer from an inability to capture the nuanced degrees of disaggregation among humans and may only represent a specialized subset of individuals, thereby lacking the ability to quantitatively disclose the universality of human preferences. To address this challenge, this paper proposes a novel approach, which employs a Bayesian framework to account for the distribution of disagreements among human preferences as training a preference model, and names it as **d-PM**. Besides, considering the RL strategy's inefficient and complex training process over the training efficiency, we further propose utilizing the contrastive learning strategy to train the NLG model with the preference scores derived from the d-PM model. Extensive experiments on two human-centric NLG tasks, i.e., emotional support conversation and integrity "Rule-of-Thumb" generation, show that our method consistently exceeds previous SOTA models in both automatic and human evaluations.
## 1 Introduction
Human-centric natural language processing (NLP) aims to develop NLP systems that are finely attuned to human preferences [14; 12; 30]. Consequently, learning from human feedback is well-suited for training models in human-centric NLG tasks [18; 28]. Currently, reinforcement learning (RL) with a reward model is the most popular method to align models with human preferences [26; 11; 41]. Its effectiveness depends heavily on how well human preferences are learned by the reward model [6; 4]. However, modeling human preferences can be challenging.
Due to the high subjectivity of personal standards and human values, it can be difficult to reach a consensus on preferences among individuals, significantly increasing the learning difficulty. As depicted in Figure 1, persons may have varied preferred responses with inconsistent emotions and values given the same context. To tackle this challenge, existing methods mostly adopt aggregation techniques such as majority voting or averaging [11; 6; 4]. However, aggregated preferences potentially cater
to specific subsets of people, risking generating controversial content (see Supporter A in Figure 1). Additionally, subjectivity and inconsistency are intrinsic components of certain tasks, such as emotion analysis [13] and ethical evaluation [10], necessitating careful consideration instead of dismissal [3]. Therefore, for widely acceptable and less controversial outputs, it is necessary for the resulting NLG systems to account for capturing disagreements inherent in human preferences [17].
In this paper, we introduce a novel Bayesian-based approach termed Preference Modeling with Disagreement (**d-PM**). This method is designed to approximate a "universal preference" that comprises the preferences of "all individuals", given the preferences of several individuals. Although a soft label, derived from these several individuals, can intuitively account for disagreement, outliers or extreme labels can disproportionately influence the overall perception. Therefore, we employ Bayesian inference to refine these preferences. Specifically, the observed preference among selected individuals serves as prior knowledge. Our d-PM aims to leverage distribution of all possible universal preferences (likelihood probability) to adjust and smooth the initially observed one, leading to the derivation of a universal preference (posterior). Upon obtaining the universal preference, we calculate the likelihood of the expected preference types to establish a preference score, which is then utilized for further language model alignment.
Based on the d-PM model, we further optimize the language models to generate widely acceptable and less controversial texts. Specifically, we propose utilizing the contrastive learning strategy to calibrate the generation model towards generating texts with high preference scores provided by d-PM. Although existing RL strategies can also be leveraged to make the calibration, they are generally perceived as costly in terms of convergence [9] and online decoding processes [40]. We assess our proposed method on two human-centric NLG tasks: emotional support conversations and moral integrity RoT generation. Experimental results demonstrate that our framework can be applied to state-of-the-art generation models for each task without performance degradation and meanwhile effectively increases global consensus of human preferences embedded in generated texts.
Our main contributions are three-fold: **(i)**. To the best of our knowledge, we are the first to align text generation models with human preferences while considering inherent disagreement among different individuals. **(ii)**. In order to model human preferences with their disagreement, we propose a Bayesian approach, Preference Modeling with Disagreement (d-PM). Additionally, we use its preference scores to calibrate NLG models via contrastive learning for generations that can be widely acceptable and less controversial. **(iii)**. We conduct experiments on two human-centric NLG tasks, i.e., emotional support conversations and integrity RoT generation. Experimental results demonstrate the effectiveness and versatility of our proposed method. 2
Figure 1: People can have different feelings towards the same response in the emotional support conversation because of their own experiences and values. A trustworthy human-centric system is expected to consider the benefits of universal groups, including minorities, and generate less controversial and more helpful content, like supporter B instead of A.
## 2 Framework
Aligning text generation models with human preferences requires two essential components: modeling human preferences and calibrating the text generation model.
Preference Modeling with DisagreementHuman preferences can be inferred from a human-annotated dataset, denoted as \(\mathcal{D}\). Each instance in the dataset is represented as a triplet \((c,s,l)\). Here \(c\) is a context; \(s\) is a text; and \(l\) is a label indicating the annotators' preferences. The inherent disagreement in human preferences can be encapsulated within the label in two distinct ways. In the first approach, the label can be a soft label derived from multiple annotations, all attributed to the same sentence [13]. These annotations are sourced from multiple human annotators to preserve disagreement among individuals. The second approach is the direct collection of global consensus. In this context, the label signifies the proportion of people who find a particular sentence acceptable, an estimate provided by a single human annotator [42; 7].
Aimed at capturing human preference with disagreement within the dataset, we assume there is a distribution \(\rho\) over two classes, \(\{acceptable,unacceptable\}\), comprising preferences of all humans, and therefore \(l\) is the sampling result from \(\rho\). We employ a preference model \(\mathcal{R}(\theta)\) to infer \(\rho\), given \(c\) and \(s\) as inputs and the probabilistic format of \(l\) as the prior distribution. Since we focus on whether \(s\) is widely acceptable, the likelihood of the class \(acceptable\) is defined as the preference score:
\[\mathcal{S}_{(s,c)}=\mathcal{R}(s,c;\theta)_{acceptable}. \tag{1}\]
Calibration for AlignmentIn aligning NLG models with human preferences, we calibrate the existing generation model \(\mathcal{G}(\xi_{0})\) with preference scores. Here, \(\mathcal{G}(\xi_{0})\) stands for a model already fine-tuned on a dataset \((X,Y)\), where \(X\) and \(Y\) represent the input set and the corresponding output set, respectively, and \(\xi_{0}\) are the optimized parameters. Significantly, if the dataset for preference modeling is identical to the \((X,Y)\) dataset, then \((x,y)\in(X,Y)\) corresponds to \((c,s)\in\mathcal{D}\). If not, these two datasets should belong to a similar domain. We initially decode \(K\) candidate sequences \(\{\tilde{y}_{k}\}_{k=1}^{k=K}\) via \(\mathcal{G}(\tilde{y}_{k}|x;\xi_{0})\) for each \(x\in X\). Then, we further train \(\mathcal{G}(\xi_{0})\), aimed at a new objective: aligning the likelihoods of candidate sequences with preference scores \(\{\mathcal{S}_{(\tilde{y}_{k},x)}\}_{k=1}^{k=K}\).
## 3 Method
This section presents our method, whose diagram is shown in Figure 2. We first use a Bayesian approach, i.e., d-PM, to model human preferences with disagreement (Section 3.1). Then we calibrate a text generation model by contrastive learning with preference scores of d-PM to align this model with human preferences (Section 3.2).
Figure 2: Diagram for preference modeling with disagreement and calibration for alignment.
### Preference Modeling with Disagreement
We establish a distribution \(\rho\) to represent the universal preference for the text \(s\) given its context \(c\). Therefore, the observed annotations \(l\) are considered as samples from \(\rho\), and can form a prior distribution \(p_{i}(\rho)\). Inspired by [29], we devise a Bayesian approach to approximate \(\rho\) using this prior.
Specifically, we establish a connection between \(\rho\) and \(l\) through the optimization process of a generative model. This model is designed for generating text \(s\) conditioned on \(c_{i}\) and \(\rho\): \(p(s|c_{i},\rho)\). The log-likelihood of the text can be formulated as \(\sum_{i}\log p(s_{i}|c_{i})=\sum_{i}\log\big{(}\sum_{\rho}p(s_{i}|c_{i},\rho)p _{i}(\rho)\big{)}\), where \(p_{i}(\rho)\) is the prior preference distribution. Its optimization can be achieved by introducing a variational posterior distribution \(q(\rho|s_{i},c_{i})\) for the \(i\)-th datapoint, and minimizing the free energy (negated evidence lower bound) formulated as:
\[-\sum_{i}\sum_{\rho}q(\rho|s_{i},c_{i})\log\frac{p(s_{i}|c_{i},\rho)p_{i}(\rho)}{q(\rho|s_{i},c_{i})}. \tag{2}\]
Minimization of the free energy involves estimations of both the forward distribution of text \(s_{i}\): \(p(s_{i}|c_{i},\rho)\), and the posteriors \(q(\rho|s_{i},c_{i})\), which can be computed by our preference model:
\[q(\rho|s_{i},c_{i})=\mathcal{R}(s_{i},c_{i}|\theta). \tag{3}\]
As for \(p(s_{i}|c_{i},\rho)\), it is defined only on the \(i\)-th datapoint and is computed by minimizing Equation (2) for fixed \(q(\rho|s_{i},c_{i})\), s.t., \(\sum_{i}p(s_{i}|c_{i},\rho)=1\) for all \(\rho\). Thus, the optimum is achieved by:
\[p(s_{i}|c_{i},\rho)=a_{i,\rho}=\frac{q(\rho|s_{i},c_{i})}{\sum_{j}q(\rho|s_{j},c_{j})}. \tag{4}\]
From Equation (4), the generative model can be regarded as a matrix of variables \(a_{i,\rho}\) describing conditional probabilities of different responses \(s_{i}\) given different latent distributions \(\rho\) with known \(c_{i}\).
In a variational way, Equation (2) can be rewritten as \(-\log p(s_{i}|c_{i})+\sum_{i}\text{KL}(q(\rho|s_{i},c_{i})\|r_{i}(\rho))\). Here, \(r_{i}(\rho)\propto p(s_{i}|c_{i},\rho)p_{i}(\rho)\) is the posterior model of the generative model, and it can be reformulated with reduction of \(p(s_{i}|c_{i},\rho)\) to the matrix in Equation (4):
\[r_{i}(\rho)=\alpha_{i}\cdot p_{i}(\rho)p(s_{i}|c_{i},\rho)=\alpha_{i}\frac{p_{ i}(\rho)q_{i}(\rho|s_{i},c_{i})}{\sum_{j}q_{j}(\rho|s_{j},c_{j})}, \tag{5}\]
where \(\alpha_{i}\) is a scalar enabling \(\sum_{\rho}r_{i}(\rho)=1\). Accordingly, we can minimize the KL divergence between \(q(\rho|s_{i},c_{i})\) and \(r_{i}(\rho)\) to minimize the free energy. The minimization of the free energy in Equation (2) can be derived as:
\[\sum_{i}\text{KL}(q(\rho|s_{i},c_{i})\|r_{i}(\rho))=\min_{\theta}\sum_{i} \text{KL}\bigg{(}\mathcal{R}(s_{i},c_{i};\theta)\Big{\|}\alpha_{i}\cdot p_{i}( \rho)\frac{\mathcal{R}(s_{i},c_{i};\theta)}{\sum_{j}\mathcal{R}(s_{j},c_{j}; \theta)}\bigg{)}. \tag{6}\]
By optimizing the above objective, we can optimize the parameters of our preference model, i.e., \(\theta\).
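A compact sketch of this objective in PyTorch is given below; treating the target \(r_{i}(\rho)\) as fixed within each update (the stop-gradient) and averaging over the batch are implementation choices that the text does not pin down.

```
import torch

def dpm_loss(q, prior, eps=1e-12):
    """Sketch of Eq. (6).
    q:     (N, 2) posteriors R(s_i, c_i; theta) over {acceptable, unacceptable}
    prior: (N, 2) observed preference distributions p_i(rho)
    """
    a = q / q.sum(dim=0, keepdim=True)          # a_{i,rho} of Eq. (4)
    r = (prior * a).detach()                    # unnormalized r_i(rho) of Eq. (5)
    r = r / r.sum(dim=1, keepdim=True)          # alpha_i makes each row sum to one
    kl = (q * (q.clamp_min(eps).log() - r.clamp_min(eps).log())).sum(dim=1)
    return kl.mean()

logits = torch.randn(3, 2, requires_grad=True)  # toy model outputs for N = 3 texts
q = logits.softmax(dim=1)
prior = torch.tensor([[0.75, 0.25], [0.50, 0.50], [1.00, 0.00]])
dpm_loss(q, prior).backward()
```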
### Calibration for Alignment
As shown in Figure 3, the generator \(\mathcal{G}(\xi_{0})\) can easily produce texts with both high and low preference scores. We therefore aim to calibrate the model such that the generation probability aligns with these preference scores. Specifically, we use diverse beam search [34] to generate multiple candidates and then use our d-PM to evaluate these candidates. To make the generation of high-preference-score texts more likely, we propose a model-agnostic module that leverages contrastive learning to calibrate the generation likelihood so that it aligns with d-PM. Taking inspiration from recent calibration work [32; 23; 39], we implement this module through the following three steps:
Step 1: Candidate Generation. We generate candidates on its own training dataset with the text generator \(\mathcal{G}(\xi_{0})\), which has been fine-tuned on the corresponding dataset \((X,Y)\) with parameters \(\xi_{0}\). Given an input sequence \(x\in X\), we first use \(\mathcal{G}(\xi_{0})\) to generate \(K\) candidates \(\{\tilde{y}_{1},\tilde{y}_{2},\cdots,\tilde{y}_{K}\}\) using diverse beam search. As a result, these candidates obtain similar likelihoods yet different preference scores, according to the above preliminary study.
Step 2: Preference-based Ranking.We use our proposed d-PM \(\mathcal{R}(\theta)\) to measure the preference score \(\mathcal{S}_{(\tilde{y}_{k},x)}\) of each candidate \(\tilde{y}_{k}\). Then we rank these candidates according to the above preference score and obtain a list of ranked candidates: \(\tilde{y}^{{}^{\prime}}_{1},\tilde{y}^{{}^{\prime}}_{2},\cdots,\tilde{y}^{{}^{ \prime}}_{K}\), where \(\mathcal{S}_{(\tilde{y}^{{}^{\prime}}_{i},x)}>\mathcal{S}_{(\tilde{y}^{{}^{ \prime}}_{j},x)}\) for \(\forall\ i<j\).
Step 3: Likelihood Calibration.As mentioned before, we leverage contrastive learning to assign higher likelihoods to the candidates with higher preference scores. The following pairwise margin loss is used to adjust the generator \(\mathcal{G}(\xi)\).
\[\mathcal{L}^{r}=\sum_{i}\sum_{j>i}max(0,\mathbf{P}(\tilde{y}^{{}^{\prime}}_{j} ;\xi)-\mathbf{P}(\tilde{y}^{{}^{\prime}}_{i};\xi)+\lambda_{ij}), \tag{7}\]
where \(\lambda_{ij}\) is the default margin \(\lambda\) multiplied by the difference in rank between the samples, i.e., \(\lambda_{ij}=\lambda*(j-i)\). \(\mathbf{P}(\tilde{y}^{{}^{\prime}}_{i};\xi)\) is the length-normalized log-probability of the candidate:
\[\mathbf{P}(\tilde{y}^{{}^{\prime}};\xi)=\frac{\sum_{t=1}^{|\tilde{y}^{{}^{ \prime}}|}\log\mathcal{G}(\tilde{y}^{{}^{\prime}}_{t}|x,\tilde{y}^{{}^{\prime }}_{<t};\xi)}{|\tilde{y}^{{}^{\prime}}|^{\alpha}}, \tag{8}\]
where \(\alpha\) is the length penalty hyperparameter. To avoid forgetting token-level likelihood information of the ground-truth text, we also use an additional token-level negative log-likelihood. The final calibration loss is as follows:
\[\mathcal{L}^{c}=-\lambda\frac{1}{|y|}\sum_{t=1}^{|y|}\log\mathcal{G}(y_{t}|x, y_{<t};\xi)+\mathcal{L}^{r}. \tag{9}\]
We minimize \(\mathcal{L}^{c}\) to optimize the generator's parameters \(\xi\). This process is supervised by our d-PM model and aligns the generation model with human preference.
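As a sketch, the ranking loss of Eq. (7) and the combined objective of Eq. (9) can be written as follows; the candidates are assumed to be passed in decreasing order of their d-PM preference score, and the margin and weighting values are placeholders rather than the hyperparameters used in the experiments.

```
import torch

def length_normalized_logprob(token_logprobs, alpha=1.0):
    """P(y'; xi) of Eq. (8): summed token log-probabilities divided by |y'|^alpha."""
    return token_logprobs.sum() / (token_logprobs.numel() ** alpha)

def calibration_loss(cand_logprobs, nll, margin=0.001, lam=1.0):
    """Eqs. (7) and (9): pairwise margin loss over ranked candidates plus the NLL term."""
    rank_loss = cand_logprobs[0].new_zeros(())
    K = len(cand_logprobs)
    for i in range(K):
        for j in range(i + 1, K):
            margin_ij = margin * (j - i)                       # lambda_ij in Eq. (7)
            rank_loss = rank_loss + torch.clamp(
                cand_logprobs[j] - cand_logprobs[i] + margin_ij, min=0.0)
    return lam * nll + rank_loss                               # Eq. (9)

# toy usage: 4 candidates already sorted by decreasing preference score
cands = [length_normalized_logprob(torch.randn(12, requires_grad=True)) for _ in range(4)]
nll = torch.tensor(2.3, requires_grad=True)
calibration_loss(cands, nll).backward()
```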
## 4 Experiments
### Emotional Support Conversation
In an emotional support conversation, a supporter aims to buffer a help-seeker's emotional distress and help the help-seeker to change the difficult situation [22]. In this context, the model functions as the supporter, while the user is always the help-seeker. Due to different personal experiences, different help-seekers may respond with varied feelings and reactions to the same response, as illustrated in Figure 1. Our objective is to enhance the model's ability to generate responses that will not escalate the negative feelings of a diverse range of help-seekers.
Dataset and Base ModelsThe benchmark ESConv [22] contains approximately \(1\)k conversations with \(31\)k utterances, where each conversation takes place between a help-seeker and a supporter. We include BlenderBot-Vanilla and BlenderBot-Joint, proposed in conjunction with the dataset, and the SOTA model MultiESC [5]. We reproduced the base models in accordance with their respective papers and publicly available code.
Human Preferences with DisagreementWe derive human preferences from the Motivational-Interviewing-Dataset [36]. This dataset encompasses around \(17\)k supporter responses to help-seekers. Each response is annotated by \(2\sim 4\) experts following the MI codes [25]. The labels can be reduced to two classes \(\{acceptable,unacceptable\}\), and the human preferences with disagreement can be estimated by our d-PM method. By fine-tuning a BERT model on this dataset using prefix-tuning [37; 19], we obtain the d-PM. Additional details can be found in Appendix B.2.
Figure 3: The maximum and minimum preference scores of \(10\) candidates generated via diverse beam search given the same context. We test on \(1000\) data instances and three emotional support conversation models.
Experimental SetupWe apply our proposed method to align each base model, thus treating the well-trained base model as the generator \(\mathcal{G}(\xi_{0})\). Additionally, to validate the effectiveness of d-PM, we employ three alternative preference models within our framework for comparative analysis:
1. A preference model (major) trained to predict the majority voting result of annotations from different annotators, denoted as \(l_{m}\), and optimized by cross-entropy loss, formulated as: \[\mathcal{L}(\theta)=-\mathbb{E}_{(c,s,l_{m})\sim\mathscr{D}}[p_{l}(l_{m})\log( \mathcal{R}^{l_{m}}(c,s;\theta))].\] (10) where \(p_{l}(l_{m})\) denotes the one-hot vector of \(l_{m}\).
2. A preference model (soft) trained to approximate the direct probabilistic label of annotations, i.e., the soft label \(l\). The model is optimized by: \[\mathcal{L}(\theta)=\mathbb{E}_{(c,s,l)\sim\mathscr{D}}\|\mathcal{R}(c,s; \theta)-l\|^{2}.\] (11)
3. A preference model (w/oA) that does not aggregate annotations and takes each annotation as independent. This model is optimized by cross-entropy loss, similar to Equation (10).
When training the aligned models, we aim to retain the same hyperparameters used in the training of the base models. We set the candidate number \(K\) to \(10\). We train each aligned model five times with five different seeds. Subsequently, we test each of the five trained models on the test dataset and compute the average results.
Automatic EvaluationWe adopt the following metrics commonly used in previous work [5; 22] for the automatic evaluation of our proposed method: BLEU [27] (B-1/2/3/4), ROUGE (R-L) [20], METEOR [2], CIDEr [33], and BOW Embedding-based matching score [21] (Extreme). Results are shown in Table 1.
Our Aligned\({}_{\text{d-PM}}\) significantly improves the performance of the base model in almost all automatic metrics, irrespective of the base model, suggesting the overall effectiveness of our proposed method. Aligned\({}_{\text{major}}\) and Aligned\({}_{\text{soft}}\) are able to enhance the performance when the base model is either Blender-Vanilla or Blender-Joint; however, they do not yield an improvement when the base model is MultiESC. This limitation underscores the constraints inherent in using majority voting labels and soft labels to address disagreement of human preferences. Aligned\({}_{\text{w/oA}}\) surpasses the base model in certain metrics but falls short in others. Notably, it performs significantly lower in CIDEr, a metric evaluating the similarity between TFIDF-weighted n-grams. This shortfall suggests that Aligned\({}_{\text{w/oA}}\) is less likely to generate responses containing critical information found in the ground truth. This issue arises from the preference scores determined by w/oA being closely clustered in value, resulting in its inability to sequence the generated samples logically. These pieces of evidence indicate the potency of our proposed preference model d-PM.
\begin{table}
\begin{tabular}{c|l|c c c c|c|c|c} \hline \hline
\multicolumn{2}{c|}{**Model**} & **B-1** & **B-2** & **B-3** & **B-4** & **R-L** & **METEOR** & **CIDEr** & **Extreme** \\ \hline
\multirow{5}{*}{Blender-Vanilla} & Base & 17.85 & 7.08 & 3.60 & 2.11 & 17.06 & 7.46 & 15.44 & **51.02** \\
 & Aligned\({}_{\text{major}}\) & 19.07 & 7.71 & 3.94 & 2.28 & 17.09 & 7.71 & 15.97 & 50.61 \\
 & Aligned\({}_{\text{soft}}\) & 17.88 & 7.21 & 3.68 & 2.12 & 16.52 & 7.31 & 15.50 & 50.73 \\
 & Aligned\({}_{\text{w/oA}}\) & 19.70 & 7.56 & 3.64 & 2.05 & 16.90 & 7.72 & 15.62 & 50.48 \\
 & Aligned\({}_{\text{d-PM}}\) & **20.75** & **8.32** & **4.17** & **2.39** & **17.41** & **8.21** & **16.57** & 50.38 \\ \hline
\multirow{5}{*}{Blender-Joint} & Base & 18.70 & 7.30 & 3.61 & 2.03 & 17.66 & 7.56 & 16.91 & 50.95 \\
 & Aligned\({}_{\text{major}}\) & 20.37 & 8.61 & 4.47 & 2.65 & 19.23 & 8.32 & **21.86** & 51.57 \\
 & Aligned\({}_{\text{soft}}\) & 19.36 & 7.87 & 3.85 & 2.09 & 17.55 & 7.65 & 15.90 & 50.84 \\
 & Aligned\({}_{\text{w/oA}}\) & 21.05 & 8.14 & 3.89 & 2.07 & 17.65 & 8.11 & 15.29 & 50.68 \\
 & Aligned\({}_{\text{d-PM}}\) & **21.05** & **8.97** & **4.74** & **2.78** & **19.39** & **8.48** & 20.34 & **51.81** \\ \hline
\multirow{5}{*}{MultiESC} & Base & 20.36 & 8.80 & 4.92 & 3.14 & 21.00 & 8.58 & 30.69 & 52.74 \\
 & Aligned\({}_{\text{major}}\) & 19.10 & 8.27 & 4.61 & 2.88 & 20.72 & 8.24 & 30.15 & 52.57 \\
 & Aligned\({}_{\text{soft}}\) & 19.30 & 8.33 & 4.62 & 2.88 & 20.83 & 8.35 & 30.75 & 52.54 \\
 & Aligned\({}_{\text{w/oA}}\) & 21.58 & 8.80 & 4.74 & 2.96 & 20.47 & 8.78 & 28.58 & 51.65 \\
 & Aligned\({}_{\text{d-PM}}\) & **21.59** & **9.56** & **5.33** & **3.36** & **21.50** & **9.03** & **32.65** & **53.15** \\ \hline \hline
\end{tabular}
\end{table}
Table 1: Automatic evaluation results on ESConv. All results are significantly better than the corresponding base model with \(p<0.01\).
Human RatingsWe ask human annotators to evaluate the generations of models based on MultiESC, since MultiESC can outperform Blender-Vanilla and Blender-Joint in almost all automatic evaluations. Specifically, we randomly sample \(100\) responses generated by different models for human ratings. We asked annotators to imagine they are help-seekers in the corresponding situation and measure each response in five aspects: (1). _Identification_: on a scale of \(1\sim 5\), how much the response can explore your situation in depth and help identify the problems. (2). _Comforting_: on a scale of \(1\sim 5\), how skillfully the response can comfort you. (3). _Suggestion_: on a scale of \(1\sim 5\), how helpful the response is in solving your problems. (4). _Overall_: on a scale of \(1\sim 5\), the overall quality of this response for emotional support; (5) _Global Consensus_: the number of people who deem the response can help them, \(1\sim 5\) represent nobody (\(<1\%\)), rare (\(5\%\sim 25\%\)), controversial (\(\sim 50\%\)), most (\(75\%\sim 90\%\)), and all (\(>99\%\)), respectively. Each response is annotated by three annotators, and we averaged these three annotations as the final result for each metric.
As shown in Table 2, our method performs best among all compared methods. Aligned\({}_{\text{d-PM}}\) obtains the highest score in all aspects, including global consensus. This demonstrates that our method can generate less controversial and more helpful responses in the task of emotional support conversation.
### Integrity RoT Generation
We also apply our method to the integrity "Rule-of-Thumb" (RoT) generation task. This task is concerned with describing a chatbot's normative rules, which holds great potential for advancing research on morally-consistent conversational agents [42]. When it comes to outlining a chatbot's normative rules, people's values can vary widely. However, morally-consistent conversational agents are expected to accommodate the values of as many individuals as possible. Therefore, the generation of widely acceptable RoTs is crucial for guiding the behavior of these agents.
Dataset and Base ModelsThe MIC dataset [42] comprises about \(99\)k distinct RoTs that encapsulate the moral assumptions inherent in \(38\)k machine-generated replies to open-ended prompts. Each prompt is associated with three different RoTs, each provided by a distinct annotator. Alongside each RoT, annotators offer a "global consensus" value, \(\beta\), which signifies the estimated proportion of the global population that would agree with the RoT. We utilize T5 (small), Flan-T5 (base) and BART (large) models as our base models, and fine-tune them on the MIC dataset. For model inference, we closely follow the processes presented in [42]. Specifically, we adopt three decoding strategies: greedy decoding, beam search (\(n=3\)), and nucleus sampling (\(p=0.9\)). We generate one RoT for greedy decoding; for the latter two, three hypotheses are generated and the highest-scoring one is selected.
Human Preferences with DisagreementWe learn human preferences with disagreement for normative rules from the MIC dataset. The open-ended prompts are treated as context, and the ground truth RoT is considered the text to be evaluated. A probabilistic label is assigned to each RoT based on its global consensus: the probability for the class _acceptable_ is \(\beta\) while that for _unacceptable_ is \((1-\beta)\). We utilize this dataset to train the d-PM; details can be found in Appendix B.2.
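A minimal sketch of this labeling step, assuming only that each RoT comes with a global-consensus value \(\beta\in[0,1]\); the function and field names are illustrative, not from the MIC release.

```python
# Convert an RoT's global consensus beta into the probabilistic (soft) label
# used to train the d-PM: P(acceptable) = beta, P(unacceptable) = 1 - beta.
def consensus_to_soft_label(beta: float) -> dict:
    assert 0.0 <= beta <= 1.0
    return {"acceptable": beta, "unacceptable": 1.0 - beta}

# Example: an RoT that an estimated 75% of the global population would agree with.
print(consensus_to_soft_label(0.75))  # {'acceptable': 0.75, 'unacceptable': 0.25}
```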
Experimental SetupWe apply our proposed framework to align each base model that has been rigorously fine-tuned on the MIC dataset. To assess the effectiveness of d-PM, we also train a preference model (soft) by minimizing the loss computed using Equation (11). The number of candidates \(K\) is set to \(5\). We adopt the same hyperparameters for training each aligned model as are used for the base model. We conduct five training runs for each aligned model using five distinct seeds. Then, we evaluate each of the five trained models on the test dataset and calculate the average results.
\begin{table}
\begin{tabular}{l|c c c c|c} \hline \hline
**Model** & **Identification** & **Comforting** & **Suggestion** & **Overall** & **Global Consensus** \\ \hline Base & 3.017 & 2.562 & 2.918 & 2.598 & 2.693 \\ Aligned\({}_{\text{major}}\) & 3.032 & 2.572 & 2.880 & 2.598 & 2.763 \\ Aligned\({}_{\text{soft}}\) & 3.007 & 2.557 & 2.905 & 2.568 & 2.747 \\ Aligned\({}_{\text{d-PM}}\) & **3.052** & **2.587** & **2.952** & **2.637** & **2.783** \\ \hline \hline \end{tabular}
\end{table}
Table 2: Human evaluation results on ESConv. The base model is MultiESC.
Automatic EvaluationIn accordance with previous work [42], we report standard ROUGE [20] (R-1/2/L), SacreBLEU [27], BERTScore [38], and average generation length (Avg. Len) metrics. As each prompt-reply pair in our dataset has three ground truth RoTs, we compute each metric by taking the maximum score from these three, following the method employed by [42]. The results are displayed in Table 3.
The results clearly show that Aligned\({}_{\text{d-PM}}\) generally outperforms its base model. In addition, Aligned\({}_{\text{d-PM}}\) achieves the highest score across all evaluation metrics, except the generation length, when employing a beam decoding strategy. While Aligned\({}_{\text{soft}}\) slightly improves performance with T5 (small) as the base model, a minor decline is observed when the base model is either Flan-T5 (base) or BART (large). Interestingly, the enhancements observed in this task are not as pronounced as those witnessed in the emotional support conversation task (refer to Table 1). This may be attributed to the MIC dataset inherently accounting for disagreement, as each prompt is paired with three RoTs from different annotators. This enables base models, when fine-tuned on this dataset, to encapsulate various human preferences to a certain degree. Nonetheless, our framework has the potential to further boost model performance by providing more explicit preference information with disagreement during training.
\begin{table}
\begin{tabular}{l|c|c|c|c} \hline \hline **Model** & **Well-formedness** & **Fluency** & **Relevance** & **Global Consensus** \\ \hline Base & 0.528 & 2.547 & 2.037 & 2.428 \\ Aligned\({}_{\text{soft}}\) & 0.550 & **2.602** & 2.028 & 2.502 \\ Aligned\({}_{\text{d-PM}}\) & **0.568** & **2.602** & **2.103** & **2.555** \\ \hline \hline \end{tabular}
\end{table}
Table 4: Human evaluation results on MIC. The base model is BART-large (beam).
\begin{table}
\begin{tabular}{l|l|l|c c c|c|c} \hline \hline \multicolumn{2}{c|}{**Model**} & \multicolumn{1}{c}{**R-1**} & \multicolumn{1}{c|}{**R-2**} & \multicolumn{1}{c|}{**R-L**} & \multicolumn{1}{c|}{**BertScore**} & \multicolumn{1}{c|}{**ScareBLEU**} & \multicolumn{1}{c}{**Avg.Len**} \\ \hline \multirow{8}{*}{\begin{tabular}{c} T5 (Small) \\ \end{tabular} } & \multirow{2}{*}{Beam} & Base & 53.44 & **32.97** & **52.17** & 93.44 & **29.05** & 8.94 \\ & & Aligned\_soft & 52.14 & 31.48 & 50.79 & 93.34 & 27.05 & 8.85 \\ & & Aligned\_epM & **53.45** & 32.82 & 52.09 & **93.49\({}^{\dagger}\)** & 28.51 & **8.99** \\ \cline{2-8} & \multirow{2}{*}{Greedy} & Base & 37.04 & 16.30 & 35.27 & 90.94 & 14.27 & **10.47** \\ & & Aligned\_soft & 37.77\({}^{\dagger}\) & 16.82\({}^{\dagger}\) & 35.97\({}^{\dagger}\) & 91.21\({}^{\dagger}\) & 14.68\({}^{\dagger}\) & 9.83 \\ & & Aligned\_epM & **38.15\({}^{\dagger}\)** & **17.22\({}^{\dagger}\)** & **36.40\({}^{\dagger}\)** & **91.29\({}^{\dagger}\)** & **15.15\({}^{\dagger}\)** & 9.74 \\ \cline{2-8} & \multirow{2}{*}{\(p\)=0.9} & Base & 40.22 & 19.23 & 38.56 & 91.59 & 16.71 & **9.85** \\ & & Aligned\_epM & 40.90\({}^{\dagger}\) & 19.70\({}^{\dagger}\) & 39.24\({}^{\dagger}\) & 91.79\({}^{\dagger}\) & 16.98\({}^{\dagger}\) & 9.47 \\ & & Aligned\_epM & **41.41\({}^{\dagger}\)** & **20.22\({}^{\dagger}\)** & **39.75\({}^{\dagger}\)** & **91.90\({}^{\dagger}\)** & **17.75\({}^{\dagger}\)** & 9.38 \\ \hline \multirow{8}{*}{\begin{tabular}{c} Flan-T5 (Base) \\ \end{tabular} } & \multirow{2}{*}{Beam} & Base & 55.07 & 34.96 & 53.74 & 93.77 & 30.68 & 9.00 \\ & & Aligned\_epM & 54.82 & 34.65 & 53.49 & 93.75 & 30.34 & 9.00 \\ & & Aligned\_epM & **55.18\({}^{\dagger}\)** & **35.07\({}^{\dagger}\)** & **53.86\({}^{\dagger}\)** & **93.79\({}^{\dagger}\)** & **30.83\({}^{\dagger}\)** & **9.01** \\ \cline{2-8} & \multirow{2}{*}{Greedy} & Base & 37.94 & 17.23 & 36.13 & 91.39 & 15.36 & **9.78** \\ & & Aligned\_epM & 37.84 & 17.03 & 36.00 & 91.38 & 15.12 & 9.75 \\ & & Aligned\_epM & **38.34\({}^{\dagger}\)** & **17.52** & **36.53\({}^{\dagger}\)** & **91.44\({}^{\dagger}\)** & **15.49** & 9.77 \\ \cline{2-8} & \multirow{2}{*}{\(p\)=0.9} & Base & 41.41 & 20.41 & 39.70 & 92.02 & 18.02 & 9.30 \\ & & Aligned\_epM & 41.44 & 20.33 & 39.71 & 92.02 & 17.91 & 9.29 \\ & & Aligned\_epM & **41.78\({}^{\dagger}\)** & **20.69\({}^{\dagger}\)** & **40.09\({}^{\dagger}\)** & **92.07\({}^{\dagger}\)** & **18.26\({}^{\dagger}\)** & **9.32** \\ \hline \multirow{8}{*}{
\begin{tabular}{c} BART (Large) \\ \end{tabular} } & \multirow{2}{*}{Beam} & Base & 54.81 & 35.07 & 53.35 & 93.85 & 30.80 & **9.44** \\ & & Aligned\_epM & 54.82 & 34.85 & 53.36 & 93.82 & 30.35 & 9.36 \\ \cline{1-1} & & Aligned\_epM & **55.05\({}^{\dagger}\)** & **35.18** & **53.62** & **93.86\({}^{\dagger}\)** & **30.85** & 9.40 \\ \cline{1-1} \cline{2-8} & \multirow{2}{*}{Greedy} & Base & 54.77 & 34.85 & 53.30 & **93.84** & **30.51** & **9.54** \\ \cline{1-1} & & Aligned\_epM & 54.54 & 34.53 & 53.10 & 93.80 & 30.01 & 9.47 \\ \cline{1-1} & & Aligned\_epM & **54.81** & **34.86** & **53.39** & 93.83 & **30.51** & 9.48 \\ \cline{1-1} \cline{2-8} & \multirow{2}{*}{\(p\)=0.9} & Base & 54.77 & 34.96 & 53.32 & 93.84 & 30.62 & **9.56** \\ \cline{1-1} & & Aligned\_epM & 54.65 & 34.68 & 53.23 & 93.82 & 30.16 & 9.45 \\ \cline{1-1} & & Aligned\_epM & **54.86** & **35.01** & **53.45** & **93.85\({}^{\dagger}\)** & **30.63** & 9.48 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Automatic evaluation results on MIC. \(\dagger\) represents significantly better than the corresponding base model with \(p<0.01\).
Human RatingsWe randomly select \(100\) replies generated by models with BART (large) as the base model for human evaluation. Adhering to previous practice, we assess generated outputs based on the following criteria [42]: (1) _Well-formedness_: yes or no, does the RoT explain the basics of good or bad behavior with a single judgment or action?; (2) _Fluency_: on a scale of \(1\sim 5\), how much does the RoT align with what an English speaker might naturally say?; and (3) _Relevance_: on a scale of \(1\sim 5\), how well does the RoT apply to the Answer for this specific Question if we assume the RoT is true? Furthermore, we request annotators to provide a _Global Consensus_: how many people globally would agree with this RoT, similar to the method described in Section 4.1. Three annotators evaluate each RoT, and we average these three evaluations as the final score for each metric.
Results presented in Table 4 indicate that our method produces RoTs that are more universally agreeable than those generated by the other two models. Our Aligned\({}_{\text{d-PM}}\) model improves all metrics over the base model and outperforms Aligned\({}_{\text{soft}}\).
## 5 Model Analysis
The effect of candidate number \(K\) during calibration.To examine the influence of varying the candidate number \(K\) on model calibration, we modify the MultiESC calibration process with different candidate numbers, namely \(5\), \(10\), \(15\), and \(20\). Specifically, this involves changing the beam widths in the diverse beam search process. In theory, a larger candidate number brings more samples under consideration, raising the upper bound of performance. However, as depicted in Figure 4, model performance first improves and then degrades as \(K\) increases. This is because an overly large candidate number can introduce redundant samples with only minor differences, and the generation model may erroneously distinguish between them.
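For concreteness, the following sketch shows how \(K\) candidates could be drawn with diverse beam search using the Hugging Face transformers generate() API; the model checkpoint, prompt, and diversity penalty are illustrative assumptions rather than the exact settings used for MultiESC.

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("facebook/bart-base")  # placeholder checkpoint
model = AutoModelForSeq2SeqLM.from_pretrained("facebook/bart-base")

K = 5  # number of candidates considered during calibration
inputs = tokenizer("I feel anxious about my upcoming exams.", return_tensors="pt")
outputs = model.generate(
    **inputs,
    num_beams=K,              # beam width scales with K
    num_beam_groups=K,        # group (diverse) beam search
    diversity_penalty=1.0,    # encourages dissimilar candidates
    num_return_sequences=K,
    max_new_tokens=64,
)
candidates = tokenizer.batch_decode(outputs, skip_special_tokens=True)
```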
Figure 4: Model performances with different candidate numbers \(K\) when calibrating MultiESC with preference scores of d-PM.
Figure 5: Comparison between alignment with RL (RL) and our model (Ours). Left: Automatic evaluation results (#(Samples)/s indicates the number of trained samples per second). Right: Training loss according to training steps.
Contrastive Learning vs. Reinforcement Learning.We opt for contrastive learning to align generation models with human preferences instead of the currently prevalent reinforcement learning (RL) approach. This decision is rooted in several considerations. Firstly, RL requires expensive online decoding procedures, while our contrastive-learning framework is a one-time offline process [40]. Secondly, RL usually converges slowly [9]. To validate this, we align MultiESC using RL. As shown in Figure 5, RL processes fewer training samples per second than our framework. After the same number of steps, the RL loss is still high, and the model performance is much worse than ours.
## 6 Related Work
Developing human-centric systems, which ensure that human stakeholders benefit from system outcomes [14], remains a great challenge. To build a human-centric system, it is critical to align models with human preferences [35]. There are various methods to implement the alignment. Currently, the most popular and well-known method is reinforcement learning from human feedback, thanks to the GPT series [26]. This method is also used for text summarization [31; 4], detoxification [6], and machine translation [15; 16]. The above-mentioned methods adopt one reward model, while some methods combine rewards computed by different reward models to consider fine-grained aspects of human needs [8; 11]. Moreover, some models use human feedback as the supervision signal directly to learn human preferences, such as fine-tuning pre-trained models with well-established datasets [10]. Human feedback can also be used to augment prompts for better performance [24].
## 7 Conclusion and Future Work
In this work, we strive to align models with human preferences to foster the development of trustworthy human-centric NLG systems. Unlike previous approaches, we take inherent disagreement into account for modeling human preferences. This idea is motivated by two compelling reasons. Firstly, it is impractical to expect consensus in human preferences due to the high degree of subjectivity involved. Secondly, harmonizing preference disagreements can inadvertently disadvantage minority groups. Accordingly, we introduce a Bayesian approach, termed Preference Modeling with Disagreement (short as **d-PM**), to capture the subtleties of disagreement from limited human feedback. We subsequently utilize its preference scores to calibrate pre-existing text generation models. The efficacy of our method is substantiated through experiments in emotional support conversation and integrity Rule-of-Thumb generation.
Despite our focus on disagreement, another critical aspect remains to be considered when modeling human preferences. In this work, like most previous studies, we assumed a linear relationship between the preference score and the proportion of the global population that finds the text acceptable. However, this assumption may not always hold. For example, a sentence that \(20\%\) of people find helpful should ideally have a preference score closer to \(0\) rather than \(0.2\). We believe this issue merits further exploration in future research. Fortunately, in this study, we sidestepped the problem by using the rank of preference scores rather than the scores themselves.
## 8 Acknowledgements
This work is supported by Research Grants Council of Hong Kong (PolyU/5204018, PolyU/15207920, PolyU/15213323) and National Natural Science Foundation of China (62302184, 62076212).
|
2310.14857
|
GDOP Based BS Selection for Positioning in mmWave 5G NR Networks
|
The fifth-generation (5G) of mobile communication supported by
millimetre-wave (mmWave) technology and higher base station (BS) densification
facilitate to enhance user equipment (UE) positioning. Therefore, 5G cellular
system is designed with many positioning measurements and special positioning
reference signals with a multitude of configurations for a variety of use
cases, expecting stringent positioning accuracies. One of the major factors
that the accuracy of a particular position estimate depends on is the geometry
of the nodes in the system, which could be measured with the geometric dilution
of precision (GDOP). Hence in this paper, we investigate the time difference of
arrival (TDOA) measurements based UE positioning accuracy improvement,
exploiting the geometric distribution of BSs in mixed LOS and NLOS environment.
We propose a BS selection algorithm for UE positioning based on the GDOP of the
BSs participating in the positioning process. Simulations are conducted for
indoor and outdoor scenarios that use antenna arrays with beam-based mmWave NR
communication. Results demonstrate that the proposed BS selection can achieve
higher positioning accuracy with fewer radio resources compared to the other BS
selection methods.
|
A. Indika Perera, K. B. Shashika Manosha, Nandana Rajatheva, Matti Latva-aho
|
2023-10-23T12:25:50Z
|
http://arxiv.org/abs/2310.14857v1
|
# GDOP Based BS Selection for Positioning in mmWave 5G NR Networks
###### Abstract
The fifth generation (5G) of mobile communication, supported by millimetre-wave (mmWave) technology and higher base station (BS) densification, facilitates enhanced user equipment (UE) positioning. Therefore, the 5G cellular system is designed with many positioning measurements and special positioning reference signals with a multitude of configurations for a variety of use cases, targeting stringent positioning accuracies. One of the major factors that the accuracy of a particular position estimate depends on is the geometry of the nodes in the system, which can be measured with the geometric dilution of precision (GDOP). Hence, in this paper, we investigate improving UE positioning accuracy based on time difference of arrival (TDOA) measurements, exploiting the geometric distribution of BSs in a mixed LOS and NLOS environment. We propose a BS selection algorithm for UE positioning based on the GDOP of the BSs participating in the positioning process. Simulations are conducted for indoor and outdoor scenarios that use antenna arrays with beam-based mmWave NR communication. Results demonstrate that the proposed BS selection can achieve higher positioning accuracy with fewer radio resources compared to the other BS selection methods.
Geometric dilution of precision, PRS, beam sweeping, TDOA, LOS/NLOS.
## I Introduction
The use of radio signals for positioning and navigation has a long history. Cellular systems, among the most widely used wireless systems, have supported positioning since the first generation (1G). Even though they were originally designed for communication purposes, every generation of cellular systems from 2G onwards has supported user equipment (UE) positioning. All cellular positioning technologies make use of signal measurements from cellular base stations (BS) and devices, and therefore typically use the existing cellular infrastructure, whether or not they are designed specifically for that purpose. Hence, there are many standard-defined as well as non-standardized measurements and methods in use for cellular positioning.
Initially, the introduction of the FCC E911 requirements encouraged the study of accurate localization in cellular systems, but as positioning accuracy and capabilities increased, other commercial use cases also started using cellular positioning [1]. Accuracy requirements for such use cases are broad and cover many versatile applications such as wearables, industry, automotive, self-driving, drones, logistics and tracking. Although global navigation satellite systems (GNSS), originally deployed for military purposes and now widely applied to commercial applications, can meet some of the accuracy requirements of the use cases mentioned above, they suffer from limited coverage in dense urban areas and particularly in indoor environments [2]. Moreover, many natural use cases of positioning occur in indoor environments such as a house or office, where GNSS availability is limited as opposed to cellular positioning. Long range radio (LoRa), Wi-Fi, Bluetooth, and other wireless networks also have positioning capability with the advantages of low cost, low power consumption and low complexity [3]. However, their limited bandwidth, power and complexity constraints result in low positioning accuracy as well.
The fifth generation (5G) of mobile communication is mainly supported by millimetre-wave (mmWave) multiple-input multiple-output (MIMO) technology, where both the UE and the BS are equipped with antenna arrays with a large number of antennas. Further, it operates at high carrier frequencies, typically beyond 24 GHz, providing large bandwidth for communication and thus high data rates. These favourable properties, along with higher network densification, beamforming with narrow beams and precise angle estimation with multiple antennas, pave the way for more precise positioning with 5G than with earlier generations of cellular networks. Thus, the 3rd generation partnership project (3GPP) placed higher requirements on the positioning accuracy of the 5G system in 5G NR Release 16. New NR positioning reference signals (PRS) for the downlink (DL) and sounding reference signals (SRS) for the uplink (UL), round-trip time (RTT) measurements with multiple base stations (Multi-RTT), DL and UL time difference of arrival (TDOA) measurements, and BS angle of arrival (AoA) or angle of departure (AoD) measurements are key physical layer technologies that support user positioning in the 5G system.
The accuracy of a particular position estimate depends on several factors, including the radio ranging measurement accuracy, the algorithm used to process the measurements, and the geometry of the nodes in the system. Identifying the line-of-sight (LOS)/ non-line-of-sight (NLOS) signals and selecting LOS BSs to estimate position has become a major approach to mitigate the effect of NLOS caused measurement errors on positioning results.
The geometric dilution of precision (GDOP) is defined as the ratio of the accuracy of a position estimate to the statistical accuracy of the ranging measurements [4]. For a terrestrial system, if the location and number of base stations in the desired coverage area are not carefully planned, the GDOP effect can become the dominant factor limiting the performance of the system. When the angular positions of the BSs are close together, the GDOP value is high, resulting in poor positioning performance. For good positional accuracy, the angular positions of the transmitting BSs should be such that the receiving UE is "surrounded" by BSs.
GDOP is a well-investigated metric for the design of GNSS and for satellite selection in positioning calculations. Even though GDOP-based selection is used in other positioning systems such as satellites [5] and ultrasound [6], there is limited literature on using it for cellular user positioning in 5G. In [7], a GDOP analysis is presented for three types of BS setups, where the BSs are on a circle and the mobile device is also on the circle, on radials, or near a base station. Position accuracy and GDOP analyses are presented in [8] for indoor mesh positioning systems in multipath and NLOS propagation environments. The authors in [9] propose a GDOP-assisted BS selection method for hybrid TDOA, RTT and direction of arrival (DOA) positioning in a mixed LOS and NLOS indoor open office (IOO) environment where the BS geometry is fixed. To the best of our knowledge, the existing literature lacks studies on GDOP-based BS selection for mmWave beam-based networks, especially when the BS geometry is not fixed. Since the positions of the BSs, as well as the number of LOS BSs participating in the positioning calculations, play a major role in achieving high accuracy, it is worthwhile to investigate this area in the mmWave 5G NR network to achieve the stringent positioning accuracies required by upcoming generations of cellular networks.
In this paper, we investigate UE positioning accuracy improvement by exploiting the geometric distribution of the BSs and their LOS conditions. We present a BS selection criterion for UE position calculation in a mmWave 5G NR network that uses beam-based communication between the UE and randomly distributed BSs. Further, a derivation of the GDOP for TDOA-based positioning measurements is presented, and the proposed BS selection algorithm is based on the calculated GDOP of the BSs in a mixed LOS and NLOS environment. Simulation results for indoor and outdoor scenarios demonstrate that the proposed BS selection can achieve higher positioning accuracy with fewer radio resources.
The rest of the paper is organized as follows. In Section II, we introduce the 5G NR PRS structure, system setup and mathematical derivation of GDOP calculation. Further, in Section III we describe the proposed BS selection algorithm in detail. In Section IV we evaluate the performance of the algorithm through simulation results. Finally, in Section V, we summarize our major findings.
## II System Model
### _NR DL positioning reference signals (DL PRS)_
DL PRS resource corresponds to a collection of resource elements arranged in a particular time/frequency pattern where inside each resource element pseudo-random QPSK sequences are transmitted. Within a slot, a DL PRS resource can be configured to span 2, 4, 6, or 12 consecutive orthogonal frequency-division multiplexing (OFDM) symbols. When considering the frequency domain pattern, a DL PRS resource has a comb-like pattern, which means that a QPSK symbol is transmitted on every N-th subcarrier, where N can take the values 2, 4, 6, or 12 [10]. The minimum transmission bandwidth of PRS is 24 contiguous physical resource blocks (PRBs) and the maximum transmission bandwidth is 272 PRBs [11].
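The comb structure can be pictured as a boolean mask over the slot's resource grid; the sketch below builds such a mask for an assumed 24-PRB (288-subcarrier) allocation, with a simple per-symbol frequency stagger rather than the exact 3GPP relative-offset sequence.

```python
import numpy as np

def prs_re_mask(n_subcarriers=288, n_symbols=14, comb_size=4,
                first_symbol=2, n_prs_symbols=4):
    """Mark the resource elements carrying PRS QPSK symbols within one slot."""
    mask = np.zeros((n_subcarriers, n_symbols), dtype=bool)
    for l in range(n_prs_symbols):
        offset = l % comb_size                     # illustrative staggering
        mask[offset::comb_size, first_symbol + l] = True
    return mask

# Every comb_size-th subcarrier of each configured OFDM symbol carries PRS.
print(prs_re_mask().sum())  # number of PRS resource elements in the slot
```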
### _Communication system setup_
We consider a MIMO OFDM system in which every BS is equipped with a uniform rectangular array (URA) of \(N_{t}\) antennas and a UE equipped with a URA of \(N_{r}\) antennas operating at a carrier frequency \(f_{c}\) and bandwidth \(B\).
We consider BSs transmitting NR PRS to a single UE. At a BS, the NR PRS signal, which is generated according to the physical layer cell identity number of the respective BS, is transmitted with \(M_{t}\) transmission beams corresponding to the BS beam sweeping directions. For each BS sweep direction, the PRS signal is transmitted \(M_{r}\) times, corresponding to the UE beam sweeping directions. Therefore, there is a total of \(M_{t}\times M_{r}\) NR PRS transmissions between a specific BS and the UE.
A mmWave scattering MIMO channel, which simulates multipath propagation between the BS transmit array and the UE receive array, is used. PRSs radiated from a BS transmit array are reflected by multiple scatterers in the environment before reaching the UE receive array. This channel introduces a BS-to-UE distance-dependent delay, gain, phase change, and atmospheric loss.
### _Positioning system setup_
We consider that the UE can receive downlink PRSs from \(M\) BSs and obtain \(M-1\) TDOAs with respect to the reference BS. Without loss of generality, we select the nearest BS as the reference BS and denote it as BS 1. The remaining BSs are sorted by their distance to the UE, and the farthest is denoted BS \(M\).
In the following description, we consider a two-dimensional Cartesian coordinate system. Hence, the actual position of the UE is denoted by \(\boldsymbol{u}=\left(x_{u},y_{u}\right)^{T}\) and the BS positions are denoted by \(\boldsymbol{v}_{i}=\left(x_{i},y_{i}\right)^{T}\), \(i=1,\ 2,\ 3,\ldots,\ M\), as shown in Fig. 1. The actual distance between the \(i\)th BS and the UE is given by
\[\begin{split} d_{i}=\|\boldsymbol{v}_{i}-\boldsymbol{u}\|=\sqrt {\left(x_{i}-x_{u}\right)^{2}+\left(y_{i}-y_{u}\right)^{2}},\\ i=1,2,3,\ldots,M.\end{split} \tag{1}\]
Fig. 1: System setup comprising of multiple MIMO BSs and a single UE.
The ranging difference between the \(i\)th and the reference BS, calculated using TDOA measurements can be expressed as follows
\[r_{i,1}=c(t_{i}-t_{1}),\ i=2,3,\ldots,M, \tag{2}\]
where \(r_{i,1}\) is the ranging difference calculated by TDOA measurement, \(t_{i}\) is measured PRS arrival time from \(i\)th BS to UE, \(t_{1}\) is measured PRS arrival time from BS \(1\) to UE and \(c\) is the speed of light. Alternatively, the ranging difference can be expressed as follows
\[r_{i,1}=d_{i}-d_{1}+e_{i,1}=d_{i,1}+e_{i,1},\ i=2,3,\ldots,M, \tag{3}\]
where \(d_{i,1}\) is the actual distance difference between the \(i\)th and the reference BS and \(e_{i,1}\) is the measurement error. Two dimensional TDOA measurement model in matrix form when UE is at an arbitrary location \(\mathbf{x}=[x,y]^{T}\) is
\[\mathbf{r}=\mathbf{f}(\mathbf{x})+\mathbf{\tilde{e}}, \tag{4}\]
where \(\mathbf{r}=[r_{2,1}\ r_{3,1}\ldots r_{M,1}]^{T},\ \mathbf{\tilde{e}}=[e_{2,1}\ e_{3,1} \ldots e_{M,1}]^{T}\) and
\[\mathbf{f}(\mathbf{x})=\begin{bmatrix}d_{2,1}(\mathbf{x})&d_{3,1 }(\mathbf{x})&\ldots&d_{M,1}(\mathbf{x})\end{bmatrix}^{T}\] \[\qquad=\begin{bmatrix}\sqrt{\left(x_{2}-x\right)^{2}+\left(y_{2} -y\right)^{2}}-\sqrt{\left(x_{1}-x\right)^{2}+\left(y_{1}-y\right)^{2}}\\ \sqrt{\left(x_{3}-x\right)^{2}+\left(y_{3}-y\right)^{2}}-\sqrt{\left(x_{1}-x \right)^{2}+\left(y_{1}-y\right)^{2}}\\ \vdots\\ \sqrt{\left(x_{M}-x\right)^{2}+\left(y_{M}-y\right)^{2}}-\sqrt{\left(x_{1}-x \right)^{2}+\left(y_{1}-y\right)^{2}}\end{bmatrix}. \tag{5}\]
### _Localization algorithm_
In order to find the position of the UE using the TDOA measurements expressed in (5), we construct the least-squares cost function as
\[\mathbf{J}(\mathbf{x})=\sum_{i=2}^{M}\left(r_{i,1}-\sqrt{\left(x_{i}-x\right) ^{2}+\left(y_{i}-y\right)^{2}}+\sqrt{\left(x_{1}-x\right)^{2}+\left(y_{1}-y \right)^{2}}\right)^{2},\]
in matrix form
\[\mathbf{J}(\mathbf{x})=(\mathbf{r}-\mathbf{f}(\mathbf{x}))^{T}(\mathbf{r}- \mathbf{f}(\mathbf{x})), \tag{6}\]
and the least square position estimate is
\[\mathbf{\hat{x}}=\arg\min_{x}\mathbf{J}(\mathbf{x}). \tag{7}\]
The minimization in (7) is carried out via a steepest-descent iterative procedure, where the update at each iteration is calculated using
\[\mathbf{\hat{x}_{k+1}}=\mathbf{\hat{x}_{k}}-\mu\nabla\mathbf{J}(\mathbf{\hat{x }}), \tag{8}\]
where \(\mu\) is a positive constant that controls the convergence rate and stability, and \(\nabla\mathbf{J}(\mathbf{\hat{x}})\) is the gradient vector evaluated at the \(k\)th iteration estimate.
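A minimal NumPy sketch of this estimator is given below: it evaluates the residuals of (6) and applies the steepest-descent update (8). The BS positions form an \((M\times 2)\) array whose first row is the reference BS, r holds the \(M-1\) range differences \(r_{i,1}\), and the step size and iteration count are illustrative choices, not values from the paper.

```python
import numpy as np

def tdoa_least_squares(bs_pos, r, x0, mu=0.01, n_iter=5000):
    """Steepest-descent minimization of the least-squares TDOA cost J(x)."""
    bs_pos = np.asarray(bs_pos, dtype=float)   # (M, 2); row 0 is the reference BS
    r = np.asarray(r, dtype=float)             # (M-1,) range differences r_{i,1}
    x = np.asarray(x0, dtype=float)            # initial position guess
    for _ in range(n_iter):
        diff = x - bs_pos
        d = np.linalg.norm(diff, axis=1)       # distances d_i(x) to each BS
        res = r - (d[1:] - d[0])               # TDOA residuals
        u = diff / d[:, None]                  # unit vectors from each BS towards x
        grad = 2.0 * (res[:, None] * (u[0] - u[1:])).sum(axis=0)  # gradient of J
        x = x - mu * grad                      # steepest-descent step
    return x
```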
### _GDOP for the TDOA based positioning_
Let \((x^{\prime},y^{\prime})\) be the estimated position and \(g_{i,1}(x^{\prime},y^{\prime}),\ i=2,3,\ldots,M\) be the functional relationships of estimated position and TDOA measurement errors. For a system containing \(M\) BSs, the TDOA measurement error equations can be written as
\[\begin{split} e_{i}=g_{i,1}(x^{\prime},y^{\prime})=r_{i,1}- \sqrt{\left(x_{i}-x^{\prime}\right)^{2}+\left(y_{i}-y^{\prime}\right)^{2}}\\ +\sqrt{\left(x_{1}-x^{\prime}\right)^{2}+\left(y_{1}-y^{\prime} \right)^{2}},\ i=2,3,\ldots,M.\end{split} \tag{9}\]
In order to calculate the GDOP for TDOA positioning, we linearize the measurement equations using a first-order Taylor series. We can approximate \(g_{i,1}(x^{\prime},y^{\prime}),\ i=2,3,\ldots,M\), when the UE is located at \((x_{\circ},y_{\circ})\), as
\[\begin{split} g_{i,1}(x^{\prime},y^{\prime})\approx g_{i,1}(x_{ \circ},y_{\circ})+(x^{\prime}-x_{\circ})\frac{\partial g_{i,1}(x_{\circ},y_{ \circ})}{\partial x}\\ +(y^{\prime}-y_{\circ})\frac{\partial g_{i,1}(x_{\circ},y_{ \circ})}{\partial y},\ i=2,3,\ldots,M.\end{split} \tag{10}\]
When there are no TDOA measurement errors, the estimated position \((x^{\prime},y^{\prime})\) and the UE's actual location \((x_{u},y_{u})\) would be the same; thus, (10) is rewritten as
\[\begin{split} 0=g_{i,1}(x_{\circ},y_{\circ})+(x_{u}-x_{\circ}) \frac{\partial g_{i,1}(x_{\circ},y_{\circ})}{\partial x}\\ +(y_{u}-y_{\circ})\frac{\partial g_{i,1}(x_{\circ},y_{\circ})}{ \partial y},\ i=2,3,\ldots,M.\end{split} \tag{11}\]
When there are TDOA measurement errors, using \(e_{i},\ i=2,3,\ldots,M\) to express the TDOA measurement errors (10) is rewritten as
\[\begin{split}& e_{i}=g_{i,1}(x_{\circ},y_{\circ})+(x^{\prime}-x_{ \circ})\frac{\partial g_{i,1}(x_{\circ},y_{\circ})}{\partial x}\\ +(y^{\prime}-y_{\circ})\frac{\partial g_{i,1}(x_{\circ},y_{\circ} )}{\partial y},\ i=2,3,\ldots,M.\end{split} \tag{12}\]
Let position estimation error vector be \(\triangle\mathbf{u}=\left[e_{x},e_{y}\right]^{T}\), where \(e_{x}=x^{\prime}-x_{u}\), \(e_{y}=y^{\prime}-y_{u}\). Subtracting (11) from (12), the relationship between \(\triangle\mathbf{u}\) and TDOA measurement errors \(e_{i},\ i=2,3,\ldots,M\) can be presented as
\[\begin{split} e_{i}&=e_{x}\frac{\partial g_{i,1}(x_{\circ},y_{\circ})}{\partial x}+e_{y}\frac{\partial g_{i,1}(x_{\circ},y_{\circ})}{\partial y}\\ &=e_{x}\left(\frac{x_{i}-x_{\circ}}{\sqrt{\left(x_{i}-x_{\circ}\right)^{2}+\left(y_{i}-y_{\circ}\right)^{2}}}-\frac{x_{1}-x_{\circ}}{\sqrt{\left(x_{1}-x_{\circ}\right)^{2}+\left(y_{1}-y_{\circ}\right)^{2}}}\right)\\ &\quad+e_{y}\left(\frac{y_{i}-y_{\circ}}{\sqrt{\left(x_{i}-x_{\circ}\right)^{2}+\left(y_{i}-y_{\circ}\right)^{2}}}-\frac{y_{1}-y_{\circ}}{\sqrt{\left(x_{1}-x_{\circ}\right)^{2}+\left(y_{1}-y_{\circ}\right)^{2}}}\right).\end{split} \tag{13}\]
After linearization, the TDOA measurement equations in (9) can be written in the following matrix form:
\[\mathbf{e}=\mathbf{A}\triangle\mathbf{u}, \tag{14}\]
where
\[\begin{split}&\mathbf{e}=\begin{bmatrix}e_{2}\\ e_{3}\\ \vdots\\ e_{M}\end{bmatrix},\quad\mathbf{A}=\begin{bmatrix}\alpha_{2}&\beta_{2}\\ \alpha_{3}&\beta_{3}\\ \vdots&\vdots\\ \alpha_{M}&\beta_{M}\end{bmatrix},\\ &\alpha_{i}=\frac{\partial g_{i,1}(x_{\circ},y_{\circ})}{\partial x},\quad\beta_{i}=\frac{\partial g_{i,1}(x_{\circ},y_{\circ})}{\partial y},\quad i=2,3,\ldots,M.\end{split}\]
For different TDOA measurements, the measurement errors \(e_{i},i=2,\ldots,M\) can be assumed to be independent and identically distributed. Thus, for all elements in the error vector \(\mathbf{e}\),
\[\mathbb{E}(e_{i})= 0,i=2,3,\ldots,M, \tag{15}\] \[Cov(e_{i}e_{j})= 0,i\neq j\ \text{and}\ i,\ j=2,3,\ldots,M. \tag{16}\]
Assume that standard deviations of the TDOA measurement errors are equal. Hence,
\[Var(e_{i})=\sigma_{tdoa}^{2},i=2,3,\ldots,M, \tag{17}\]
where \(\sigma^{2}_{tdoa}\) is the variance of TDOA measurement errors. The weighted least square (WLS) solution to (14) is given by
\[\triangle\mathbf{u}=\left(\mathbf{A}^{T}\mathbf{W}\mathbf{A}\right)^{-1}\mathbf{ A}^{T}\mathbf{W}\mathbf{e}, \tag{18}\]
where \(\mathbf{W}\) is the weighting matrix, which can be obtained from the covariance of the TDOA measurement errors as
\[Cov(\mathbf{e})=\mathbb{E}\left(\begin{bmatrix}e_{2}\\ e_{3}\\ \vdots\\ e_{M}\end{bmatrix}\begin{bmatrix}e_{2}&e_{3}&\dots&e_{M}\end{bmatrix}\right) \tag{19}\] \[=\sigma^{2}_{tdoa}\begin{bmatrix}1&0&\dots&0\\ 0&1&\dots&0\\ \vdots&\vdots&\ddots&\vdots\\ 0&0&\dots&1\end{bmatrix}=\sigma^{2}_{tdoa}\mathbf{\Sigma}.\] \[\mathbf{W}=\mathbf{\Sigma}^{-1} \tag{20}\]
Using (15), the mean of \(\triangle\mathbf{u}\) can be calculated as
\[\mathbb{E}(\triangle\mathbf{u})= \mathbb{E}\Big{[}\big{(}\mathbf{A}^{T}\mathbf{W}\mathbf{A}\big{)} ^{-1}\mathbf{A}^{T}\mathbf{W}\mathbf{e}\Big{]}\] \[= \big{(}\mathbf{A}^{T}\mathbf{W}\mathbf{A}\big{)}^{-1}\mathbf{A}^ {T}\mathbf{W}\mathbb{E}(\mathbf{e})=0. \tag{21}\]
The covariance matrix of estimated position error \(\triangle\mathbf{u}\) can be expressed as
\[\begin{split} Cov(\triangle\mathbf{u})&=\mathbb{E}[\triangle\mathbf{u}\triangle\mathbf{u}^{T}]=\mathbb{E}\big{[}\big{(}\mathbf{A}^{T}(\mathbf{e}\mathbf{e}^{T})^{-1}\mathbf{A}\big{)}^{-1}\big{]}\\ &=\big{(}\mathbf{A}^{T}Cov(\mathbf{e})^{-1}\mathbf{A}\big{)}^{-1}.\end{split} \tag{22}\]
Further, using the results in (19) and (20) the covariance matrix in (22) is calculated as
\[Cov(\triangle\mathbf{u})=\big{(}\mathbf{A}^{T}\frac{1}{\sigma^{2}_{tdoa}} \mathbf{W}\mathbf{A}\big{)}^{-1}=\sigma^{2}_{tdoa}\big{(}\mathbf{A}^{T} \mathbf{W}\mathbf{A}\big{)}^{-1}. \tag{23}\]
GDOP is defined as the ratio of the positioning error to the positioning signal measurement error [4]. The GDOP calculated using WLS is called the weighted GDOP [12], and the GDOP of TDOA-based positioning is given by
\[\mathrm{GDOP}= \sqrt{\frac{Var(e_{x})+Var(e_{y})}{\sigma^{2}_{tdoa}}}\] \[= \sqrt{\frac{tr(Cov(\triangle\mathbf{u}))}{\sigma^{2}_{tdoa}}}= \sqrt{tr(\big{(}\mathbf{A}^{T}\mathbf{W}\mathbf{A}\big{)}^{-1})}. \tag{24}\]
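The sketch below evaluates this weighted GDOP for a 2-D TDOA setup, with the first row of bs_pos taken as the reference BS; \(\mathbf{W}\) is the identity here because the TDOA errors are modelled as i.i.d. with equal variance, as in (17)-(20).

```python
import numpy as np

def tdoa_gdop(bs_pos, x_hat):
    """Weighted GDOP of (24) evaluated at the position estimate x_hat."""
    bs_pos = np.asarray(bs_pos, dtype=float)   # (M, 2); row 0 is the reference BS
    x_hat = np.asarray(x_hat, dtype=float)
    d = np.linalg.norm(bs_pos - x_hat, axis=1)
    u = (bs_pos - x_hat) / d[:, None]          # unit vectors towards each BS
    A = u[1:] - u[0]                           # rows are (alpha_i, beta_i) of (13)
    W = np.eye(A.shape[0])                     # Sigma^{-1} = I for i.i.d. errors
    return np.sqrt(np.trace(np.linalg.inv(A.T @ W @ A)))
```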
## III Methodology
In this section, the NR PRS beam sweeping, measurement procedure and the proposed BS selection algorithm for UE positioning based on the GDOP of the BSs participating in the positioning process are presented in detail.
### _NR PRS beam sweeping and measurements_
Following the typical NR positioning procedure, the location management function (LMF) in the 5G core network informs the BSs and the UE to start the positioning process by providing the required assistance data. The assistance data required for DL-TDOA contain a list of candidate BSs and their positions, together with their DL-PRS configurations. Multiple DL-PRS resources are configured to be transmitted from each candidate BS, each corresponding to a different transmit beam of that BS. We assume that the BSs in the network are time-synchronized with each other via the backhaul network and that all the participating BSs transmit the same number of beams. Hence, the BSs participating in the process transmit PRSs in a synchronized manner via beam sweeping. The UE receives these PRS transmissions by beam sweeping over the number of receive beams configured at the UE. Therefore, the PRS transmission repetition factor in the DL-PRS configurations is selected based on the number of UE beams.
Once all the PRS transmissions are performed, the UE calculates the received power and time of arrival (ToA) of the PRS for each UE receive beam and each transmit beam of every candidate BS participating in the localization process. Then, for each BS, the UE selects the transmit and receive beam pair with the highest received power as the best beam pair for communication between that BS and the UE. The UE calculates the ToA of each beam pair by correlating the received PRS signal with a locally generated copy of the same signal, using the PRS configuration data shared at the beginning of the process. The UE selects the minimum ToA value among all transmit and receive beam pairs between a BS and the UE as the ToA between that specific BS and the UE. These ToA values are used to calculate the time differences of arrival with respect to the reference BS that is selected by the BS selection algorithm explained in Section III-B.
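As an illustration of the correlation step, the sketch below estimates the ToA from real-valued baseband samples by locating the peak of the cross-correlation with a local PRS replica; real NR receivers operate on the OFDM resource grid, so this is a simplification.

```python
import numpy as np

def estimate_toa(received, prs_replica, sample_rate):
    """ToA (seconds) from the peak of the cross-correlation with the PRS replica."""
    corr = np.abs(np.correlate(received, prs_replica, mode="full"))
    lag = int(np.argmax(corr)) - (len(prs_replica) - 1)   # delay in samples
    return max(lag, 0) / sample_rate
```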
Assuming the channel conditions between all the BSs and the UE are known to the UE, the UE categorizes the channel between each BS and the UE as LOS or NLOS. We denote \(BS_{los}\) as the LOS BS set and \(BS_{nlos}\) as the NLOS BS set. Further, the numbers of BSs in these sets are denoted by \(N_{los}\) and \(N_{nlos}\), respectively.
### _Proposed BS selection algorithm_
Once the UE has categorized the BSs into LOS and NLOS, it uses the BSs in the \(BS_{los}\) set to calculate an initial position estimate using the localization algorithm presented in Section II-D. As the first step of the BS selection method, the UE calculates the GDOP of all the BSs in the \(BS_{los}\) set at this initial position estimate. Here it calculates \(N_{los}\) GDOP values, corresponding to the GDOP when each BS is used as the reference BS. These LOS BSs are then sorted in ascending order of the calculated GDOP values. Similarly, the UE calculates the GDOP of the BSs in the \(BS_{nlos}\) set, and these NLOS BSs are also sorted in ascending order of their GDOP values.
The UE then selects \(N\) BSs from the participating BSs to calculate its position. In this selection, the UE gives priority to the GDOP-sorted \(BS_{los}\) set, and the remaining BSs required to reach \(N\) are taken from the GDOP-sorted \(BS_{nlos}\) set. Hence, the LOS BS with the lowest GDOP is the first BS in the selected set, followed by the other LOS BSs in order of increasing GDOP, and finally by the NLOS BSs sorted according to their GDOP. The first BS, i.e., the LOS BS with the lowest GDOP, is selected as the reference BS for calculating the TDOA values needed for the UE position calculation. Finally, the position of the UE is calculated using the selected \(N\) BSs, including the reference BS obtained from the above selection criteria, and the least-squares position estimate presented earlier.
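A compact sketch of this selection rule is given below; it assumes the tdoa_gdop helper from the previous sketch and treats positions as an array of all candidate BS coordinates, with los and nlos holding the corresponding index sets.

```python
import numpy as np

def select_bs(positions, los, nlos, x_init, n_select=4):
    """GDOP-based BS selection: LOS BSs first, each set sorted by GDOP."""
    positions = np.asarray(positions, dtype=float)

    def gdop_as_reference(ref, group):
        # GDOP of the whole group evaluated at x_init with `ref` as reference BS.
        order = [ref] + [b for b in group if b != ref]
        return tdoa_gdop(positions[order], x_init)

    los_sorted = sorted(los, key=lambda b: gdop_as_reference(b, los))
    nlos_sorted = sorted(nlos, key=lambda b: gdop_as_reference(b, nlos))

    selected = (los_sorted + nlos_sorted)[:n_select]
    reference = selected[0]          # LOS BS with the lowest GDOP
    return selected, reference
```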
## IV Simulations and Results
We simulate the proposed BS selection technique using the system setup described in Section II. We simulate an urban microcell (UMi) scenario with 7 BSs located at random positions and an IOO scenario that includes 12 BSs with fixed positions, as shown in Fig. 2. In both scenarios, we simulate a mixed LOS and NLOS environment where the UE has a LOS condition only with some randomly selected BSs. The main simulation parameters are listed in Table I below.
We use 4 BSs for calculating the UE position, since a minimum of 4 BSs (including the reference BS) is required to obtain a unique position estimate from the TDOA measurement equations. We calculate the position of the UE using 4 BSs selected by our proposed BS selection method and, for comparison, using 4 BSs selected randomly from all available BSs, with the closest BS from that selection taken as the reference BS for TDOA calculations. Further, we use another common selection method, distance-based BS selection, where the nearest LOS BSs are selected first and then the nearest NLOS BSs are selected to fill the remaining required number of BSs. In this case, the nearest LOS BS is selected as the reference BS for TDOA calculations. The least-squares position estimate is used in the position calculations for all selection methods and simulation scenarios.
### _Urban micro cell scenario_
For the microcell environment, the BSs are randomly located while maintaining a minimum inter-base-station distance of 100 m in the \(x\) and \(y\) Cartesian axis directions. We simulate a single UE in the UMi environment, and the channels between the UE and the BSs are simulated as described in Section II-B with 5 scatterers in random locations. This setup is simulated for many BS and scatterer location instances, and the user position is estimated for each instance. The error of the estimated position with respect to the true location is then calculated as \(\|\triangle\mathbf{u}\|\) for each instance. Fig. 3 shows the cumulative distribution function (CDF) of the positioning error for all BS selection methods.
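The CDF curves can be produced directly from the per-instance errors; a minimal sketch (variable names are assumptions) is shown below.

```python
import numpy as np
import matplotlib.pyplot as plt

def plot_error_cdf(errors, label):
    """Empirical CDF of positioning errors ||delta u|| over all instances."""
    errors = np.sort(np.asarray(errors, dtype=float))
    cdf = np.arange(1, len(errors) + 1) / len(errors)
    plt.plot(errors, cdf, label=label)
    plt.xlabel("Positioning error [m]")
    plt.ylabel("CDF")
    plt.legend()
```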
The results in Fig. 3 show that the positioning error is reduced when the proposed BS selection algorithm is used for position calculation, compared to random BS selection and distance-based BS selection. The proposed BS selection algorithm clearly outperforms the random BS selection used for comparison. Even though the proposed BS selection is comparable with the distance-based BS selection, the proposed method has 90% of its positioning error values below 1.55 m, while the corresponding value for distance-based BS selection is 1.80 m. These values are on par with most of the positioning error values currently reported in the industry [13]. We are not able to exactly reproduce the results in [13] for comparison with our limited resources, since those results are obtained via system-level simulators. Further, the accuracy of the proposed BS selection method satisfies the minimum horizontal positioning error targets required for regulatory and commercial use cases as presented in [14]. Hence, the proposed BS selection method could be used for higher accuracy requirements where computational power is not limited, while the less complex distance-based method could be used on devices with low computational capability.
### _Indoor open office scenario_
Fig. 3: CDF of the positioning error in UMi with different BS selection methods for positioning.
Fig. 2: Fixed BS locations of the indoor open office scenario with some scatterers located randomly.
The indoor open office scenario is simulated according to the 3GPP TR 38.901 indoor office scenario [15]. As shown in Fig. 2, it has fixed BS locations and a minimum inter-BS distance of 20 m in the \(x\) and \(y\) Cartesian axis directions. We simulate a single UE in the IOO environment, and the channels between the UE and the BSs are simulated as earlier with 5 scatterers in random locations. For the calculation of the UE position, 4 BSs are used, and a CDF plot of the calculated positioning error is drawn for the proposed BS selection, distance-based selection and random BS selection, as shown in Fig. 4.
The results in Fig. 4 show that the proposed BS selection algorithm provides better positioning capability and lower positioning error compared to distance-based BS selection and random BS selection. For the proposed BS selection algorithm, 90% of the positioning error values are below 0.45 m, whereas for the distance-based BS selection algorithm the corresponding value is 0.8 m. Further, these values satisfy the minimum horizontal positioning error targets required for regulatory and commercial use cases. Compared to the previous scenario, the proposed BS selection algorithm provides a larger accuracy improvement relative to the computational complexity added to the system. Since this scenario is an indoor environment, the positions of the BSs do not change dynamically, in contrast to an outdoor scenario. Therefore, considering the lower calculation frequency compared to an outdoor scenario, using the proposed BS selection indoors can provide higher device efficiency than using it outdoors.
Further, the positioning accuracy values we achieve in both scenarios are obtained with less bandwidth (56 PRBs) than most existing results, which use the maximum allowable PRS bandwidth (272 PRBs). Hence, our proposed algorithm requires fewer radio resources to achieve the required positioning accuracies, contributing to increased network efficiency.
## V Conclusion
In this paper, we investigate UE positioning accuracy improvement by exploiting the geometric distribution of BSs in a mixed LOS and NLOS environment. GDOP provides a measure of the geometry of the nodes in a system, which is one of the major factors that the accuracy of a particular position estimate depends on. We propose a BS selection algorithm for UE positioning based on the GDOP of the BSs used for calculating the position. We derive the GDOP calculation method for TDOA-based downlink positioning measurements. Simulations are conducted for mmWave antenna arrays with beam-based communication in indoor and outdoor scenarios, and the results demonstrate that the proposed BS selection can achieve higher positioning accuracy with fewer radio resources. Classifying the LOS and NLOS BS sets used by the algorithm from the measured data itself would be a possible future improvement. Further, the algorithm can be extended to improve the positioning and tracking accuracy of a moving user.
|
2307.08669
|
Leveraging Recommender Systems to Reduce Content Gaps on Peer Production
Platforms
|
Peer production platforms like Wikipedia commonly suffer from content gaps.
Prior research suggests recommender systems can help solve this problem, by
guiding editors towards underrepresented topics. However, it remains unclear
whether this approach would result in less relevant recommendations, leading to
reduced overall engagement with recommended items. To answer this question, we
first conducted offline analyses (Study 1) on SuggestBot, a task-routing
recommender system for Wikipedia, then did a three-month controlled experiment
(Study 2). Our results show that presenting users with articles from
underrepresented topics increased the proportion of work done on those articles
without significantly reducing overall recommendation uptake. We discuss the
implications of our results, including how ignoring the article discovery
process can artificially narrow recommendations on peer production platforms.
|
Mo Houtti, Isaac Johnson, Morten Warncke-Wang, Loren Terveen
|
2023-07-17T17:32:30Z
|
http://arxiv.org/abs/2307.08669v4
|
# Leveraging Recommender Systems to Reduce Content Gaps on Peer Production Platforms
###### Abstract
Peer production platforms like Wikipedia commonly suffer from content gaps. Prior research suggests recommender systems can help solve this problem, by guiding editors towards underrepresented topics. However, it remains unclear whether this approach would result in less relevant recommendations, leading to reduced overall engagement with recommended items. To answer this question, we first conducted offline analyses (Study 1) on SuggestBot, a task-routing recommender system for Wikipedia, then did a three-month controlled experiment (Study 2). Our results show that presenting users with articles from underrepresented topics increased the proportion of work done on those articles without significantly reducing overall recommendation uptake. We discuss the implications of our results, including how ignoring the article discovery process can artificially narrow recommendations. We draw parallels between this phenomenon and the common issue of "filter bubbles" to show how any platform that employs recommender systems is susceptible to it.
## Introduction
Wikipedia's gender, geographical, and other content disparities have long been documented and criticized in the popular media [1, 1, 2, 1, 16] and research literature [1, 2, 1, 2, 3]. Both the Wikimedia Foundation (non-profit that supports Wikipedia) and editor-led communities have devoted substantial attention to addressing the problem over the past several years [1, 1, 2, 16] but, while these efforts have succeeded in reducing some gaps, there remains much to be done. For example, one of the most easily quantifiable and widely acknowledged disparities on Wikipedia is the gender gap--yet as of April 2023, biographies about women still only represent 19.5% of all biographies on English Wikipedia (WikiProject: Women in Red 2023).
We propose leveraging recommender systems to reduce these content disparities. Task-routing recommender systems have already been deployed successfully in Wikipedia [12, 16], but their algorithms tend to optimize for editor interest or predicted reader need alone. By modifying them to additionally consider _content equity_--i.e. prioritizing underrepresented topics--these systems could guide editors towards work that would help to reduce content disparities on Wikipedia.
However, editor interest-based recommender systems have been shown to increase editing by _four times_ compared to recommending random articles [12], and re-orienting them in the direction of content equity could reduce these benefits. Indeed, Warncke-Wang et al. (2015) caution against "simplistic attempts" to push editors towards work they are not interested in, lest they edit less or leave the platform altogether. We agree there is good reason for caution, but other work also gives us reason to be optimistic. Through qualitative analysis of editors' discussions, Houtti et al. (2022) showed that editors consider content balance--including along gender and geographical lines--to be an essential factor in deciding how articles should be prioritized for improvement. Yet those same editors compiled lists of high-priority articles that were _not_ balanced along those dimensions. Based on this, Houtti et al. speculate that editors are at least _willing_ to prioritize articles from underrepresented categories, but that self-focus bias [1] leads them to more readily identify articles salient to their own experiences instead:
"On one hand, I'm surprised it [Menstruation article] isn't here, but then as one of the x-deficient 90% of editors, I wouldn't have even thought to add it."
Similarly, many editors might neglect articles from underrepresented topics not because they are averse to editing those articles, but because they more readily identify articles salient to their own experiences--ones that are more male and more western, among other things [1, 16].
If this is true, **content disparities could be reduced by simply making underrepresented articles more visible to editors**. Indeed, doing so might end up better aligning with editor values that are simply not reflected in the edit histories that task-routing algorithms rely on. This alignment could realistically lead to _more_ editing, as Nov (2007) found that ideology and values are strong motivators for Wikipedians to contribute.
While this does make sense in theory, whether guiding
editors towards under-represented articles would _actually_ cause a meaningful increase or decrease in editing is an empirical question. We therefore conducted two empirical studies on SuggestBot--a task-routing recommender system for Wikipedia [12]. We first analyzed articles recommended by SuggestBot in 2021 (Study 1) and found, among other things, that editors were _more_ likely to edit biographies of women than biographies of men. We then conducted a three-month controlled experiment on SuggestBot, where we replaced a subset of editors' recommendations with the most relevant articles from underrepresented categories (Study 2). We found that these alternative recommendations did not suffer from any significant decreases in uptake. Moreover, providing a higher number of recommendations from underrepresented categories substantially increased the share of recommendation-prompted editing on articles from those categories. Our paper contributes empirical findings that support the use of recommender systems to help reduce content disparities on Wikipedia and other peer-production platforms. In particular, we emphasize how edit history-based inference fails to acknowledge that editing behavior is largely determined by what content the editor discovers in the first place--an oversight that can lead systems to needlessly magnify self-focus bias. We discuss how this may generalize to other platforms that employ recommender systems driven by behavioral data.
We begin by covering the prior research in this area, provide an overview of the SuggestBot system, outline the methods and results of each study, and conclude with a discussion of our findings' implications.
## Related Work
Below we briefly outline the relevant tensions related to content gaps on Wikipedia (the context), the role of recommender systems on Wikipedia (our focus), and research about aligning recommender systems and goals related to fairness or equity (our tested intervention).
### Content Gaps on Wikipedia
Content gaps on Wikipedia are well-documented (see Redi et al. 2021 for a recent review) and while much absolute progress has been made--e.g., there are many more (high-quality) articles about women than ten years ago thanks to organized efforts to close these gaps [10, 11]--the distribution of content on Wikipedia continues to hold major representational biases.1 While these gaps are the result of many complex processes, generally it is understood that a major contributing factor is that of self-focus bias [13]. Self-focus bias is the concept that Wikipedians edit about content that is familiar and of interest to themselves, so a community of editors that are not representative of the world's population [12] will not produce an encyclopedia that is representative of the world's knowledge.
Footnote 1: For example, see [https://humaniki.wmcloud.org/](https://humaniki.wmcloud.org/) for statistics about the gender gap.
The Wikimedia community has responded to the issue of content gaps through initiatives such as adopting a universal code of conduct [14] to address harassment issues known to be a major barrier to gender equity [20], and building partnerships with publishers to provide editors with free access to reliable sources that they can incorporate to reduce barriers in accessibility of digital sources.2 Perhaps the most direct and visible approach has been through organizing campaigns to close these gaps--e.g., by creating biographies of women [11, 12], improving content on important topics [15], or contributing imagery of cultural heritage from around the world [1]. These campaigns help to attract and socialize newcomers [10] as well as focus attention of existing editors on closing these gaps. This collective-action approach has been taken by other peer-production communities such as OpenStreetMap, which for example has organized extensive humanitarian mapping initiatives [16] to help address their geographic content gaps [20].
Footnote 2: [https://en.wikipedia.org/wiki/Wikipedia:The_Wikipedia](https://en.wikipedia.org/wiki/Wikipedia:The_Wikipedia). Library
Self-focus bias would indicate that in the long-term, Wikipedia and other peer-production communities need a more representative community of editors in order to close content gaps. In the meantime, these campaigns represent an important effort to make content more representative. There are open questions, however, about how to support the current Wikipedia community in overcoming some of this self-focus bias to better address these gaps.
### Recommender Systems on Wikipedia
A separate set of initiatives largely aimed at making Wikipedia easier to edit (and therefore reducing the barriers to participation) has focused on building recommender systems for Wikipedia editors. These have included tools like personalized recommendations of articles to edit [12] or translate [23] based on edit history and reader needs, and structured tasks to assist newcomers in learning how to edit Wikipedia [1].
These recommender systems (and other sociotechnical tools aimed at supporting newcomers such as The Wikipedia Adventure [24] and Wikipedia Teahouse [13]) have largely been evaluated on their ability to increase edit counts or improve editor retention. While they may indirectly help close content gaps by reducing participation barriers, there has been less attention to how they might be more explicitly aligned with campaigns' efforts to close content gaps.
### Fairness in Recommender Systems
Within the recommender system and information retrieval literature, research has begun to examine how adjustments to standard algorithms can help to incorporate values such
as fairness. There are many complexities to this work [16] but of particular interest to Wikipedia are systems that seek to balance user personalization with fairness with respect to the items being recommended--i.e. helping editors find topics relevant to their interests, while also incorporating preferences for the overall distribution of items recommended across all the users, such as Mehrotra et al. 2018 and Sonboli et al. 2020. These systems suggest a means by which self-focus bias can be tailored to better align with the goals of campaigns. Most systems have been evaluated only in offline settings and not in a complex in-the-field setting like Wikipedia, in which a recommendation to improve an article is likely a much larger ask than suggesting a song or other piece of media to consume.
## Background: SuggestBot
SuggestBot is a recommender system on Wikipedia that recommends work tasks based on editor interest, as inferred from past edit history (see [14] for details). Editors can request a single set of recommendations, or subscribe to receive personalized recommendations at a configurable interval--e.g., every 2 weeks. In either case, SuggestBot provides a set of 30 articles (Figure 1), each annotated with metadata about the article's quality and popularity as well as an indication of the kind of work needing to be done on the article--e.g., "Add Sources".3
Footnote 3: These task types are manually assigned to articles by other Wikipedia editors in the course of their editing.
## Methods (Study 1)
Study 1 was an offline analysis of the efficacy of SuggestBot recommendations. The goal was to establish a baseline understanding of how editing behavior is impacted by factors relevant to content gaps and inform the subsequent experiment (Study 2).
### Dataset and Features
The period for Study 1 was 2021 in its entirety. We collected all SuggestBot recommendations during this period. To ensure we had adequate data for each editor, we removed any editors who made fewer than 100 edits during the study period from our dataset. We then removed any recommendations made while the editor was inactive. We considered an editor inactive if they made no edits to _any_ articles in the 30-day period following the recommendation. Our final dataset contained 82,650 recommendations (2755 sets of 30) across 375 users.
For our outcome variable, each recommendation was labeled as successful (binary) if the editor made 1 or more edits to the recommended article within 30 days of receiving the recommendation.
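As a rough sketch of how this filtering and labeling could be implemented (a minimal pandas version; the file names and column names such as `user`, `article`, `rec_time`, and `edit_time` are illustrative assumptions, not SuggestBot's actual schema):

```python
import pandas as pd

# Hypothetical inputs: one row per recommendation / per edit.
recs = pd.read_csv("suggestbot_recommendations_2021.csv", parse_dates=["rec_time"])
edits = pd.read_csv("editor_edits_2021.csv", parse_dates=["edit_time"])

# Keep only editors with at least 100 edits during the study period.
edit_counts = edits.groupby("user").size()
recs = recs[recs["user"].isin(edit_counts[edit_counts >= 100].index)]

window = pd.Timedelta(days=30)

def edited_within_window(rec, target_article=None):
    """True if the editor edited (optionally: the recommended article) within 30 days."""
    mask = ((edits["user"] == rec["user"])
            & (edits["edit_time"] > rec["rec_time"])
            & (edits["edit_time"] <= rec["rec_time"] + window))
    if target_article is not None:
        mask &= edits["article"] == target_article
    return bool(mask.any())

# Drop recommendations made while the editor was inactive (no edits at all within 30 days).
recs = recs[recs.apply(edited_within_window, axis=1)]

# Binary outcome: 1 if the recommended article itself was edited within 30 days.
recs["success"] = recs.apply(
    lambda r: edited_within_window(r, target_article=r["article"]), axis=1
).astype(int)
```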
We next identified relevant article characteristics that were likely to impact editing behavior, to be used as control variables:
* **views**: the number of article views, by quartile within the dataset. (Quartiles 1-4)
* **top importance rating**: the highest importance rating assigned to the article by a WikiProject. (Unrated, Low, Mid, High, Top)
* **article year**: the period to which the article most closely pertains. (Unknown, Pre-1900s, 20th century, 21st century)
* **predicted class**: the article's quality class, as predicted by the ORES system. (Stub/Start, C/B, A/GA, FA)
* **assessed class**: the article's quality class, as manually assessed and tagged by Wikipedia editors. (None, Stub/Start, C/B, A/GA, FA)
* **task type**: the kind of work needing to be done, as indicated by SuggestBot. (Add sources, Cleanup, Expand, Unencyclopaedic, Merge, Wikify, Orphan, Stub)
We also included editor characteristics likely to affect editing behavior as control variables:
* **account age**: the age of the editor's account, by quartile within the dataset. (Quartiles 1-4)
* **total edit count**: the editor's total number of edits on English Wikipedia, by quartile within the dataset. (Quartiles 1-4)
Finally, we included features to represent various dimensions of content equity:
Figure 1: An example set of recommendations, posted by SuggestBot to a user’s Talk page. Roughly half of the recommendations are part of an experimental group, but the user is not given any indication as to which.
* **gender**: the gender of the article's subject, if it is a biography. (Not Biography, Male, Female, Other)
* **geography**: the region to which the article pertains (Global North, Global South, Both/Region-Neutral)
* **important topics**: whether the article is included in a WikiProject on an important topic.
Gender and geography are easily quantifiable and commonly cited dimensions along which English Wikipedia is known to have content disparities, so we included them as two of our central features of interest. We gathered gender4 and coordinate information on the articles in our dataset from Wikidata.
Footnote 4: Specifically, for articles about humans with the sex-or-gender property, we separated articles into (cisgender) Male, (cisgender) Female, and a final category that incorporated transgender and non-binary gender identities. Given the low proportion of non-binary gender identities represented on Wikipedia, we grouped them with the Female group for our experiment.
Our inclusion of important topics was motivated by its inclusion in Redi et al.'s (2021) taxonomy of knowledge gaps on Wikimedia projects. Though the boundaries of what should be considered an important topic have not been established by the Wikimedia community, we chose to operationalize it as topics directly relevant to the United Nation's Sustainable Development Goals,5 which worked out to 11 WikiProjects: Disability, Politics, Agriculture, Medicine, Education, Water, Sanitation, Energy, Environment, Climate change, and Human rights.
Footnote 5: https://meta.wikimedia.org/wiki/Movement_Strategy/Recommendations/Identify_Topics_for_Impact#What
### Analysis
We fit generalized linear mixed models (GLMMs) with a binomial distribution to identify relationships between our variables of interest and our binary outcome variable. All features were encoded as categorical (unordered factors).
We first constructed a model containing only control variables. We started with the simplest model, which included a random effect for editor and no fixed effects. We iterated through each of our control variables, fitting a new GLMM that included that feature as a fixed effect. If any of the new models significantly improved model AIC (as determined by an ANOVA test), we adopted the model that most improved AIC. In essence, we iteratively added the control variable that most improved model AIC, then repeated this process until none of the remaining control variables significantly improved AIC when added as fixed effects.
Then, for each of our variables of interest, we fit a new GLMM, adding the variable of interest as a fixed effect alongside the selected control variables. We again used an ANOVA to compare the fit of each of these models to the control model.
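The selection procedure can be sketched as follows. Note that `fit_glmm` is a hypothetical wrapper around a mixed-model backend (we fit binomial GLMMs with a random intercept for editor); its interface--returning an object with an `aic` attribute and a `compare` method for the ANOVA-style test--is an assumption made for illustration.

```python
CONTROLS = ["views", "top_importance", "article_year", "predicted_class",
            "assessed_class", "task_type", "account_age", "total_edit_count"]

def fit_glmm(data, fixed_effects):
    """Hypothetical: fit success ~ fixed_effects + (1 | editor), binomial family.
    Returns an object with .aic and .compare(other_model) -> ANOVA p-value."""
    raise NotImplementedError("plug in your mixed-model backend here")

def select_control_model(data, alpha=0.05):
    selected = []
    current = fit_glmm(data, fixed_effects=selected)  # random effect for editor only
    remaining = list(CONTROLS)
    while remaining:
        # Fit one candidate model per remaining control variable.
        candidates = {f: fit_glmm(data, selected + [f]) for f in remaining}
        best_feature, best_model = min(candidates.items(), key=lambda kv: kv[1].aic)
        # Adopt the feature that most improves AIC, if the improvement is significant.
        if best_model.aic < current.aic and current.compare(best_model) < alpha:
            selected.append(best_feature)
            remaining.remove(best_feature)
            current = best_model
        else:
            break
    return selected, current
```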
## Results (Study 1)
### Control Model
The control model included three fixed effects--task type, account age, and predicted class. All other potential control features did not significantly improve the model, so we did not include them. We report the regression coefficients for this model in Table 1. Our control model shows that higher quality articles are edited more than lower quality articles, and that editors who have been registered for longer are less likely to edit recommended articles. It also shows that articles labeled with easier tasks (e.g., Add Sources) are edited more frequently than those labeled with harder tasks (e.g., Wikify). It is worth noting here that article merges are not recorded as edits, which explains the negative coefficient for that task type. We kept recommendations with the Merge task type in our analysis anyway, however, because editors may still work on recommended articles in ways that do not fit with the task type presented by SuggestBot.
### Gender
Adding a fixed effect for gender significantly improved the model (\(p<0.001\)). The model showed a positive regression coefficient (Table 2) for Female recommendations (\(p<0.001\)), with Male in the intercept, indicating a higher likelihood of a Female article recommendation leading to an edit. The coefficient for Non-binary recommendations was not significant due to low sample size.
### Geography
Adding a fixed effect for geography, once again, significantly improved model AIC (\(p=0.042\)). Here, however, the better fit seems to be driven by differences between Region-Neutral and non-Region-Neutral articles; with Global North in the intercept, the coefficient for Global South is not significant (\(p=0.182\)), and the coefficient for Both/Region-Neutral is significant (\(p=0.013\)).
### Important topics
Adding a fixed effect for important topics did not significantly improve model AIC (\(p=0.211\)). The coefficient for important topics is slightly negative but not significant (\(p=0.221\)).
| Fixed Effect | Regression Coefficient (t-statistic) |
| --- | --- |
| (Intercept) | -3.57 (-16.61) *** |
| task type: Cleanup | -0.31 (-3.13) ** |
| task type: Expand | -0.39 (-3.88) *** |
| task type: Merge | -1.21 (-8.87) *** |
| task type: Orphan | -0.35 (-3.34) *** |
| task type: Stub | 0.20 (2.37) *** |
| task type: Unencyclopaedic | -0.64 (-5.84) *** |
| task type: Wikify | -0.65 (-5.86) *** |
| account age: Q2 | -0.27 (-0.93) |
| account age: Q3 | -0.89 (-3.11) ** |
| account age: Q4 | -1.32 (-4.70) *** |
| predicted class: C/B | 0.24 (3.37) *** |
| predicted class: A/GA | 0.28 (2.48) * |
| predicted class: FA | 0.59 (3.49) *** |

Table 1: Regression coefficients for the control model. Includes a random effect for editor.
### Observational Study to Controlled Experiment
Study 1 gave us a baseline understanding of how Wikipedians react to SuggestBot's recommendations. Overall, it paints an encouraging picture of editors' reactions to articles from underrepresented categories. However, observational studies are inherently limited. We may not have incorporated all possible control variables, and doing so would likely make our models unmanageably complex anyway. Moreover, even articles from underrepresented categories were still highly relevant to the editor--otherwise the standard SuggestBot algorithm would not have included them in the first place. For a recommender system to truly have impact, we would need to intentionally present editors with items deemed less relevant. Unfortunately, an observational study does not allow us to generalize to this new paradigm, where content equity has to come at some cost to item relevance. This led us to conduct a controlled experiment in Study 2.
## Methods (Study 2)
### Experimental Design
SuggestBot is an ensemble recommender system; it begins by generating three large (1000+ items) sets of candidate recommendations using three different algorithms. It then iterates through each set--starting with the most relevant item in the set and going in descending order--and selects the first item from the set that meets its basic filtering criteria (e.g., has not already been recommended to this user recently). It cycles through the 3 candidate sets, selecting one recommendation from each, until it has assembled 30 recommendations which are then served to the user.
Even though the recommendations are sorted in relevance order, SuggestBot must go through a lengthy filtering process because it must also select recommendations that are tagged with the appropriate task type. For example, SuggestBot always includes 6 "Add Sources" tasks in a recommendation set; it achieves this by adopting "is tagged with Add Sources" as a filtering criterion in 6 of its iterations. This means that, on average, SuggestBot filters through 461 recommendations before finding a suitable recommendation that meets its basic criteria.
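A simplified sketch of this round-robin filtering loop is shown below; `candidate_sets`, `task_slots`, and `passes_basic_filters` are illustrative stand-ins rather than SuggestBot's actual internals.

```python
def assemble_recommendations(candidate_sets, task_slots, passes_basic_filters):
    """candidate_sets: three lists of candidate articles, each sorted by descending relevance.
    task_slots: the task type each of the 30 output slots must be tagged with."""
    recommendations = []
    for slot, required_task in enumerate(task_slots):
        # Cycle through the three candidate sets, taking one article per slot.
        candidates = candidate_sets[slot % len(candidate_sets)]
        for article in candidates:
            if article in recommendations:
                continue
            if article.task_type == required_task and passes_basic_filters(article):
                recommendations.append(article)
                break
    return recommendations
```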
We implemented our experiment by intervening in the filtering process, after the initial candidate recommendation sets were generated (Figure 2). More specifically, we modified SuggestBot's logic to, in a random 55% of iterations, adopt an additional filtering criterion corresponding to one of our experimental groups. It would only select a candidate article to be recommended if the article met all the original filtering criteria _and_ the new criterion. For our gender experimental group, for example, SuggestBot adopted the filtering criterion "is a biography about a woman"--i.e., it selected the most relevant article from the candidate set _that was a biography about a woman_.
As previously mentioned, adding new filtering criteria meant our treatment articles would, in the aggregate, have lower relevance scores than our unchanged articles. To isolate the effects of these relevance changes from the effects of changes in the article features we were concerned with, we also generated lower-relevance control groups. This would allow us to answer questions like _"did editors react differently to articles about women because they were less relevant, or because they were articles about women?"_
Therefore, if a recommendation was set to receive an additional filtering criterion, it was also given a 50% chance of being a lower-relevance control. In this case, SuggestBot would find the first article meeting all the original filtering criteria and the experimental criterion, but then select the
| Fixed Effect | Regression Coefficient (t-statistic) |
| --- | --- |
| gender: Female | 0.39 (3.31) *** |
| gender: Non-binary | -12.48 (-0.04) |
| gender: Not Biography | -0.19 (-2.57) * |
| geography: Global South | -0.12 (-1.33) |
| geography: Region-neutral | -0.15 (-2.50) * |
| important topics: True | -0.13 (-1.23) |

Table 2: Regression coefficients for gender, geography, and important topics models. Each of the three models contains a random effect for editor, the control model’s fixed effects, and a fixed effect for the corresponding feature of interest. Features are unordered factors, so there is a separate coefficient for each level of each feature.
Figure 2: Diagram describing SuggestBot’s process of filtering recommendations, and how we intervened in the filtering process to produce treatment groups. Black elements represent additions for the experiment. “Add Experiment Criterion” was given a random 55% chance of being yes.
first article _after it_ that met all the original filtering criteria but did _not_ meet the added experimental criterion. Using gender as an example, the end result was that we created two groups of recommendations, both of which suffered similar relevance drops, one of which was composed entirely of articles about women (treatment), while the other contained no articles about women (reduced relevance control).
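Concretely, the per-slot selection under the intervention can be sketched as below (probabilities follow the description above; `meets_experiment_criterion` stands in for the per-group predicates such as "is a biography about a woman", and the function names are illustrative):

```python
import random

EXPERIMENT_GROUPS = ["gender", "geography", "important_topics"]

def pick_article(candidates, passes_basic_filters, meets_experiment_criterion):
    """candidates is sorted by descending relevance; returns (article, group_label)."""
    def first_baseline():
        return next(a for a in candidates if passes_basic_filters(a)), "baseline"

    if random.random() >= 0.55:          # 45%: leave the recommendation unchanged
        return first_baseline()

    group = random.choice(EXPERIMENT_GROUPS)
    is_control = random.random() < 0.5   # reduced-relevance control vs. treatment

    treatment_index = next(
        (i for i, a in enumerate(candidates)
         if passes_basic_filters(a) and meets_experiment_criterion(a, group)),
        None,
    )
    if treatment_index is None:          # no suitable article: fall back to baseline
        return first_baseline()
    if not is_control:
        return candidates[treatment_index], f"{group}_treatment"

    # Reduced-relevance control: first article *after* the treatment pick that meets
    # the basic criteria but does NOT meet the experimental criterion.
    for article in candidates[treatment_index + 1:]:
        if passes_basic_filters(article) and not meets_experiment_criterion(article, group):
            return article, f"{group}_control"
    return first_baseline()
```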
Overall, each recommended article had a probability of belonging to one of the following groups:
* 45%: unchanged/baseline
* 9.17%: gender treatment (female and non-binary articles)
* 9.17%: geography treatment (global south articles)
* 9.17%: important topics treatment (important topics articles)
* 9.17%: gender control (reduced relevance non-female or non-binary articles)
* 9.17%: geography control (reduced relevance non-global-north articles)
* 9.17%: important topics control (reduced relevance non-important-topics articles)
Sometimes, no suitable article could be found for inclusion in an experimental group. In those cases, SuggestBot reverted to selecting the most relevant article using only the basic filtering criteria, and the article was considered to be part of the unchanged/baseline group. Therefore, the actual group breakdown of recommendations was:
* 19,805 (49.5%): unchanged/baseline
* 3,781 (9.5%): gender treatment (female and non-binary articles)
* 3,284 (8.2%): geography treatment (global south articles)
* 3,361 (8.4%): important topics treatment (important topics articles)
* 3,533 (8.8%): gender control (reduced relevance non-female or non-binary articles)
* 3,181 (8.0%): geography control (reduced relevance non-global-north articles)
* 3,045 (7.6%): important topics control (reduced relevance non-important-topics articles)
We added text to each recommendation set's intro message to inform users that an experiment was being run, and allow them to opt out. Only a single user opted out of the experiment; they were given unchanged recommendations and removed from our dataset.
### Dataset
The experiment was conducted over approximately 3 months, from September 7th to December 31st, 2022. Our dataset included all SuggestBot recommendations served in this period while the receiving editor was active. We again defined active as having made at least one edit to _any_ article in the 30 days following recommendation. The final dataset contained 39,990 recommendations (1,333 sets of 30) across 281 unique users with a recommendation being labeled as successful if the editor subsequently edited the article within the next 30 days.
### Analysis
Our experimental intervention could have plausibly reduced engagement across _all recommendations_, even those in the baseline/unchanged group. We therefore first generated descriptive statistics to compare overall recommendation uptake in the context of previous years. We then verified that recommendations in our reduced relevance control groups did in fact have similar relevance to their corresponding treatment group recommendations. Finally, we once again fit generalized linear mixed models (GLMMs) to model the relationships between each of our experiment variables and the same binary outcome as in Study 1--whether the recommendation did or did not prompt work on the article.
For each of our features of interest (gender, geography, and important topics), we fit two GLMMs. One GLMM was fit solely on recommendations that were part of the intervention and contained a fixed effect representing whether a recommendation was in the treatment group or its corresponding lower-relevance control group. The goal of this first model was to examine whether the feature had an effect on editing _assuming relevance were held constant_. The other contained a fixed effect for whether a recommendation was in the treatment group or in the unchanged/baseline group, and was used to examine the effects of replacing SuggestBot's normal recommendations with items from underrepresented categories. Each model was fit using only the recommendations that fit into one of its corresponding groups.
All models included editor as a random effect. For each of the GLMMs, we used an ANOVA to determine whether adding the fixed effect significantly improved model fit.
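Reusing the hypothetical `fit_glmm` wrapper from the Study 1 sketch, the six models could be assembled roughly as follows (data-frame column names are illustrative):

```python
def fit_treatment_models(data, fit_glmm, alpha=0.05):
    results = {}
    for group in ["gender", "geography", "important_topics"]:
        for alternative in ["baseline", f"{group}_control"]:
            # Each model only sees recommendations from the two groups being compared.
            subset = data[data["group"].isin([f"{group}_treatment", alternative])].copy()
            subset["is_treatment"] = (subset["group"] == f"{group}_treatment").astype(int)
            null_model = fit_glmm(subset, fixed_effects=[])               # editor random effect only
            full_model = fit_glmm(subset, fixed_effects=["is_treatment"])
            p_value = null_model.compare(full_model)                      # ANOVA-style comparison
            results[(group, alternative)] = (full_model, p_value)
    return results
```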
## Results (Study 2)
### Descriptive Statistics
#### Experimental intervention did not reduce overall uptake
During our study period, SuggestBot recommendations prompted a total of 1090 edits from 599 of the 39,990 recommendations (1.5% uptake with approximately 2 edits per successful recommendation). Of those 1090 edits, 32% included changes to the article content, 23% included changes to internal links, 24% included changes to references, and 68% included changes to templates.6
Footnote 6: Note that a single edit often included multiple types of changes and the SuggestBot tasks are marked via templates so the high rate of template editing is not surprising in this context.
By comparison, uptake percentages for the same months in 2020 and 2021 were 1.9% and 2.0%, respectively. We suspected this year-over-year reduction was caused by global editing trends on English Wikipedia, rather than our experimental recommendations. For instance, the spikes in edit activity that occurred with the beginning of the COVID-19 pandemic (Ruprechter et al., 2021) have lessened in recent years.7 To confirm, we compared year-over-year recommendation uptake for the month of August (right before our study period), and found 1.9%, 1.6%, and 1.2% for 2020,
2021, and 2022, respectively. The fact that uptake _before_ our study period was already lower than previous years--and that it then _increased_ during the study period--confirmed that global editing trends were likely responsible for these changes. We therefore proceeded to examine what percentage of these edited recommendations belonged to underrepresented categories.
#### More edited recommendations were female
Of 3,781 recommendations in the gender treatment group, 57 (1.5%) prompted editing; this was virtually identical to overall uptake. We were primarily concerned with the ratio of female-to-male articles, so only biographies were included in the following calculations. We found that 40.5% of edited biography recommendations were female during our study period--a substantially higher portion than the 28.8% and 30.0% in 2020 and 2021, respectively. In short, presenting editors with a greater share of female recommendations--41.7% female recommendations, instead of 22.3% and 23.0% in 2020 and 2021--also resulted in a greater share of _editing_ being done on female biographies as compared with male biographies.
#### More edited recommendations pertained to the global south
Of 3,284 recommendations in the geography treatment group, 50 prompted editing (1.5% uptake); once again, this was very similar to overall uptake. We were primarily concerned with the ratio of global-south-to-north articles, so only non-region-neutral articles were included in the following calculations. We found that 29.3% of edited recommendations pertained to the global south, compared with 22.0% in 2020 and 22.3% in 2021. This was a smaller increase than for gender, likely because our experiment changed the baseline share of global south recommendations by a much smaller amount than it did for female recommendations; the percentages of global south recommendations were 28.2%, 27.7%, and 30.9% in 2020, 2021, and 2022. Presenting marginally more global south recommendations, however, seems to have still resulted in more editing on global south articles.
#### More edited recommendations were from important topics
Of 3,361 recommendations in the important topics treatment group, 37 prompted editing (1.1%). However, we found that 11.4% of edited recommendations in our study period pertained to important topics. This was, again, higher than the two previous years--7.2% in 2020 and 7.5% in 2021. The percentage of important topics recommendations presented increased from 10.2% in 2020 and 9.7% in 2021, to 16.0% during our 2022 study period. Once again, recommending more important topics articles resulted in a greater share of _edited_ recommendations being important topics articles.
#### Treatment and reduced relevance control groups had substantial relevance drops
SuggestBot is an ensemble recommender system, so there was no single scale we could use to meaningfully compare recommendations' relevance. To verify that the reduced relevance control groups had approximately similar relevance to their corresponding treatment groups, we compared the number of candidate recommendations SuggestBot had to filter out before finding a suitable recommendation for each group.
| Fixed Effect | Alternative | Regression Coefficient (t-statistic) |
| --- | --- | --- |
| Gender Treatment | Baseline/Unchanged | -0.12 (-0.80) |
| Gender Treatment | Gender Control | 0.27 (1.37) |
| Geography Treatment | Baseline/Unchanged | -0.12 (-0.72) |
| Geography Treatment | Geography Control | 0.02 (0.07) |
| Important Topics Treatment | Baseline/Unchanged | -0.13 (-1.03) |
| Important Topics Treatment | Important Topics Control | 0.33 (1.37) |

Table 3: Regression coefficients for each of the 6 models in Study 2.
Figure 3: Percentage of edited recommendations that belonged to underrepresented categories. Previous years are in red and study period is in blue. Graphs show consistent increases across all categories, likely due to our intervention.
Candidate recommendations were evaluated in decreasing order of relevance within each sub-recommender, so fewer filtered-out candidates generally indicated a more relevant recommendation (and vice versa).
Recall that, for baseline/unchanged recommendations, SuggestBot had to filter out 461 candidate articles before finding a suitable recommendation that met its basic criteria. On the other hand, the means for the gender, geography, and important topics treatment groups were 865, 697, and 789 candidates, respectively, indicating substantial drops in relevance as expected. The means for the lower relevance control groups were similar to those of their corresponding treatment groups: 834, 675, and 750, respectively, confirming that lower relevance control groups were roughly equivalent in relevance to their treatment group counterparts.
### Generalized Linear Mixed Models
Our descriptive statistics so far tell a straightforward story: presenting SuggestBot users with more underrepresented items increased the share of editing done on underrepresented items. We now present results from our analysis using generalized linear mixed models (GLMMs), which allowed us to introduce a random effect for editor and isolate the effects of the features we were interested in (Table 3).
#### Gender fixed effects not significant
Our first GLMM for gender compared female treatment group recommendations with baseline/unchanged recommendations. Adding a fixed effect representing those features did not significantly improve the model over one with just a random effect for editor (\(p=0.42\)), indicating that editors were open to editing biographies of women at about the same rate as their normal recommendations. We fit a second GLMM to compare female treatment group recommendations with their reduced-relevance controls and, as expected, also found no significant model improvement (\(p=0.17\)).
#### Geography fixed effects not significant
We fit a GLMM to compare global south treatment group recommendations with baseline/unchanged recommendations. Once again, adding a fixed effect for these features did not significantly improve our model (\(p=0.47\)), indicating that editors were open to editing articles pertaining to the global south at about the same rate as their normal recommendations. The GLMM comparing global south treatment recommendations with their reduced-relevance controls also was not significantly better than the random-effect-only alternative (\(p=0.94\)).
#### Important topics fixed effects not significant
Our results here are similar to those for the gender and geography experimental groups. Adding a fixed effect representing whether an article was in the important topics treatment or baseline/unchanged group did not significantly improve our model (\(p=0.3\)). Once again, this shows that editors are largely willing to work on these articles at the same rate. The model comparing important topics treatment recommendations with their reduced-relevance control recommendations was also not significantly better than the control model (\(p=0.17\)).
#### Summary
Our "null" results are an exciting outcome; editors show a willingness to edit content on underrepresented topics, even when those items are less relevant. While not significant, the regression coefficients comparing each treatment group with baseline show small decreases in predicted uptake, while the ones comparing each treatment group with its lower-relevance control show small increases. This substantiates our findings from Study 1; _relevance being equal_, recommendations from underrepresented categories have a slightly _higher_ chance of being edited. When relevance decreases substantially to provide more underrepresented items, we do see reduced uptake, but not by enough to significantly offset the increased editing on underrepresented items. We now discuss the implications of these results.
## Discussion
### A Different Kind of Filter Bubble
Personalized task-routing recommender systems like SuggestBot aim to provide the recommendations that are most likely to result in edits. Therefore, in theory, using edit history to personalize recommendations makes sense--we can figure out what an editor _would_ edit by looking at what they have already edited--and research has borne this out [10, 11]. However, this ignores that the process by which editors _discover_ articles also determines what they are likely to edit. In practice, therefore, recommender systems can falsely infer _editing_ preferences from biases in an editor's article discovery process.
Let us imagine, for example, a male editor who loves editing biographies. As he watches the 2022 FIFA World Cup final, he looks up each player's Wikipedia article and makes edits to a large subset of them. A recommender system might then recommend articles similar to those--perhaps other sports-related articles, or more biographies about men. In reality, however, the editor is perfectly happy to work on _any biographies he encounters_. Despite this, the recommender system reinforces the self-focus bias that crept in through the editor's article discovery process. This may cause editors to receive recommendations that are narrower than they would like, sometimes in ways that are noticeable to them.8
Footnote 8: See an example from a SuggestBot user: https://en.wikipedia.org/wiki/User_talk:SuggestBot#Is_there_a_way_to_increase_%20temperature%20
This can apply to any platform that employs recommender systems, especially when those systems' algorithms are based on incomplete behavioral data. In fact, what we identify here is very similar to the "filter bubble" [21] phenomenon, whereby recommender systems narrow a user's content consumption. We can view the filter bubble problem as being caused by the system failing to acknowledge that more and more of the user's content discovery process becomes informed by the system itself. Similarly, Wikipedia recommender systems that use edit history do not incorporate the content discovery process at all, thereby unnecessarily narrowing editors' recommendations.
Yet it is clear that recommender systems are themselves a means of discovering content, so they can be leveraged to _counteract_ self-focus bias in the discovery process. The recommender system literature has explored ways of counteracting various biases over the years, moving from more general concepts of diversity to more targeted fairness objectives (Ekstrand et al., 2022). Indeed, our findings show that this can work; when we presented editors with more articles from underrepresented categories, they _edited_ more articles from those categories. If they were averse to editing these articles, we would have expected to see a significant drop in recommendation uptake--yet we did not. This signals that content gaps are not entirely driven by Wikipedia's editing preferences; biases in the article discovery process likely play a significant role. In fact, based on Houtti et al.'s (2022) analysis of English Wikipedia's values, it would not be at all surprising if Wikipedians' editing preferences were _in favor_ of articles from underrepresented topics. Our Study 1 results support this conclusion for gender and, while not significant, the coefficients in our Study 2 models comparing each treatment group with its lower-relevance control do as well.
### Incorporating Personal Diversity Tolerance
In Study 2, we implemented aggressive, consistent treatments to keep the recommender simple while ensuring that effects could be detected if they were of a reasonable size. Our analysis controlled for individual editors through a random effect but did not explore individual editing differences. Prior work provides more sophisticated means of increasing content equity that do take individual differences into account. Sonboli et al. (2020), for example, describe an algorithm that diversifies recommendations only along dimensions where the user has shown tolerance for diversity. Attempts to leverage recommender systems to reduce content gaps should take advantage of these methods to both minimize the risk of pushing away editors with low diversity tolerance and to maximize the equity benefits gained from editors with high diversity tolerance, even if that tolerance is only along particular dimensions.
### Recommender Systems Are _Part_ of a Solution
It is clear from our results that recommender systems could play a significant role in reducing content gaps, but can they eliminate gaps altogether? Sadly, no. We see recommender systems as powerful tools for _right now_, but our Study 2 models show that current Wikipedia editors are still slightly less likely to edit underrepresented topic articles than their baseline recommendations. In the long term, therefore, we would expect the benefits of more equity-oriented recommendations to top out and not be as useful for the underrepresented gaps that are even more distant in relevance--i.e. less familiar--to the preferences of the existing editor base.
Further, recommender systems do not help address Wikipedia's more subtle equity issues. Menking and Rosenberg (2021) for example, highlight how Wikipedia's demographics inform the encyclopedia's core epistemic practices, making it difficult for those within the community to make more than incremental changes in countering systemic biases. As with any online community, these deeper issues can only be addressed through the ongoing efforts to diversify its membership.
## Limitations
We now outline our work's main limitations. First, we only explored three dimensions of content equity (gender, geography, and important topics), and even the dimensions we did study were relatively high-level. For example, while we saw an increase in global south editing during our study period, about 20% of those edited recommendations pertained to a single country: India. On the gender side, _all_ of the edited biography recommendations were about cisgender individuals; 56 recommended biographies about transgender and non-binary people prompted no edits. This was unsurprising given 1.5% uptake, but highlights that specific attention must be given to biographies of transgender and non-binary people if any measurable progress is to be made on improving them. Content equity on Wikipedia must be achieved across a multitude of dimensions, granularities, and their intersections that we do not study here. However, the consistency of our results along the dimensions we _did_ study gives us reason to be optimistic about how our findings generalize to the ones we did not study when they are given specific attention.
Similarly, we conducted our experiment on a single recommender system--SuggestBot--and in a single language community--English Wikipedia. While we suggest ways in which some of our findings could generalize to other platforms that employ recommender systems, more study in other contexts is needed to confidently claim generalizability.
## Conclusion
In two empirical studies on Wikipedia's SuggestBot, we found that editors were slightly _more_ likely to edit articles from underrepresented categories as long as item relevance was held constant. This suggests recommender systems could be used to improve content gaps on Wikipedia and other peer production platforms. From this, we discussed how interest-based recommender systems might unnecessarily narrow the content they provide users when the metrics from which they make inferences do not capture information about how the user _discovers_ content. This paper demonstrates recommender systems' potential in improving content gaps, but we also acknowledge that such systems are only one part of a larger set of solutions for making online knowledge more equitable.
## Broader Perspective, Ethics and Competing Interests
In this work, we conducted a live experiment on a recommender system for editors on English Wikipedia. The English Wikipedia editor community is very clear that Wikipedia should not be treated as a laboratory and care should be taken to avoid being disruptive.9 As such, we took
several steps to ensure that our research was not disruptive for that community:
* Consent and opt-out: the experimental recommendations were accompanied by a statement indicating that we were running an experiment and a link to a form for opting out (1 request); we also monitored SuggestBot's discussion page for questions or concerns (there were none about the experiment).
* Minimize potential for harm: we designed the experiment to carefully balance need for statistical power with minimally disrupting the user experience (if the intervention turned out to be of negative impact). We left half of the recommendations unchanged and used lower-relevance but still "reasonable" recommendations for the interventions. We were also only providing recommendations and not editing Wikipedia itself.
* Review and approval: while English Wikipedia does not have a formal process for reviewing proposed research, we did receive IRB approval from the first author's institution and incorporated extensive feedback from researchers who have previously run experiments with SuggestBot.
* Public data: while we had access to SuggestBot to make the algorithmic changes, all recommendations and edits are public data, so we were not handling any private information.
Implementing changes to recommender systems on Wikipedia should only be taken in careful consultation with the affected editor communities as discussed by Johnson and Lescak (2022) (something we could not do to preserve the integrity of the experiment) and should probably not be hidden algorithmic tweaks but controllable end-user options (for example: Ekstrand et al. 2015). Otherwise, the interventions could back-fire if they are irrelevant or clash with editor motivation for editing, as discussed in the Introduction.
|
2304.11668
|
CoReFace: Sample-Guided Contrastive Regularization for Deep Face
Recognition
|
The discriminability of feature representation is the key to open-set face
recognition. Previous methods rely on the learnable weights of the
classification layer that represent the identities. However, the evaluation
process learns no identity representation and drops the classifier from
training. This inconsistency could confuse the feature encoder in understanding
the evaluation goal and hinder the effect of identity-based methods. To
alleviate the above problem, we propose a novel approach namely Contrastive
Regularization for Face recognition (CoReFace) to apply image-level
regularization in feature representation learning. Specifically, we employ
sample-guided contrastive learning to regularize the training with the
image-image relationship directly, which is consistent with the evaluation
process. To integrate contrastive learning into face recognition, we augment
embeddings instead of images to avoid the image quality degradation. Then, we
propose a novel contrastive loss for the representation distribution by
incorporating an adaptive margin and a supervised contrastive mask to generate
steady loss values and avoid the collision with the classification supervision
signal. Finally, we discover and solve the semantically repetitive signal
problem in contrastive learning by exploring new pair coupling protocols.
Extensive experiments demonstrate the efficacy and efficiency of our CoReFace
which is highly competitive with the state-of-the-art approaches.
|
Youzhe Song, Feng Wang
|
2023-04-23T14:33:24Z
|
http://arxiv.org/abs/2304.11668v1
|
# CoReFace: Sample-Guided Contrastive Regularization for Deep Face Recognition
###### Abstract
The discriminability of feature representation is the key to open-set face recognition. Previous methods rely on the learnable weights of the classification layer that represent the identities. However, the evaluation process learns no identity representation and drops the classifier from training. This inconsistency could confuse the feature encoder in understanding the evaluation goal and hinder the effect of identity-based methods. To alleviate the above problem, we propose a novel approach namely Contrastive Regularization for Face recognition (CoReFace) to apply image-level regularization in feature representation learning. Specifically, we employ sample-guided contrastive learning to regularize the training with the image-image relationship directly, which is consistent with the evaluation process. To integrate contrastive learning into face recognition, we augment embeddings instead of images to avoid the image quality degradation. Then, we propose a novel contrastive loss for the representation distribution by incorporating an adaptive margin and a supervised contrastive mask to generate steady loss values and avoid the collision with the classification supervision signal. Finally, we discover and solve the semantically repetitive signal problem in contrastive learning by exploring new pair coupling protocols. Extensive experiments demonstrate the efficacy and efficiency of our CoReFace which is highly competitive with the state-of-the-art approaches.
## 1 Introduction
Face recognition (FR) is a long-standing task and plays an important role in numerous applications. The evaluation scenarios of FR can be categorized into two types, i.e. verification and identification. Both of them are based on the similarity between face images. To better adapt to realistic situations, the identities used for evaluation are excluded from training in open-set face recognition [14]. Recently, classification methods have achieved state-of-the-art (SoTA) results in FR, where face images with identity labels are used to train a fine-grained classifier to discriminate different identities. In evaluation, the classifier is usually dropped, as the training-specific identity information contributes little to this process.
To achieve higher intra-class compactness, a series of margin-based methods have been proposed [20, 33, 6], which introduce a margin to make the decision for the correct class harder in training. However, these classification methods ignore the holistic feature space [7]. Some other works focus more on the inter-class separability, which is also key to feature discriminability, and design different loss functions to perform regularization [35, 42, 38, 7]. However, they only investigate the image-identity or the identity-identity relationships during training, and do not fully constrain the image-image similarity which is essential in evaluation. As illustrated in Figure 1, the feature distribution in the identity-based training could achieve high intra-class compactness and inter-class separability with the help of identity proxy
Figure 1: (a) Current identity-based methods aim at intra-class compactness and inter-class separability in training. (b) However, the identity-centric training pays little attention to image-image relationship which is the foundation in evaluation process. The points with grey borderlines are from distinct identities, and other points with the same borderline color are from the same identity. Our CoReFace takes contrastive learning as regularization to directly constrain the relationship between images during training.
features. However, in the sample-based evaluation, the classifier is dropped. Furthermore, the face images in evaluation belong to identities that are different from those seen in training. Thus, the feature distribution in evaluation might not be as discriminative as in training.
To address the above problem, in this paper we propose a novel approach, namely Contrastive Regularization for Face recognition (CoReFace). We constrain the image-image relationship by using contrastive learning to regularize the training process, making the goal of training consistent with the evaluation and thus boosting the performance of open-set face recognition. Contrastive learning pulls semantically similar samples closer and pushes the others away in the representation space [10]. In the FR literature, class-guided contrastive learning has been attempted, which takes samples from the _same class_ to compose positive pairs. For instance, triplet loss is applied solely [25] or jointly [29, 31] with the classification methods. However, with the recent development of margin-based methods, these approaches might cause interference in joint training [13, 6]. On the other hand, sample-guided contrastive learning demonstrates a promising advancement in unsupervised learning [39, 37, 3]. These methods apply stochastic data augmentation to the _same image_ to compose positive pairs, which alleviates the limitation of the label requirement. This further provides a perspective beyond class boundaries. In our approach, we employ sample-guided contrastive learning as regularization to adjust the image-image relationship towards a more semantic and consistent feature distribution in training and evaluation.
However, it is non-trivial to integrate sample-guided contrastive learning with margin-based classification methods. First, as a fine-grained task, face recognition requires a huge number of high-quality images to learn the differences between identities. The data augmentations commonly used in contrastive learning hinder the convergence of FR models [27, 17]. To make sample-guided contrastive learning applicable to FR, we propose a new pipeline that uses feature augmentation instead of data augmentation to generate positive pairs. Second, sample-guided contrastive learning is usually designed to be applied on its own. When jointly training with margin-based classification methods, we find its effectiveness becomes insignificant. To solve this problem, we design a novel contrastive loss function to effectively perform the regularization. Third, the scale of the negative sample pool plays a key role in contrastive learning [3, 11, 5, 8]. When we focus on a general situation with a normal batch size and no extra encoder, we discover a _Semantically Repetitive Signal_ (SRS) problem, i.e. some sample combinations repeatedly contribute to the optimization. This pushes the affected part of the distribution with an inappropriate magnitude. To alleviate this problem, we explore new strategies of pair coupling. The main contributions of this paper are summarized as follows:
* We propose a novel framework to apply regularization in FR by contrastive learning. Unlike previous regularization approaches which adjust the feature distribution with image-identity pairs, our method utilizes image-image relationships which is consistent between training and evaluation.
* We propose a contrastive loss function to perform effective regularization which incorporates an adaptive margin to strengthen the contrastive supervision signal, and a supervised contrastive mask to avoid the supervision collisions in joint training.
* We investigate the SRS problem in contrastive learning in the situation of limited negative samples, and explore different pair coupling protocols to alleviate this problem.
* We conduct extensive experiments on the widely-used benchmarks to demonstrate the superiority of our proposed framework over the existing approaches.
## 2 Related Works
### Margin-based Classification methods
In recent years, we have witnessed a rising trend of margin-based classification methods in FR [20, 33, 6, 34, 15]. In these methods, the representation embeddings of the images and the classes are normalized before their multiplication [32, 2], so the product reduces to the cosine of the angle between the two vectors. During training, a margin parameter is used to enlarge the distance between the matched image-identity pair and the unrelated ones. This improves intra-class compactness, with shorter distances between the representations of the same identity. The normalization relieves the misguidance of the feature norm in the Softmax loss by projecting the features onto a hypersphere, and the margin puts a strong constraint on the image-identity feature pairs on this hypersphere. While these methods achieve high intra-class compactness, they fail to exploit the holistic feature space [7]. To improve the feature distribution, our CoReFace puts constraints on the image-image relationship during training.
### Feature Regularization in FR
Feature distribution is the foundation of face recognition evaluation since both the two sub-tasks (verification and identification) rely on feature similarity between face images [20, 35]. To adjust the feature distribution in a holistic view, some methods resort to extra constraints to promote the performance of evaluation. They restrict the magnitudes of the representation features [45], or the Euclidean distance between the representations and the identity weights [35].
As the identity weights serve as the class proxies, a number of works argue that they could support the holistic feature distribution [41, 42, 7, 38]. By constraining the energy function, the Euclidean distances, or the angulars between identity weights, better distribution could be achieved.
Nevertheless, all of the above methods utilize the training-specific identity information to adjust the sample similarities indirectly. Their efficacy is designed for training, with little assurance of generalizing to the evaluation process, where the identities are dropped. In this paper, we propose a novel contrastive regularization approach by designing a contrastive loss. Compared with existing approaches, we directly adjust the relationship of image features so as to make the training consistent with the evaluation, and thus improve the performance of FR under open-set situations.
### Contrastive Learning for FR
Contrastive learning aims at clustering semantic neighbors as distribution neighbors in the representation space [10]. Class-guided contrastive learning [25, 32] has been applied to FR [25], taking samples from the same class as semantic neighbors. However, such losses have been shown to obstruct performance in joint training [6, 13]. On the other hand, sample-guided contrastive methods take the outcome of data augmentation to compose positive pairs. They usually construct a large negative sample pool for comparison [3, 11, 5, 8]. With huge datasets and sufficient training, they show promising performance in unsupervised learning. However, it is hard to apply sample-guided contrastive learning to FR, where it is hindered by the obstacles introduced by the commonly-used data augmentations [27].
In this paper, we design a new framework to reconcile the image quality degradation problem and keep regularization effective in the training stage. CoReFace takes feature augmentation to avoid the semantic damage of data augmentation. In addition, the proposed contrastive loss adopts an adaptive margin to supervise the well-performed classification methods, and adopts a supervised contrastive mask to prevent the conflict in joint training. We further discover the SRS problem in common FR training settings and explore pair-coupling protocols to relieve this problem.
## 3 Methodology
Figure 2 illustrates the framework of our CoReFace. We apply regularization with sample-guided contrastive learning to solve the neglect of image-image relationship in training and the inconsistency caused by the abandoning of the classifier in evaluation. First, to address the image quality degradation problem, we employ feature augmentation to replace the widely-used data augmentation for positive pair composition. We also drop the projection layer which is widely used with contrastive learning [3, 4, 9, 5]. In our scenario, contrastive learning aims at adjusting the feature representation distribution, instead of an information-limited projection. Second, we propose a novel contrastive loss by integrating an adaptive margin and a supervised contrastive mask. The adaptive margin is designed to keep the
Figure 2: Illustration of our CoReFace approach. To relieve the image quality degradation problem, we add a feature augmentation module between the backbone and the FC layer to generate positive pairs for sample-based contrastive learning. Our contrastive loss function is composed by an adaptive-margin-based loss and a supervised contrastive mask. The margin is adaptive with the training process and the backbone magnitude. The supervised contrastive mask avoids the conflict between samples from the same identity in contrastive learning. We also take a new pair coupling protocol in the similarity computation for contrastive learning to avoid the semantically repetitive signal problem. The contrastive loss is then used to regularize the training process with the image-image relationship.
magnitudes of the positive and the negative similarities close, and produce steady loss values during the joint training. The supervised contrastive mask (SCM) takes the class label to generate a mask which excludes the samples of the same class from the negative comparison pool. This avoids the conflict with the classification method. Third, we investigate the _Semantically Repetitive Signal_ (SRS) problem, i.e. some key pairs in contrastive learning are repeated. This distorts the feature distribution and disturbs the upcoming similarity calculation. We design new pair-coupling strategies to relieve this problem. Finally, we apply image-image regularization to FR by jointly training the classification method with our CoReFace loss function.
### Feature Augmentation
Data augmentations such as cropping with resizing, color distortion, cutout, and Gaussian blur are widely used to generate positive pairs in sample-guided contrastive learning for computer vision tasks. These inevitably cause semantic damage to the samples and degrade image quality. Strong augmentation is acceptable in coarse-grained classification tasks, since the differences between images are relatively large. However, FR is a fine-grained task and requires the face images to be semantically clear. The image quality degradation caused by data augmentation is not negligible [27, 17].
To solve the above problem and make sample-guided contrastive learning applicable to FR, we augment the features (instead of the images) for positive pair composition. As illustrated in Figure 2, we pass the hidden embedding after the backbone through two dropout channels \(\sigma_{1}\) and \(\sigma_{2}\) with distinct masks. Dropout [28] randomly mutes some part of the input with a certain probability. It can be seen as a kind of augmentation between two adjacent layers [1, 8]. When dropout is applied to the input image, it can be thought of as an extreme case of salt-and-pepper noise. In our approach, the dropout masks are randomly generated in every mini-batch and operate on all of the input samples. Other methods such as random noise [40] are also suitable in our framework.
With feature augmentation, we can compose the positive pair for contrastive learning while avoiding the image quality degradation problem. In addition, compared with data augmentation, which is performed on the input sample and passes the augmented samples through the whole model twice, our feature augmentation operates on the feature and saves nearly half of the computation.
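A minimal PyTorch-style sketch of this step is given below; the dropout probability and module placement are our own assumptions for illustration.

```python
import torch.nn as nn

class DropoutFeatureAugmentation(nn.Module):
    """Produces two stochastically masked views of the same backbone feature."""

    def __init__(self, p=0.1):
        super().__init__()
        self.dropout = nn.Dropout(p)

    def forward(self, h):
        # In training mode, each call samples a fresh random mask, so the two views
        # share semantics but differ in which activations are muted.
        return self.dropout(h), self.dropout(h)

# Usage sketch: h = backbone(images); h1, h2 = aug(h); both views then pass through the FC layer.
```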
### CoReFace Loss Function
In our framework, a contrastive loss is used to constrain the distribution of the features by providing image-level distribution guidance that compensates for the inconsistency between the identity-based training and the sample-based evaluation. However, we find that the prevalent contrastive methods fail to provide effective signals in our experiments. They consistently produce zero loss values and contribute little to the training. This is probably because the classification method dominates the training by taking advantage of the labels. Furthermore, an aggressive regularization which conflicts with other supervision signals cannot work appropriately either. To meet the above requirements, we design a novel contrastive loss function which is adaptively effective and harmonious in joint training. Our CoReFace loss can generate steady loss values and takes the classification labels into consideration to avoid collisions with the classification loss.
Both the sample-guided contrastive loss functions and the classification loss functions in FR are based on the cross-entropy loss function. The common forms of these two kinds of losses are as follows:
\[\mathcal{L}_{Cla}=-\log\frac{e^{s\cdot P(\mathbf{h}_{i},\mathbf{W}_{y_{i}})}}{e^{s \cdot P(\mathbf{h}_{i},\mathbf{W}_{y_{i}})}+\sum_{j=1,j\neq i}^{n}e^{s\cdot Q(\mathbf{h}_{ i},\mathbf{W}_{j})}}, \tag{1}\]
\[\mathcal{L}_{Con}=-\log\frac{e^{\mathrm{sim}(\mathbf{h}_{i},\mathbf{h}_{N+i})/\tau}}{\sum_{j=1}^{2N}\mathbb{1}_{[j\neq i]}e^{\mathrm{sim}(\mathbf{h}_{i},\mathbf{h}_{j})/\tau}}, \tag{2}\]
where \(P(\mathbf{h}_{i},\mathbf{W}_{y_{i}})\) and \(Q(\mathbf{h}_{i},\mathbf{W}_{j})\) are two different functions to modulate the positive and the negative pair production of the feature \(\mathbf{h}\in\mathbb{R}^{d}\), \(\mathbf{W}\in\mathbb{R}^{d\times n}\) is the weight of the classifier with \(d\) being the feature dimension and \(n\) being the number of classes, \(\mathrm{sim}(\mathbf{h}_{i},\mathbf{h}_{j})=\frac{\mathbf{h}_{i}^{\top}\mathbf{h}_{j}}{\|\mathbf{h }_{i}\|\|\mathbf{h}_{j}\|}\) is the cosine similarity, \(s\) and \(\tau\) are two scale parameters used in the classification loss function and the contrastive loss function respectively, and \(\mathbb{1}_{[j\neq i]}\in\{0,1\}\) is an indicator function evaluating to 1 iff \(j\neq i\).
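As a reference point, a common implementation of the plain contrastive loss in Eq. 2 (the NT-Xent form) can be sketched as follows; the temperature value and the function name are assumptions made for illustration.

```python
import torch
import torch.nn.functional as F

def nt_xent_loss(features: torch.Tensor, tau: float = 0.1) -> torch.Tensor:
    """Plain contrastive loss of Eq. (2) for 2N stacked features.

    features: (2N, d); rows i and N+i are assumed to form a positive pair.
    tau is an assumed temperature value, not taken from the text.
    """
    two_n = features.shape[0]
    n = two_n // 2
    z = F.normalize(features, dim=1)                  # cosine similarity via dot products
    sim = (z @ z.t()) / tau                           # (2N, 2N) similarity matrix
    self_mask = torch.eye(two_n, dtype=torch.bool)
    sim = sim.masked_fill(self_mask, float("-inf"))   # the indicator 1_[j != i]
    pos_index = torch.arange(two_n).roll(n)           # partner of row i is row (i+N) mod 2N
    return F.cross_entropy(sim, pos_index)            # -log softmax at the positive entry
```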
**Adaptive Margin.** We follow the margin-based methods [20, 33, 6] to enlarge the similarity of the positive pair and the dissimilarity of the negative pairs by increasing the difficulty of the judgement with a margin parameter \(m\). Different from the classification methods which take the image-class pairs, our contrastive method takes image-image pairs. In this way, we alleviate the inconsistency between training and evaluation discussed above. Furthermore, we dynamically adjust the margin parameter during training to keep effective supervision on the distribution.
As the most similar negative pair and the positive pair influence the decision boundary the most, our contrastive loss updates the margin \(m\) with the difference between their similarities. The margin ensures that the magnitudes of the exponentials in the numerator and the denominator of the softmax are close, and keeps the loss value steady. To suppress the noise brought by extreme data, we employ the Exponential Moving Average (EMA) [15]. Specifically, let \(m_{C}^{(k)}\) be the averaged margin of the \(k\)-th batch with \(m_{C}^{(0)}=0\), and \(\alpha\) be the momentum parameter, which is empirically set to 0.99. For a pair \((\mathbf{h}_{i},\mathbf{h}_{j})\) where \(i<j\), \(m_{C}^{(k)}\) is updated as:
\[m_{C}^{(k)} =\alpha m^{(k)}+(1-\alpha)m_{C}^{(k-1)}, \tag{3}\] \[m^{(k)} =\frac{1}{N}\sum_{i=1}^{N}\left(\mathrm{sim}(\mathbf{h}_{i},\mathbf{h}_{N+i})-Maxneg_{i}\right),\] (4) \[Maxneg_{i} =\max(\mathrm{sim}(\mathbf{h}_{i},\mathbf{h}_{j})),j\in[1,2N],j\neq N+i. \tag{5}\]
where \(N\) is the number of samples.
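A standalone PyTorch sketch of this margin update (Eqs. 3-5) is given below; variable names and the normalization details are illustrative assumptions, and the update follows Eq. 3 as printed.

```python
import torch
import torch.nn.functional as F

def update_adaptive_margin(h1, h2, m_c_prev: float, alpha: float = 0.99) -> float:
    """EMA update of the margin m_C following Eqs. (3)-(5), as a standalone sketch.

    h1, h2: (N, d) features from the two dropout channels; (h1[i], h2[i]) is a
    positive pair (h_i, h_{N+i}). m_c_prev is m_C of the previous batch and
    alpha is the momentum (0.99 in the text).
    """
    n = h1.shape[0]
    z = torch.cat([F.normalize(h1, dim=1), F.normalize(h2, dim=1)], dim=0)  # (2N, d)
    sim = z @ z.t()
    idx = torch.arange(n)
    pos = sim[idx, idx + n]                     # sim(h_i, h_{N+i})
    neg = sim[:n].clone()
    neg[idx, idx] = -1.0                        # drop the trivial self-similarity
    neg[idx, idx + n] = -1.0                    # drop the positive partner (j != N+i)
    max_neg = neg.max(dim=1).values             # Maxneg_i, Eq. (5)
    m_batch = (pos - max_neg).mean().item()     # m^{(k)}, Eq. (4)
    return alpha * m_batch + (1.0 - alpha) * m_c_prev   # Eq. (3) as printed
```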
Taking \(m\) as the difference between angles like ArcFace [6] is also a candidate approach. However, it changes the angle of the vector pairs directly, which requires trigonometric functions and increases the complexity of the derivative. This results in NaN values when used as the contrastive loss. To sum up, our adaptive-margin-based contrastive loss can be formulated as
\[\mathcal{L}_{C}=-\log\frac{e^{s(\mathrm{sim}(\mathbf{h}_{i},\mathbf{h}_{N+i})-m_{C})}}{e^{s(\mathrm{sim}(\mathbf{h}_{i},\mathbf{h}_{N+i})-m_{C})}+\sum_{j=1,j\neq i,j\neq N+i}^{2N}e^{s\cdot\mathrm{sim}(\mathbf{h}_{i},\mathbf{h}_{j})}}. \tag{6}\]
**Supervised Contrastive Mask.** When we apply the contrastive regularization in the framework, the incompatibility between the classification methods and contrastive learning becomes problematic. The naive form of the contrastive loss in Eq. 2 considers all feature pairs \((\mathbf{h}_{i},\mathbf{h}_{j})\) where \(i<j,j\neq N+i\) as negative, and splits them up in the representation space. This conflicts with the fact that some samples are from the same class in FR, i.e. \(y_{i}=y_{j}\), and thus their features should be similar. When cooperating with the classification loss, the two methods could disturb each other in the interpretation of the supervision signals.
To avoid the above conflict between the contrastive regularization and the classification loss, we ignore the relationship between images from the same class. Specifically, with the help of labels in the training process, we create a supervised contrastive mask (SCM) to exclude the distraction of these samples by setting their similarity score \(\mathrm{sim}(\mathbf{h}_{i},\mathbf{h}_{j})\) to \(0\), where \(i<j,j\neq N+i,\) and \(y_{i}=y_{j}\). Thus, the classification method takes advantage of the label, while the contrastive learning regularizes the feature distribution separately with identity-free signals.
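The mask construction can be sketched as follows; the function name and the tensor layout (the 2N features stacked as two copies of the batch) are assumptions for illustration.

```python
import torch

def supervised_contrastive_mask(labels: torch.Tensor) -> torch.Tensor:
    """Boolean mask of same-identity pairs to be ignored by the contrastive term.

    labels: (N,) identity labels of the original images; the 2N augmented
    features inherit them as [labels, labels]. True entries mark pairs with
    y_i = y_j that are neither self-pairs nor the designated positive pairs;
    their similarity scores are set to 0 as described above.
    """
    y = torch.cat([labels, labels], dim=0)
    two_n = y.shape[0]
    n = two_n // 2
    same_class = y.unsqueeze(0) == y.unsqueeze(1)     # (2N, 2N), y_i == y_j
    self_pairs = torch.eye(two_n, dtype=torch.bool)
    positive_pairs = self_pairs.roll(n, dims=1)       # marks (i, (i+N) mod 2N)
    return same_class & ~self_pairs & ~positive_pairs

# Usage sketch: sim = sim.masked_fill(supervised_contrastive_mask(labels), 0.0)
```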
### Pair-Coupling Protocol for SRS Problem
By integrating our novel contrastive loss into training, the learned representation space can be constantly regularized. However, we discover a semantically repetitive signal (SRS) problem in this process. Some part of the contrastive loss is unintentionally repeated since some key negative pairs are doubly or quadruply emphasized. This results in a distorted distribution where the features encountering SRS are abnormally pushed and pulled. To understand this problem, we investigate the pair-coupling protocols, i.e. how to compose the positive and negative pairs. Figure 3 shows four different protocols. Let \((\sigma_{i}\rightarrow\sigma_{j})\) represent a pair where the first and the second features are from the \(i\)-th and the \(j\)-th mask channels respectively, and \(i,j\in\{1,2\}\). A pair-coupling protocol is defined by the number of the mask channels of the two features in the ordered pairs. Taking the second feature as the subordinate of the first one, the number of mask channels used for the first feature controls the ways (_single_ or _double_), and the mask channel of the second one determines the number of negative samples (\(N\) or \(2N\)).
_Double-way \(2N\) Protocol_ is widely used in sample-guided contrastive learning, which takes the other \(2N-1\) augmented features in the same mini-batch as the comparison pool of \(\mathbf{h}\), and each of the \(2N\) features from the two augmentation channels is taken as the first element of a positive pair once. This protocol is completely symmetric, i.e. \((\sigma_{1}/\sigma_{2}\rightarrow\sigma_{1}/\sigma_{2})\). About \(2\times N\) negative pairs in each of the two ways can quadruple the number of comparisons between every image pair. When the most similar negative pairs of two given features are mirrored, the semantic effect of their relationship is doubled due to the repetitions. This property results in a biased loss, which is undesired.
Figure 4 shows the repetitions of the key negative pairs from a well-trained classification model. The coordinates of a point represent the indexes of a feature and its most similar negative counterpart in a batch. The points are painted in blue or green depending on the feature channels of a pair.
Figure 4: The coordinates of a point are the indexes of a feature and its most similar negative feature in a mini-batch. When two ordered pairs are mirrored, the points overlap and are painted yellow. The abundant yellow points in (a) and (b) demonstrate the symmetry problem in the ways and in the number of negative samples, respectively. The y-indexes of \((\sigma_{1}\rightarrow\sigma_{1})\) points are increased by 128.
Figure 3: Four types of pair-coupling protocols. Every combination of two augmentation channels \(\sigma_{1}\) and \(\sigma_{2}\) represents the feature combinations of the images in a mini-batch. When there is more than one augmentation channel combination, the feature combination is repeated.
When two points overlap, the position is painted yellow. In Figure 4(a), many pairs composed of features from different channels, \((\sigma_{1}\rightarrow\sigma_{2})\) and \((\sigma_{2}\rightarrow\sigma_{1})\), are mirrored. As they share the same key negative pairs, their contributions are almost the same. The contrastive loss function then produces a partly doubled loss value from them. When this accompanies the whole training process, it becomes an unintentional hard-example-mining strategy and inappropriately guides the back-propagation. The same problem exists in the choice of the comparison pool, \((\sigma_{1}\rightarrow\sigma_{1})\) and \((\sigma_{1}\rightarrow\sigma_{2})\), as shown in Figure 4(b).
To solve this problem, we cut down the symmetry in the pair-coupling process by proposing a _Single-way_ \(N\) _Protocol._ Specifically, we only calculate the similarity of \((\sigma_{1}\rightarrow\sigma_{2})\) in a batch and ignore the other three compositions. In this way, no extra repeated loss is calculated. This seems contradictory to the common contrastive learning setting that needs more negative samples for comparison [3, 11]. However, these methods are usually supported by complex data augmentations and a large comparison pool. The stochasticity and the rich candidates provide more possibilities for a given feature. In FR, however, the data augmentation is destructive and therefore abandoned, and a huge batch (of size 8,192) is generally not applicable [3].
After applying the supervised contrastive mask and the single-way \(N\) protocol, we update \(Maxneg_{i}\) and our contrastive loss function as
\[Maxneg_{i}=\max(\mathrm{sim}(\mathbf{h}_{i},\mathbf{h}_{j})),\ j\in[N+1,2N],\ y_{i}\neq y_{j}, \tag{7}\] \[\mathcal{L}_{CoRe}=-\log\frac{e^{s(\mathrm{sim}(\mathbf{h}_{i},\mathbf{h}_{N+i})-m_{C})}}{e^{s(\mathrm{sim}(\mathbf{h}_{i},\mathbf{h}_{N+i})-m_{C})}+\sum_{j=N+1,j\neq N+i}^{2N}e^{s\cdot\mathrm{sim}(\mathbf{h}_{i},\mathbf{h}_{j})}},\] (8) \[\mathcal{L}=\frac{1}{2}\left(\mathcal{L}_{Cla}(\mathbf{h}_{i})+\mathcal{L}_{Cla}(\mathbf{h}_{N+i})\right)+\lambda\mathcal{L}_{CoRe}(\mathbf{h}_{i},\mathbf{h}_{N+i}). \tag{9}\]
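Putting the pieces together, a PyTorch sketch of the single-way \(N\) CoReFace loss of Eq. 8 with the supervised mask is given below; it is an illustrative reading of the equations, not the released training code.

```python
import torch
import torch.nn.functional as F

def coreface_loss(h1, h2, labels, m_c: float, s: float = 64.0) -> torch.Tensor:
    """Sketch of Eq. (8): single-way N protocol with the supervised mask.

    h1, h2: (N, d) features from the two dropout channels (only sigma_1 -> sigma_2
    pairs are used); labels: (N,) identities; m_c: current adaptive margin;
    s: scale (64 in the text). Names and shapes are illustrative assumptions.
    """
    z1, z2 = F.normalize(h1, dim=1), F.normalize(h2, dim=1)
    sim = z1 @ z2.t()                                  # (N, N), only (sigma_1 -> sigma_2)
    n = sim.shape[0]
    pos = sim.diagonal()                               # sim(h_i, h_{N+i})
    same_class = labels.unsqueeze(0) == labels.unsqueeze(1)
    neg = sim.masked_fill(same_class, 0.0)             # SCM: same-identity scores set to 0
    diag = torch.eye(n, dtype=torch.bool)
    neg = neg.masked_fill(diag, float("-inf"))         # exclude the positive from the sum
    logits = torch.cat([(pos - m_c).unsqueeze(1), neg], dim=1) * s
    target = torch.zeros(n, dtype=torch.long)          # the positive sits in column 0
    return F.cross_entropy(logits, target)
```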
## 4 Experiments
### Datasets and Implementation Details
**Datasets.** We use MS1MV2 [6] for model training. MS1MV2 dataset contains about 5.8M face images of 85K individuals. We extensively evaluate our approach on eight benchmarks, including LFW [14], AgeDB [23], CFP-FP [26], CPLFW [43], CALFW [44], IJB-B [36], IJB-C [21], and MegaFace [16].
**Training Settings.** We follow the settings commonly used in recent works [34, 17, 18, 15, 22] to ensure the fairness of comparison. The face images are cropped and resized to \(112\times 112\) with five landmarks [6]. We employ ResNet100 [12] as the backbone model. ArcFace is employed as the classification loss. Our framework is implemented in Pytorch [24]. We train the models on 4 NVIDIA A100 GPUs with the batch size of 512. All models are trained using SGD algorithm with an initial learning rate of \(0.1\). We set the momentum to 0.9 and the weight decay to \(5\times 10^{-4}\). We divide the learning rate by \(10\) at the 8th, the 14th, and the 20th epochs, and stop the training after 24 epochs. We set the scale parameter \(s\) to 64 for both the classification loss and our loss, and set \(\lambda\) to 0.05. For fair comparison of evaluation results, all methods without specifications are implemented with ResNet100 and MS1MV2.
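A minimal PyTorch sketch of this optimization schedule is shown below; only the hyper-parameters quoted above are taken from the text, and the model construction is assumed.

```python
import torch

def build_optimizer(model: torch.nn.Module):
    """SGD with the quoted settings: lr 0.1, momentum 0.9, weight decay 5e-4,
    and the learning rate divided by 10 at epochs 8, 14, and 20."""
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1,
                                momentum=0.9, weight_decay=5e-4)
    scheduler = torch.optim.lr_scheduler.MultiStepLR(
        optimizer, milestones=[8, 14, 20], gamma=0.1)
    return optimizer, scheduler
```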
### Experiment Results
**Results on LFW, CFP-FP, AgeDB, CALFW and CPLFW.** Table 1 compares our CoReFace with other re
\begin{table}
\begin{tabular}{c|c|c c c c c} \hline \hline Methods (\%) & Venue & LFW & AgeDB & CFP-FP & CALFW & CPLFW \\ \hline CosFace & CVPR 2018 & 99.81 & 98.11 & 98.12 & 95.76 & 92.28 \\ ArcFace & CVPR 2019 & **99.83** & 98.28 & 98.27 & 95.45 & 92.08 \\ MV-Softmax & AAAI 2020 & 99.80 & 97.95 & 98.28 & 96.10 & 92.83 \\ CurricularFace & CVPR 2020 & 99.80 & 98.32 & 98.37 & **96.20** & 93.13 \\ SCF-ArcFace & CVPR 2021 & 99.82 & 98.30 & 98.40 & 96.12 & 93.16 \\ MagFace & CVPR 2021 & **99.83** & 98.17 & 98.46 & 96.15 & 92.87 \\ AdaFace & CVPR 2022 & 99.82 & 98.05 & 98.49 & 96.08 & **93.53** \\ \hline CoReFace & Ours & **99.83** & **98.37** & **98.60** & **96.20** & 93.27 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Verification accuracy (%) on LFW, AgeDB, CFP-FP, CALFW, and CPLFW. The **Best** results are emphasized in bold.
\begin{table}
\begin{tabular}{c|c c c|c c c} \hline \hline \multirow{2}{*}{Methods (\%)} & \multicolumn{3}{c|}{IJB-B (TAR@FAR)} & \multicolumn{3}{c}{IJB-C (TAR@FAR)} \\ \cline{2-7} & 1e-6 & 1e-5 & 1e-4 & 1e-6 & 1e-5 & 1e-4 \\ \hline Softmax & 46.73 & 75.17 & 90.06 & 64.07 & 83.68 & 92.40 \\ SphereFace [20] & 39.40 & 73.58 & 89.19 & 68.86 & 83.33 & 91.77 \\ CosFace [33] & 40.41 & 89.25 & 94.01 & 87.96 & 92.68 & 95.56 \\ ArcFace [6] & 38.68 & 88.50 & 94.09 & 85.65 & 92.69 & 95.74 \\ SCF-ArcFace [19] & – & **90.68** & 94.74 & – & 94.04 & 96.09 \\ Magface [22] & 42.32 & 90.36 & 94.51 & **90.24** & 94.08 & 95.97 \\ \hline CoReFace & **47.02** & 91.33 & **95.09** & 89.34 & **94.73** & **96.43** \\ \hline \hline \end{tabular}
\end{table}
Table 2: 1:1 verification on IJB-B and IJB-C.
\begin{table}
\begin{tabular}{c|c c} \hline \hline Methods (\%) & Id & Ver \\ \hline CosFace [33] & 97.91 & 97.91 \\ ArcFace [6] & 98.35 & 98.48 \\ MV-Softmax [34] & 97.76 & 97.80 \\ CurricularFace [15] & **98.71** & 98.64 \\ BroadFace [18] & 98.70 & 98.95 \\ CircleLoss [30] & 98.50 & 98.73 \\ \hline CoReFace & 98.69 & **99.06** \\ \hline \hline \end{tabular}
\end{table}
Table 3: Face identification and verification on MegaFace Challenge using FaceScrub as the probe set. Id refers to the rank-1 face identification accuracy with 1M distractors, and Ver refers to the face verification **TAR (@FAR=\(10^{-6}\))**.
cent approaches on diverse benchmarks, including LFW for unconstrained face verification, AgeDB and CALFW with various ages, and CFP-FP and CPLFW with large pose variations. In our approach, ArcFace is employed as the classification loss. Compared to the original ArcFace, our CoReFace outperforms it on four out of the five datasets with remarkable margins and achieves the same performance on the last one. This is because CoReFace incorporates contrastive regularization into representation learning, which successfully addresses the aforementioned inconsistency between the identity-based training and the sample-based evaluation that is ignored in the existing approaches. Among all approaches, AdaFace takes the image quality into consideration during training. This could explain its superior performance on CPLFW, where different poses may cause occlusions on faces and result in lower accuracy. Our method achieves the highest accuracy on the other four datasets. While our CoReFace shares the top performance with ArcFace and CurricularFace on LFW and CALFW, it significantly outperforms them on the other datasets.
**Results on IJB-B and IJB-C.** The IJB-B dataset contains \(1,845\) subjects with 21.8K still images and 55K frames from \(7,011\) videos. The IJB-C dataset expands IJB-B, and contains about \(3,500\) identities with a total of 31.3K images and 117.5K unconstrained video frames. In the 1:1 verification task, there are about 10K positive matches and 8M negative matches in IJB-B, and 19K positive matches and 15M negative matches in IJB-C. Table 2 exhibits the performance of different methods for 1:1 verification on IJB-B and IJB-C. Our method achieves the highest **TARs** for two out of three different **FARs** on each of the two datasets. As IJB-B has fewer matches, the most challenging situation arises when \(\textbf{FAR}=10^{-6}\) and only about 8 negative matches are allowed. Compared with other methods, whose **TARs** are lower than Softmax, our model is more competitive under such an extreme situation. When there is a higher **FAR** bound (\(10^{-4}\)) or the evaluation dataset is larger, CoReFace still outperforms the competitors.
**Results on MegaFace.** Finally, we demonstrate the efficacy of our method on the MegaFace Challenge. The gallery set of MegaFace contains 1M images of 690K subjects, and the probe set is FaceScrub, which contains 100K photos of 530 unique individuals. We follow [6] to remove the face images with wrong labels and evaluate our method on the refined dataset. Table 3 shows the performance of different methods. For the identification task, CoReFace achieves competitive performance which is only 0.02% lower than the best one, CurricularFace [15]. For the verification task, CoReFace outperforms all the other approaches with a clear margin. BroadFace [18] also shows competitive performance by building a dynamic queue to gain extra training on the classification layer. Without complex structure reformation, CoReFace adds an image-image regularization to improve the feature distribution and boost the performance of open-set face recognition.
### Ablation Study
As LFW is an almost saturated dataset (the accuracy is about 99.8% with ResNet100), we report the performances on AgeDB, CFP-FP, CALFW, CPLFW, and their average in our ablation study.
**Effects of our CoReFace Loss Function.** We show the effectiveness of our contrastive loss by comparing it with other alternatives with different settings in Table 4. The _Contrastive Only_ group is apparently inferior to the classification methods, which demonstrates the necessity
\begin{table}
\begin{tabular}{c|l|c} \hline Setting Groups & Methods & Average \\ \hline \multirow{2}{*}{Single Supervision} & Classification-only & 93.60 \\ & Triplet-only & 91.03 \\ \hline \multirow{2}{*}{Contrastive Only} & NT-Xent & 63.61 \\ & SupCon & 67.87 \\ & CoReFace & 86.68 \\ \hline \multirow{2}{*}{Data Augmentation} & NT-Xent & 92.78 \\ & SupCon & 91.49 \\ \hline \multirow{2}{*}{Feature Augmentation} & NT-Xent & 93.60 \\ & SupCon & 93.60 \\ \multicolumn{2}{c|}{CoReFace} & **93.66** \\ \hline \end{tabular}
\end{table}
Table 4: Average verification performance (%) of different methods. All experiments are based on a pretrained ResNet50 ArcFace model with 90.45% average performance. To avoid the influence of hyper-parameter, \(\lambda=1\) is set for all experiments.
Figure 5: (a) The loss variation of different contrastive methods in joint training with R100. (b) The \(m_{C}\) value variation caused by CoReFace on different backbone models. Some methods keep their loss values nearly 0 and fail to supervise in training.
\begin{table}
\begin{tabular}{c|c|c|c|c|c|c} \hline \multirow{2}{*}{SCM} & \multirow{2}{*}{Original} & \multirow{2}{*}{w/o \(L_{C}\)} & \multicolumn{4}{c}{w/ \(L_{C}\)} \\ \cline{4-7} & & & D-\(2N\) & \multicolumn{1}{c|}{D-\(N\)} & S-\(2N\) & S-\(N\) \\ \hline ✗ & 93.28 & 93.59 & 93.66 & 93.68 & 93.67 & **93.74** \\ \cline{2-7} ✓ & - & - & 93.65 & 93.70 & 93.69 & **93.75** \\ \hline \end{tabular}
\end{table}
Table 5: Ablation of different pair-coupling protocols and supervised contrastive mask. All experiments except _Original_ are implemented with the proposed framework. \(S\) and \(D\) mean _single-way_ and _double-way_ respectively. \(N\) and 2\(N\) represent the number of candidates for a given sample.
of taking the classification loss as the foundation in FR. Compared with the classification-only method, the performance degradation of NT-Xent and SupCon in the _Data Augmentation_ group verifies the semantic damage caused by the widely-used image augmentations. The _Feature Augmentation_ group follows our framework, and CoReFace shows an outstanding outcome.
Figure 5 further visualizes the contrastive loss values and the adaptive margin \(m_{C}\) during joint training. With the adaptive margin, our method produces stable and reasonable loss values. NT-Xent and SupCon in the _Feature Augmentation_ group perform similarly to the Classification-only approach, and Figure 5 illustrates how they fail to supervise steadily. The change of \(m_{C}\) with different backbones confirms the adaptiveness of our method, which saves tedious hyper-parameter tuning for different model scales. In our CoReFace, \(m_{C}\) obviously keeps growing, and surpasses the one without \(m_{C}\) on R50, where \(m_{C}\) is only statistically computed during training. This verifies that our contrastive loss can effectively enlarge the difference between the similarities of the positive pairs and the negative pairs.
**Effects of other components.** To verify the effect of our framework, we experiment with three different settings, namely _Original_, _w/o \(L_{C}\)_, and _w/ \(L_{C}\)_ in Table 5. _Original_ means the traditional classification framework, and the latter two take our framework. Among the four pair-coupling protocols, the D-\(2N\) protocol, the D-\(N\) protocol, and the S-\(2N\) protocol all contain some repetitive key negative pairs. Table 5 shows that the average performance of D-\(2N\) ranks the lowest as it contains more repetitive pairs, while the single-way \(N\) protocol outperforms the others. These results verify our assumption that the symmetry in pair coupling interferes with the performance. We also implement a series of experiments without the supervised contrastive mask under each pair-coupling protocol, which show that the masked version outperforms the others most of the time. Though the sampling in training is stochastic and results in a few conflicts, the mask still removes the supervision conflict between the two losses.
**Efficiency of different frameworks.** Table 6 compares the training speed of different frameworks including the **Original** classification framework, our feature-augmentation-based **CoReFace** framework, the data-augmentation-based **Contrastive** framework, and the **Triplet** framework. As can be seen in Table 6, after integrating the contrastive regularization into training, our framework only takes negligible extra time, i.e. 1.4‰ for R50 and 3.3‰ for R100 compared to the original classification method. Meanwhile, the common contrastive framework nearly doubles the processing time. As for Triplet, it fails to run on a 40G GPU with R100 and consumes a lot more time on CASIA-WebFace.
**Effects on the feature distribution.** We visualize the similarity between the positive and the negative pairs on the evaluation datasets in Figure 6. The angles of positive pairs in CoReFace are closer to 0 compared with ArcFace. For different datasets containing age variations and pose variations, our approach keeps an obvious margin. This demonstrates the effectiveness of our CoReFace for the distribution regularization by considering the image-image relationship.
## 5 Conclusion
We have presented CoReFace to regularize the feature distribution with the image-image relationship, which makes the training consistent with the evaluation in open-set face recognition. To this end, sample-guided contrastive learning is integrated into our framework. For positive pair composition in contrastive learning, we augment the embeddings instead of the images and avoid the degradation caused by the widely-used data augmentations. By incorporating an adaptive margin and a supervised contrastive mask, our contrastive loss is able to generate steady loss values and avoid the collision with the classification supervision signals. Finally, the new pair-coupling protocol alleviates the semantically repetitive signal problem caused by the symmetry of pairs. Extensive experiments on the popular FR benchmarks and ablations demonstrate the effectiveness and efficiency of our proposed approach and the great potential of contrastive learning for regularization in face recognition. With the concise framework, our approach can be easily applied to existing FR methods.
\begin{table}
\begin{tabular}{c|c|c|c|c} \hline Time (batch/s) & Original & CoReFace & Contrastive & Triplet \\ \hline R50 & 0.2807 & 0.2811 & 0.5052 & 0.7104 \\ R100 & 0.4874 & 0.4890 & 0.8465 & - \\ \hline \end{tabular}
\end{table}
Table 6: Average process time for a batch in each method on one NVIDIA A100 GPU. We take triplets as samples for triplet loss. R100 with triplet loss needs more than 40GB video memory to train such a batch and fails in the training.
Figure 6: The angle distributions of ArcFace and CoReFace on four datasets. \(+\) and \(-\) denote the positive pairs and the negative pairs respectively.
|
2310.15591
|
Machine learning based nonlocal kinetic energy density functional for
simple metals and alloys
|
Developing an accurate kinetic energy density functional (KEDF) remains a
major hurdle in orbital-free density functional theory. We propose a machine
learning based physical-constrained nonlocal (MPN) KEDF and implement it with
the usage of the bulk-derived local pseudopotentials and plane wave basis sets
in the ABACUS package. The MPN KEDF is designed to satisfy three exact physical
constraints: the scaling law of electron kinetic energy, the free electron gas
limit, and the non-negativity of Pauli energy density. The MPN KEDF is
systematically tested for simple metals, including Li, Mg, Al, and 59 alloys.
We conclude that incorporating nonlocal information for designing new KEDFs and
obeying exact physical constraints are essential to improve the accuracy,
transferability, and stability of ML-based KEDF. These results shed new light
on the construction of ML-based functionals.
|
Liang Sun, Mohan Chen
|
2023-10-24T07:55:07Z
|
http://arxiv.org/abs/2310.15591v2
|
# Machine-Learning-Based Non-Local Kinetic Energy Density Functional for Simple Metals and Alloys
###### Abstract
Developing an accurate kinetic energy density functional (KEDF) remains a major hurdle in orbital-free density functional theory. We propose a machine-learning-based physical-constrained non-local (MPN) KEDF and implement it with the usage of the bulk-derived local pseudopotentials and plane wave basis sets in the ABACUS package. The MPN KEDF is designed to satisfy three exact physical constraints: the scaling law of electron kinetic energy, the free electron gas limit, and the non-negativity of Pauli energy density. The MPN KEDF is systematically tested for simple metals, including Li, Mg, Al, and 59 alloys. We conclude that incorporating non-local information for designing new KEDFs and obeying exact physical constraints are essential to improve the accuracy, transferability, and stability of ML-based KEDF. These results shed new light on the construction of ML-based functionals.
pacs: 71.15.Mb, 07.05.Mh, 71.20.Gj
## I Introduction
Kohn-Sham density functional theory (KSDFT) is a widely-used _ab initio_ method in materials science. [1; 2] However, its computational complexity of \(O(N^{3})\), where \(N\) is the number of atoms, poses significant challenges for large systems. Alternatively, orbital-free density functional theory (OFDFT) [3; 4] calculates the non-interacting electron kinetic energy \(T_{s}\) directly from the charge density instead of relying on the one-electron Kohn-Sham orbitals. As a result, OFDFT achieves a more affordable computational complexity of typically \(O(N\ln N)\) or \(O(N)\). [5; 6; 7; 8] Given that \(T_{s}\) is comparable in magnitude to the total energy, the accuracy of OFDFT largely depends on the approximated form of the kinetic energy density functional (KEDF). However, developing an accurate KEDF has been a major hurdle in the field of OFDFT for decades.
Over the past few decades, continuous efforts have been devoted to developing analytical KEDFs. [4; 9] In general, KEDFs can be classified into two categories. The first category comprises local and semi-local components in KEDFs, where the kinetic energy density is a function of the charge density, the charge density gradient, the Laplacian of charge density, or even higher-order derivatives of the charge density. [10; 11; 12; 13; 14; 15] The second category consists of non-local forms of KEDFs, where the kinetic energy density is a functional of charge density, such that the kinetic energy density at each point in real space depends on the non-local charge density. [16; 17; 18; 19; 20] Typically, semi-local KEDFs are more computationally efficient, while non-local ones offer a higher accuracy. However, since most of the existing non-local KEDFs are constructed based on the Lindhard response function, which is accurate for nearly free electron gas, they are mainly adequate for simple metals. [16; 17] Some KEDFs were proposed to describe semiconductor systems, but they cannot work well for simple metals. [18; 19; 20] As a result, a KEDF that works for both simple metal and semiconductor systems is still lacking, and it is still unclear how to construct it systematically.
In recent years, machine learning (ML) techniques have been involved in the developments of computational physics. [21] In particular, the remarkable fitting ability of ML models has been demonstrated in various applications, including the fitting of potential energy surfaces in molecular dynamics [22; 23], as well as fitting exchange-correlation functionals [24; 25; 26; 27] and Hamiltonian matrices [28] within the framework of density functional theory (DFT). [1; 2] Additionally, there have been endeavors to construct ML-based KEDFs within the framework of OFDFT. [29; 30; 31; 32; 33; 34; 35] For example, Imoto et al. implemented a semi-local ML-based KEDF, which takes dimensionless gradient and dimensionless Laplacian of charge density as descriptors and puts the enhancement factor of kinetic energy density as the output of neural network (NN). [33] This model exhibits convergence and satisfies the scaling law, but it overlooks non-local information crucial for improving the accuracy of KEDFs. Ryczko et al. implemented a non-local ML-based KEDF, utilizing a voxel deep NN, but this model could not achieve convergence in OFDFT computations. [34] Thus, it is still a formidable task to construct an accurate, transferable, and computationally stable ML-based KEDF.
In this work, as the first step to construct an ML-based KEDF that works for both simple metal and semiconductor systems, we construct an ML-based Physical-constrained Non-local KEDF (MPN KEDF) for simple metals and their alloys, which (a) contains non-local
information, (b) obeys a series of exact physical constraints, and (c) achieves convergence via careful design of descriptors, NN output, post-processing, and loss function, etc. The performance of the MPN KEDF is systematically evaluated by testing a series of simple metals, including lithium (Li), magnesium (Mg), aluminum (Al), and their alloys. In particular, incorporating non-local information and exact physical constraints is crucial to improving the accuracy, transferability, and stability of ML-based KEDFs. [31]
The rest of this paper is organized as follows. In Section II, we propose an ML-based KEDF that satisfies physical constraints and introduces numerical details of KSDFT and OFDFT calculations. In Section III, we analyze the performances of the MPN KEDF and discuss the results. Finally, the conclusions are drawn in Section IV.
## II Methods
### Pauli Energy and Pauli Potential
In general, the non-interacting kinetic energy \(T_{s}\) can be divided into two parts, [36]
\[T_{s}=T_{\rm vW}+T_{\theta}, \tag{1}\]
where
\[T_{\rm vW}=\frac{1}{8}\int\frac{\left|\nabla\rho(r)\right|^{2}}{\rho\left(r \right)}\,\mathrm{d}^{3}r \tag{2}\]
is the von Weizsacker (vW) KEDF, [12] a rigorous lower bound to the \(T_{s}\), with \(\rho(r)\) being the charge density. The second term \(T_{\theta}\) represents the Pauli energy, which takes the form of
\[T_{\theta}=\int\tau_{\rm TF}F_{\theta}\mathrm{d}^{3}r, \tag{3}\]
where the Thomas-Fermi (TF) kinetic energy density [10; 11] term is
\[\tau_{\rm TF}=\frac{3}{10}(3\pi^{2})^{2/3}\rho^{5/3}. \tag{4}\]
Additionally, \(F_{\theta}\) denotes the enhancement factor. The corresponding Pauli potential is given by
\[V_{\theta}(r)=\delta E_{\theta}/\delta\rho(r). \tag{5}\]
The Pauli energy and Pauli potential satisfy several exact physical constraints. For example, first, the scaling law
Figure 2: Illustration of the scaling law introduced in Eq. 6. The gray line represents the function of \(\lambda^{2}T_{s}[\rho]\), where \(\rho\) denotes the ground charge density of face-centered cubic (fcc) Al as obtained by the MPN KEDF. The red stars denote the kinetic energies of \(\rho_{\lambda}=\lambda^{3}\rho(\lambda r)\) computed using the MPN KEDF for different values of \(\lambda\), namely 0.25, 0.5, 1.0, 2.0, and 3.0. All the red stars fall on the gray line, indicating that the scaling law \(T_{s}[\rho_{\lambda}]=\lambda^{2}T_{s}[\rho]\) is exactly obeyed by the MPN KEDF.
Figure 1: Workflow of the MPN KEDF. \(F^{\rm NN}(r)\) is the enhancement factor obtained by the deep neural network (NN), and \(F^{\rm NN}|_{\rm FEG}\) denotes the enhancement factor under the free electron gas (FEG) limit. In order to ensure both the FEG limit and the non-negativity of Pauli energy density are satisfied, the enhancement factor of Pauli energy is defined as \(F^{\rm NN}_{\theta}=\mathrm{softplus}\left(F^{\rm NN}-F^{\rm NN}|_{\rm FEG}+ \ln\left(e-1\right)\right)\), where \(\mathrm{softplus}(x)=\ln(1+e^{x})\) is an activation function commonly used in machine learning with \(\mathrm{softplus}(x)|_{x=\ln(e-1)}=1\). The defined formulas are used to evaluate the kinetic energy and kinetic potential.
is
\[T_{\theta}[\rho_{\lambda}]=\lambda^{2}T_{\theta}[\rho], \tag{6}\]
where \(\rho_{\lambda}=\lambda^{3}\rho(\lambda r)\) and \(\lambda\) is a positive number. [36]
Second, in the free electron gas (FEG) limit, the TF KEDF is exact, and the vW part vanishes so that the enhancement factor in the FEG limit takes the form of
\[F_{\theta}(r)|_{\rm FEG}=1. \tag{7}\]
In addition, the Pauli potential returns to the potential of TF KEDF \(V_{\rm TF}(r)\)
\[V_{\theta}(r)|_{\rm FEG}=V_{\rm TF}(r)=\frac{1}{2}(3\pi^{2})^{2/3}\rho^{2/3}. \tag{8}\]
Third, the non-negativity ensures
\[F_{\theta}(r)\geq 0 \tag{9}\]
and
\[V_{\theta}(r)\geq 0. \tag{10}\]
In order to train the MPN KEDF, we collect the Pauli energy and Pauli potential data from KSDFT calculations performed on a set of selected systems. In detail, with the help of the Kohn-Sham orbitals and eigenvalues, in a spin degenerate system, the Pauli energy density can be analytically expressed by [36]
\[\tau_{\theta}^{\rm KS}=\sum_{i=1}^{M}f_{i}|\nabla\psi_{i}(r)|^{2}-\frac{| \nabla\rho|^{2}}{8\rho}, \tag{11}\]
while the Pauli potential has the form of
\[V_{\theta}^{\rm KS}=\rho^{-1}\left(\tau_{\theta}^{\rm KS}+2\sum_{i=1}^{M}f_{i} (\varepsilon_{M}-\varepsilon_{i})\psi_{i}^{*}\psi_{i}\right), \tag{12}\]
where \(\psi_{i}(r)\) denotes an occupied Kohn-Sham orbital with index \(i\), while \(\varepsilon_{i}\) and \(f_{i}\) are the corresponding eigenvalue and occupied number, respectively. In addition, \(M\) represents the highest occupied state, and \(\varepsilon_{M}\) is the eigenvalue of \(\psi_{M}(r)\), i.e., the chemical potential.
### Design Neural Network based on Exact Physical Constraints
The workflow of the MPN KEDF is summarized in Fig. 1. The major structure of the MPN KEDF is an NN composed of one input layer consisting of four nodes, three hidden layers with ten nodes in each layer, and an output layer with one node. The activation functions used in the hidden layers are chosen to be hyperbolic tangent functions, i.e., \(\tanh(x)\). In order to ensure that the calculated Pauli energy and potential obey the physical constraints mentioned above, the output of the NN is chosen as the enhancement factor \(F_{\theta}\) for each real-space grid point \(r\), which is denoted as \(F^{\rm NN}(r)\). Next, we elucidate how non-local information and exact physical constraints can be incorporated into the NN to improve its accuracy and reliability.
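For reference, the stated architecture can be written down directly in PyTorch as a sketch; the initialization and other engineering details are assumptions.

```python
import torch.nn as nn

# Sketch of the NN described above: 4 input nodes (the descriptors), three
# hidden layers of 10 nodes with tanh activations, and one output node F^NN(r).
mpn_net = nn.Sequential(
    nn.Linear(4, 10), nn.Tanh(),
    nn.Linear(10, 10), nn.Tanh(),
    nn.Linear(10, 10), nn.Tanh(),
    nn.Linear(10, 1),
)
```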
As shown in Fig.1, we define four descriptors \(\{\tilde{p},\tilde{p}_{\rm nl},\tilde{\xi},\tilde{\xi}_{\rm nl}\}\) (_vide infra_) as the input of the NN for the MPN KEDF. The first descriptor \(\tilde{p}(r)\) is semi-local, while the other three are non-local. First, the semi-local descriptor is the normalized dimensionless gradient of the charge density given by
\[\tilde{p}(r)=\tanh\Big{(}\chi_{p}p(r)\Big{)}, \tag{13}\]
where the parameter \(p(r)\) is evaluated via
\[p(r)=|\nabla\rho(r)|^{2}/\Big{[}2(3\pi^{2})^{1/3}\rho^{4/3}(r)\Big{]}^{2}. \tag{14}\]
Here, \(\chi_{p}\) is a hyper-parameter to control the distribution of \(\tilde{p}\).
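A sketch of this semi-local descriptor on a real-space grid is given below; the finite-difference gradient is an assumption made for illustration, whereas a plane-wave code would evaluate the gradient in reciprocal space.

```python
import numpy as np

def p_tilde(rho: np.ndarray, spacings, chi_p: float = 0.2) -> np.ndarray:
    """Normalized dimensionless gradient descriptor (Eqs. 13-14).

    rho: charge density on a 3D real-space grid; spacings: grid spacings (dx, dy, dz).
    """
    gx, gy, gz = np.gradient(rho, *spacings)
    grad_sq = gx**2 + gy**2 + gz**2
    denom = (2.0 * (3.0 * np.pi**2) ** (1.0 / 3.0) * rho ** (4.0 / 3.0)) ** 2
    p = grad_sq / denom                      # Eq. (14)
    return np.tanh(chi_p * p)                # Eq. (13)
```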
Second, we propose a non-local descriptor of \(\tilde{p}\), which is defined as
\[\tilde{p}_{\rm nl}(r)=\int w(r-r^{\prime})\tilde{p}(r^{\prime}){\rm d}^{3}r^{ \prime}, \tag{15}\]
where \(w(r-r^{\prime})\) is the kernel function similar to the Wang-Teter [16] kernel function, satisfying
\[\int w(r-r^{\prime}){\rm d}^{3}r^{\prime}=0. \tag{16}\]
The kernel function is defined in reciprocal space as
\[w(\eta)=\left(\frac{1}{2}+\frac{1-\eta^{2}}{4\eta}\ln\left|\frac{1+\eta}{1- \eta}\right|\right)^{-1}-3\eta^{2}-1. \tag{17}\]
Here \(\eta=\frac{k}{2k_{\rm F}}\) is a dimensionless reciprocal space vector, while \(k_{\rm F}=(3\pi^{2}\rho_{0})^{1/3}\) is the Fermi wave vector with \(\rho_{0}\) being the average charge density.
The third and fourth non-local descriptors represent the distribution of charge density and take the form of
\[\tilde{\xi}(r)=\tanh\Bigg{(}\frac{\int w(r-r^{\prime})\rho^{1/3}(r^{\prime}){ \rm d}^{3}r^{\prime}}{\rho^{1/3}(r)}\Bigg{)}, \tag{18}\]
and
\[\tilde{\xi}_{\rm nl}(r)=\int w(r-r^{\prime})\tilde{\xi}(r^{\prime}){\rm d}^{3} r^{\prime}, \tag{19}\]
respectively.
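Since the kernel is defined in reciprocal space, the convolutions in Eqs. 15, 18, and 19 reduce to products under a Fourier transform. A NumPy sketch is shown below; the construction of the \(\eta\) grid and the numerical handling of \(\eta\to 0\) and \(\eta\to 1\) are assumptions made for illustration.

```python
import numpy as np

def wt_kernel(eta: np.ndarray) -> np.ndarray:
    """Reciprocal-space kernel w(eta) of Eq. (17); w(0) = 0 so that Eq. (16) holds."""
    eta = np.asarray(eta, dtype=float)
    out = np.zeros_like(eta)
    small = eta < 1e-8
    near_one = np.abs(eta - 1.0) < 1e-8
    reg = ~(small | near_one)
    e = eta[reg]
    lind = 0.5 + (1.0 - e**2) / (4.0 * e) * np.log(np.abs((1.0 + e) / (1.0 - e)))
    out[reg] = 1.0 / lind - 3.0 * e**2 - 1.0
    out[near_one] = 1.0 / 0.5 - 3.0 - 1.0    # the bracketed function tends to 1/2 at eta = 1
    return out                               # out[small] stays 0 (Eq. 16)

def nonlocal_descriptor(f: np.ndarray, eta_grid: np.ndarray) -> np.ndarray:
    """Convolution of Eqs. (15)/(19) as a reciprocal-space product.

    f: field on the real-space grid (e.g. p_tilde or rho**(1/3));
    eta_grid: |G| / (2 k_F) on the matching FFT grid (an assumed input).
    """
    return np.real(np.fft.ifftn(wt_kernel(eta_grid) * np.fft.fftn(f)))

# Usage sketch: xi_tilde = np.tanh(nonlocal_descriptor(rho**(1/3), eta_grid) / rho**(1/3))
```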
In summary, the MPN KEDF is characterized by the above four descriptors: \(\{\tilde{p},\tilde{p}_{\rm nl},\tilde{\xi},\tilde{\xi}_{\rm nl}\}\), with \(\chi_{p}=0.2\) being an empirical parameter adopted in all calculations. Next, we propose three physical constraints that are met by our ML-based MPN KEDF.
First, the scaling law of non-interacting electron kinetic energy is ensured when we design the above descriptors.
In detail, under the scaling translation \(\rho(r)\rightarrow\rho_{\lambda}=\lambda^{3}\rho(\lambda r)\), the descriptors \(\{\tilde{p}(r),\tilde{p}_{\text{nl}}(r),\tilde{\xi}(r),\tilde{\xi}_{\text{nl}}(r)\}\) become \(\{\tilde{p}(\lambda r),\tilde{p}_{\text{nl}}(\lambda r),\tilde{\xi}(\lambda r), \tilde{\xi}_{\text{nl}}(\lambda r)\}\), i.e., the descriptors are invariant under the scaling transformation, and the detailed derivation can be found in Supporting Information (SI). Since the \(T_{\text{vW}}\) term satisfies the scaling law, we have
\[T_{\text{MPN}}[\rho_{\lambda}]=T_{\text{vW}}[\rho_{\lambda}]+\lambda^{5}\int\tau_{\text{TF}}(\lambda r)\,F_{\theta}^{\text{NN}}\left(\tilde{p}(\lambda r),\tilde{p}_{\text{nl}}(\lambda r),\tilde{\xi}(\lambda r),\tilde{\xi}_{\text{nl}}(\lambda r)\right)\mathrm{d}^{3}r=\lambda^{2}\left[T_{\text{vW}}[\rho]+\int\tau_{\text{TF}}(\lambda r)\,F_{\theta}^{\text{NN}}\left(\tilde{p}(\lambda r),\tilde{p}_{\text{nl}}(\lambda r),\tilde{\xi}(\lambda r),\tilde{\xi}_{\text{nl}}(\lambda r)\right)\mathrm{d}^{3}(\lambda r)\right]=\lambda^{2}T_{\text{MPN}}[\rho]. \tag{20}\]
In order to verify the scaling law, we obtain the ground-state charge density \(\rho(r)\) of fcc Al with the MPN KEDF, then the kinetic energy of \(\rho_{\lambda}=\lambda^{3}\rho(\lambda r)\) with various \(\lambda\) (0.25, 0.5, 1.0, 2.0, and 3.0) are calculated by the MPN KEDF. As displayed in Fig. 2, all of the \(T_{\text{MPN}}[\rho_{\lambda}]\)s computed by the MPN KEDF fall on the line of \(f(\lambda)=\lambda^{2}T_{s}[\rho]\), demonstrating that the MPN KEDF obeys the scaling law.
The second and third constraints, i.e., the FEG limit and the non-negativity of Pauli energy density, are introduced through post-processing of the deep neural network. In the FEG limit, all four descriptors become zero, and hence we define the output of the NN in this limit as \(F^{\text{NN}}|_{\text{FEG}}\). In addition, the enhancement factor of Pauli energy is defined as
\[F_{\theta}^{\text{NN}}=\text{softplus}\left(F^{\text{NN}}-F^{\text{NN}}|_{ \text{FEG}}+\ln\left(e-1\right)\right), \tag{21}\]
where \(F^{\text{NN}}\) is the output of NN, and
\[\text{softplus}(x)=\ln(1+e^{x}) \tag{22}\]
is an activation function commonly used in machine learning, satisfying
\[\text{softplus}(x)\geq 0 \tag{23}\]
and
\[\text{softplus}(x)|_{x=\ln(e-1)}=1. \tag{24}\]
Figure 4: (a) Total energies (in eV/atom) and (b) formation energies (in eV) of 59 alloys, including 20 Li-Mg alloys, 20 Mg-Al alloys, 10 Li-Al alloys, and 9 Li-Mg-Al alloys. Different colors indicate the results from different KEDFs (TF\(\lambda\)vW, LKT, WT, and MPN), while different shapes of markers indicate different alloys.
Figure 3: MAREs of bulk properties of Li, Mg, and Al systems, i.e., (a) the bulk moduli (\(B\) in GPa), (b) the equilibrium volumes (\(V_{0}\) in Å\({}^{3}\)/atom), and (c) the total energies of given systems (\(E_{0}\) in eV/atom). The MARE defined in Eq. 29 is obtained by comparing OFDFT to KS-BLPS results. We use body-centered cubic (bcc), fcc, simple cubic (sc), and cubic diamond (CD) structures of Li. We also adopt hexagonal close-packed (hcp), fcc, bcc, and sc structures of Mg. For Al systems, we take fcc, hcp, bcc, and sc structures.
By construction, the non-negativity constraint is satisfied as
\[F_{\theta}^{\rm NN}\geq 0, \tag{25}\]
and in the FEG limit where the charge density is a constant, we have
\[F_{\theta}^{\rm NN}|_{\rm FEG} =\mathrm{softplus}\left(F^{\rm NN}|_{\rm FEG}-F^{\rm NN}|_{\rm FEG} +\ln\left(e-1\right)\right) \tag{26}\] \[=1,\]
thereby ensuring that the FEG limit is also exactly satisfied. We note that the selection of kernel function and descriptors guarantees that once the FEG limit of Pauli energy is met, the FEG limit of Pauli potential is automatically satisfied, as discussed in Section III of SI.
Fig. 1 summarizes the workflow of the MPN KEDF, which involves the abovementioned physical constraints. First, for each real-space grid point, the descriptors of charge density \(\rho(r)\) (\(\{\tilde{p},\tilde{p}_{\rm nl},\tilde{\xi},\tilde{\xi}_{\rm nl}\}\)) are entered into NN to get the corresponding enhancement factor \(F^{\rm NN}(r)\). Second, the descriptors of FEG (\(\{\tilde{p}=0,\tilde{p}_{\rm nl}=0,\tilde{\xi}=0,\tilde{\xi}_{\rm nl}=0\}\)) are fed into the NN, and the enhancement factor of FEG \(F^{\rm NN}|_{\rm FEG}\) is obtained. Third, to ensure both the FEG limit and the non-negativity of Pauli energy density are satisfied, the enhancement factor of Pauli energy is defined as \(F_{\theta}^{\rm NN}=\mathrm{softplus}\left(F^{\rm NN}-F^{\rm NN}|_{\rm FEG}+ \ln\left(e-1\right)\right)\). Finally, the kinetic energy and kinetic potential are calculated by the MPN KEDF using the defined formulas.
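The post-processing of Eq. 21 can be sketched as follows; the tensor shapes and the way the FEG limit is queried (a single all-zero descriptor vector) are illustrative assumptions.

```python
import math
import torch
import torch.nn.functional as F

def pauli_enhancement(net: torch.nn.Module, descriptors: torch.Tensor) -> torch.Tensor:
    """F_theta^NN = softplus(F^NN - F^NN|FEG + ln(e-1)), Eq. (21).

    descriptors: (n_grid, 4) tensor of {p~, p~_nl, xi~, xi~_nl}. By construction
    the result is non-negative and equals 1 when all descriptors vanish (FEG limit).
    """
    f_nn = net(descriptors).squeeze(-1)
    f_feg = net(torch.zeros(1, 4)).squeeze(-1)   # FEG limit: all descriptors are zero
    shift = math.log(math.e - 1.0)
    return F.softplus(f_nn - f_feg + shift)
```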
### Training Details
Before training the MPN KEDF, the loss function is defined as
\[L=\frac{1}{N}\sum_{r}\left[\left(\frac{F_{\theta}^{\rm NN}-F_{\theta}^{\rm KS}}{\tilde{F}_{\theta}^{\rm KS}}\right)^{2}+\left(\frac{V_{\theta}^{\rm MPN}-V_{\theta}^{\rm KS}}{\tilde{V}_{\theta}^{\rm KS}}\right)^{2}\right]+\left[F^{\rm NN}|_{\rm FEG}-\ln(e-1)\right]^{2}, \tag{27}\]
where \(N\) is the number of grid points, and \(\tilde{F}_{\theta}^{\rm KS}\) (\(\tilde{V}_{\theta}^{\rm KS}\)) represents the mean of \(F_{\theta}^{\rm KS}\) (\(V_{\theta}^{\rm KS}\)). The first term helps the NN to learn information from the Pauli energy, while the second term emphasizes the significance of reproducing the correct Pauli potential. We emphasize that fitting the Pauli potential is crucial in determining the optimization direction and step length during the OFDFT calculations, and \(V_{\theta}^{\rm MPN}\) can be obtained through the back propagation of the NN, as derived in the SI. The last term is a penalty term to reduce the magnitude of the FEG correction, which improves the stability of the MPN KEDF.
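A PyTorch sketch of this loss is given below; it follows the mean-normalized form written above, and the function signature is an assumption.

```python
import math
import torch

def mpn_training_loss(f_nn, v_mpn, f_ks, v_ks, f_nn_feg):
    """Loss of Eq. (27).

    f_nn, v_mpn: NN enhancement factor and Pauli potential on the grid;
    f_ks, v_ks: KSDFT references; f_nn_feg: raw NN output in the FEG limit.
    This is a sketch, not the actual training script.
    """
    f_bar = f_ks.mean()          # tilde{F}_theta^KS
    v_bar = v_ks.mean()          # tilde{V}_theta^KS
    data_term = (((f_nn - f_ks) / f_bar) ** 2 + ((v_mpn - v_ks) / v_bar) ** 2).mean()
    penalty = (f_nn_feg - math.log(math.e - 1.0)) ** 2
    return data_term + penalty
```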
The training set consists of eight metallic structures, namely bcc Li, fcc Mg, fcc Al, as well as five alloys: Li\({}_{3}\)Mg (mp-976254), LiMg (mp-1094889), Mg\({}_{3}\)Al (mp-978271), \(\beta^{\prime\prime}\) MgAl\({}_{3}\)[37], LiAl\({}_{3}\) (mp-10890), where the numbers in brackets are the Materials Project IDs [38]. We performed KSDFT calculations to obtain the ground charge density and calculate the corresponding descriptors. Additionally, the Pauli energy and potential are calculated using Eqs. 11 and 12, respectively. These calculations are performed on a \(27\times 27\times 27\) grid, resulting in a total of 157,464 grid points in the training set.
### Numerical Details
We have employed the ABACUS 3.0.4 package [39] to carry out OFDFT and KSDFT calculations, while for OFDFT with the Wang-Govind-Carter (WGC) KEDF [17], we have utilized the PROFESS 3.0 package. [7] The MPN KEDF is implemented in ABACUS using the libtorch package [40], and the libnpy package is adopted to dump the data. Table S1 lists the plane-wave energy cutoffs employed in both OFDFT and KSDFT calculations, as well as the Monkhorst-Pack \(k\)-point samplings [41] utilized in KSDFT. For both OFDFT and KSDFT calculations, we used the Perdew-Burke-Ernzerhof (PBE) exchange-correlation functional [42] and bulk-derived local pseudopotentials (BLPS) [43]. Additionally, we used the Gaussian smearing method with a smearing width of 0.1 eV in our KSDFT calculations.
In order to calculate the ground-state bulk properties, we first optimize the crystal structures until the stress tensor elements are below \(5\times 10^{-7}\) Hartree/Bohr\({}^{3}\), then compress and expand the lattice constant of the unit cell from \(0.99a_{0}\) to \(1.01a_{0}\), where \(a_{0}\) is the equilibrium lattice constant. Once the energy-volume curve is obtained, the bulk modulus \(B\) of a given system is fitted by Murnaghan's equation of state.[44]
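A sketch of this fitting step is given below; the Murnaghan energy-volume form is standard, while the initial guesses passed to the least-squares routine are assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def murnaghan(volume, e0, b0, b0_prime, v0):
    """Murnaghan equation of state E(V)."""
    return (e0
            + b0 * volume / b0_prime * ((v0 / volume) ** b0_prime / (b0_prime - 1.0) + 1.0)
            - b0 * v0 / (b0_prime - 1.0))

def fit_bulk_modulus(volumes, energies):
    """Least-squares fit of (V, E) points; returns (E0, B0, B0', V0).

    B0 comes out in the energy/volume units of the input data.
    """
    volumes = np.asarray(volumes, float)
    energies = np.asarray(energies, float)
    v0_guess = volumes[np.argmin(energies)]
    p0 = [energies.min(), 1.0, 4.0, v0_guess]   # assumed initial guesses
    popt, _ = curve_fit(murnaghan, volumes, energies, p0=p0)
    return popt
```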
We compare the results obtained by the MPN KEDF to those obtained from OFDFT calculations with traditional KEDFs. Specifically, we have employed semi-local KEDFs such as the TF\(\lambda\)vW [45] and the Luo-Karasiev-Trickey (LKT) KEDFs [13], as well as the non-local ones including the Wang-Teter (WT) [16] and WGC KEDFs. The parameter \(\lambda\) of TF\(\lambda\)vW has been set to be 0.2, and the parameter \(a\) of the LKT KEDF is set to be 1.3, as in the original work [13]. In addition, we set \(\alpha\)=\(\frac{5+\sqrt{5}}{6}\), \(\beta\)=\(\frac{5-\sqrt{5}}{5}\) and \(\gamma\)=2.7 in the WGC KEDF [17], as well as \(\alpha\)=\(\frac{5}{6}\), \(\beta\)=\(\frac{5}{6}\) in the WT KEDF [16].
The formation energy \(E_{\rm f}\) of Li-Mg-Al alloy is defined as
\[E_{\rm f}=\frac{1}{N}\left(E_{\rm total}-n_{\rm Li}E_{\rm Li}-n_{\rm Mg}E_{\rm Mg }-n_{\rm Al}E_{\rm Al}\right), \tag{28}\]
where \(E_{\rm total}\) is the total energy of the alloy, and \(E_{\rm Li}\), \(E_{\rm Mg}\), and \(E_{\rm Al}\) denote the equilibrium energy of the bcc Li, hcp Mg, and fcc Al structures, respectively. Furthermore, \(n_{\rm Li}\), \(n_{\rm Mg}\), and \(n_{\rm Al}\) depict the number of Li, Mg, and Al atoms, respectively. \(N=n_{\rm Li}+n_{\rm Mg}+n_{\rm Al}\) denotes the total number of atoms of the alloy.
The Mean Absolute Relative Error (MARE) and Mean Absolute Error (MAE) of property \(x\) are respectively
defined as
\[\text{MARE}=\frac{1}{N}\sum_{i}^{N}\Big{|}\frac{x_{i}^{\text{OF}}-x_{i}^{\text{KS} }}{x_{i}^{\text{KS}}}\Big{|}, \tag{29}\]
\[\text{MAE}=\frac{1}{N}\sum_{i}^{N}\big{|}x_{i}^{\text{OF}}-x_{i}^{\text{KS}} \big{|}. \tag{30}\]
Here \(N\) is the number of data points, \(x_{i}^{\text{OF}}\) and \(x_{i}^{\text{KS}}\) are properties obtained from OFDFT and KSDFT calculations, respectively.
## III Results and Discussion
In order to examine the precision and transferability of the MPN KEDF, we prepared two testing sets. The first set comprises 4 structures of Li (bcc, fcc, sc, and CD), 4 structures of Mg (hcp, fcc, bcc, and sc), and 4 structures of Al (fcc, hcp, bcc, and sc). We evaluated the properties of these bulk systems, including the bulk moduli, the equilibrium volumes, and the equilibrium energies using various KEDFs. For the second testing set, we selected 59 alloys obtained from the Materials Project database [38], including 20 Li-Mg alloys, 20 Mg-Al alloys, 10 Li-Al alloys, and 9 Li-Mg-Al alloys, and the detailed information of these alloys is listed in Table S2.
Notably, most of the structures in the two testing sets do not appear in the training set, allowing for an unbiased comparison. We systematically compared the total energies, the formation energies, and the charge densities of alloys within the second testing set. We also trained another semi-local ML-based KEDF with descriptors \(\{\tilde{p},\tilde{q}\}\) with \(\tilde{q}=\tanh{(0.1q)}\), where \(q=\nabla^{2}\rho/[4(3\pi^{2})^{2/3}\rho^{5/3}]\). However, we found that this semi-local ML-based KEDF could not achieve convergence in the tested systems.
### Simple Metals
Fig. 3 displays the MAREs of bulk properties of Li, Mg, and Al systems. Compared to the non-local WT and WGC KEDFs, the semi-local KEDFs (the TF\(\lambda\)vW and LKT KEDFs) yield larger MAREs across all the properties in all three systems, indicating that the non-local information is crucial to enhance the accuracy of KEDFs. Comparatively, the MPN KEDF yields MAREs slightly larger than those of the WT and WGC KEDFs but does not exceed those of the semi-local ones. Notably, the MPN KEDF achieves a lower MARE of 1.37% for the bulk modulus of Mg, outperforming the WT and WGC KEDFs, which exhibit MAREs of 2.32% and 3.73%, respectively. On the other hand, the poorest result obtained by the MPN KEDF is the bulk modulus of Al, where it exhibits a MARE of 7.75%, nearly three times those from the WT (2.41%) and WGC (2.26%) KEDFs. This may be caused by the fact that we did not include more Al structures with different densities in the training set. However, even in this case, the MAREs obtained by the TF\(\lambda\)vW (40.72%) and LKT (16.69%) KEDFs are almost five and two times higher than that of the MPN KEDF. As a result, we conclude that the
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline mean MARE of charge density (\%) & LiMg & MgAl & LiAl & LiMgAl & Total \\ \hline TF\(\lambda\)vW & 12.40 & 16.11 & 13.26 & 15.66 & 14.30 \\ LKT & 5.26 & 7.44 & 11.61 & 6.98 & 7.34 \\ WT & 1.06 & 2.57 & 4.98 & 2.04 & 2.38 \\ MPN & 2.41 & 3.12 & 5.81 & 2.89 & 3.30 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Mean MAREs (Eq. 29) of charge densities of 59 alloys, including 20 Li-Mg alloys, 20 Mg-Al alloys, 10 Li-Al alloys, and 9 Li-Mg-Al alloys. MAREs are obtained by comparing various KEDFs (TF\(\lambda\)vW, LKT, WT, and MPN) in OFDFT to KS-BLPS results.
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline MAE of total energy (eV/atom) & LiMg & MgAl & LiAl & LiMgAl & Total \\ \hline TF\(\lambda\)vW & 0.540 & 1.330 & 0.873 & 0.999 & 0.934 \\ LKT & 0.040 & 0.156 & 0.351 & 0.124 & 0.145 \\ WT & 0.013 & 0.059 & 0.082 & 0.031 & 0.043 \\ MPN & 0.078 & 0.163 & 0.146 & 0.106 & 0.123 \\ \hline MAE of formation energy (eV) & LiMg & MgAl & LiAl & LiMgAl & Total \\ \hline TF\(\lambda\)vW & 0.022 & 0.061 & 0.103 & 0.036 & 0.051 \\ LKT & 0.041 & 0.189 & 0.397 & 0.138 & 0.166 \\ WT & 0.005 & 0.050 & 0.077 & 0.023 & 0.035 \\ MPN & 0.015 & 0.027 & 0.056 & 0.024 & 0.028 \\ \hline \hline \end{tabular}
\end{table}
Table 1: MAEs (Eq. 30) of the total energies and formation energies of 59 alloys obtained by comparing various KEDFs (TF\(\lambda\)vW, LKT, WT, and MPN) in OFDFT to KS-BLPS results. The systems include 20 Li-Mg alloys, 20 Mg-Al alloys, 10 Li-Al alloys, and 9 Li-Mg-Al alloys.
MPN KEDF yields reasonable accuracy when compared to other non-local KEDFs.
It is noteworthy that the energy difference between the fcc and hcp structures of bulk Al is small, being 0.025 eV/atom as predicted by KSDFT, and it is sensitive to the accuracy of the KEDF. [46] Both the TF\(\lambda\)vW and LKT KEDFs, as semi-local KEDFs, fail to distinguish this subtle energy difference and predict it as 0.000 eV/atom. In contrast, the non-local WT and WGC KEDFs yield non-zero energy differences of 0.018 and 0.016 eV/atom, respectively. Moreover, the MPN KEDF predicts the energy difference to be 0.021 eV/atom, which is close to the result of 0.025 eV/atom obtained by KSDFT and is more accurate than the WT and WGC KEDFs. This result again emphasizes the importance of incorporating non-local information, which enables the MPN KEDF to distinguish the subtle difference between similar crystal structures.[46]
### Alloys
Fig. 4 illustrates the total energies and the formation energies of 59 alloys obtained by different KEDFs in OFDFT calculations, and their corresponding MAEs are listed in Table 1. Notably, the WGC KEDF failed to achieve convergence for nine alloys; therefore, we have excluded the WGC results from Table 1. Regarding the prediction of total energy shown in Fig. 4(a), the TF\(\lambda\)vW KEDF consistently underestimates the values compared to those obtained by KSDFT, resulting in a large MAE of 0.934 eV/atom. In contrast, the LKT KEDF performs
Figure 5: Charge densities of four typical alloys. (a) Li\({}_{3}\)Mg (mp-976254, 4 atoms) from the training set. The MARE of charge density obtained by the TF\(\lambda\)vW, LKT, WT, and MPN KEDFs are 13.85%, 5.28%, 1.03%, and 2.73%, respectively. (b) Li(Mg\({}_{4}\)Al\({}_{3}\))\({}_{4}\) (mp-1185175), the largest system in the testing set, containing 87 atoms. The MARE of charge density obtained by the TF\(\lambda\)vW, LKT, WT, and MPN KEDFs are 16.15%, 7.76%, 2.54%, and 3.08%, respectively. (c) Mg\({}_{3}\)Al (mp-1094666, 16 atoms) from the testing set, in which the MPN KEDF yields the lowest MARE among the testing set. The MARE of charge density obtained by the TF\(\lambda\)vW, LKT, WT, and MPN KEDFs are 16.73%, 6.42%, 1.65%, and 1.57%, respectively. (d) LiAl (mp-1191737, 48 atoms) from the testing set, in which the MPN KEDF obtains the largest MARE among the testing set. The MARE of charge density obtained by the TF\(\lambda\)vW, LKT, WT, and MPN KEDFs are 13.51%, 15.41%, 6.98%, and 8.16%, respectively. The labels \(a_{1}\) and \(a_{2}\) denote the first and second lattice vectors of the corresponding structures, respectively.
better with a reduced MAE of 0.145 eV/atom, while the non-local WT KEDF yields a lower MAE of 0.043 eV/atom. The MPN KEDF yields a higher MAE (0.123 eV/atom) than the WT KEDF but still outperforms the TF\(\lambda\)vW and LKT KEDFs. As for the formation energies shown in Fig. 4(b), we observe that the LKT KEDF consistently yields larger values compared to KSDFT, resulting in a high MAE of 0.166 eV, which is much larger than the MAEs obtained by the TF\(\lambda\)vW (0.051 eV) and WT (0.035 eV) KEDFs. Remarkably, the MPN KEDF exhibits an even lower MAE (0.028 eV) than the WT KEDF. Overall, these results demonstrate the promising potential of the MPN KEDF in predicting the energies of complex alloy systems with high accuracy.
In order to further evaluate the accuracy of the MPN KEDF, we computed the charge densities of 59 alloys and calculated the mean MAREs, listed in Table 2. As expected, the semi-local TF\(\lambda\)vW and LKT KEDFs failed to reproduce the charge density obtained by KSDFT, exhibiting mean MAREs of 14.30% and 7.34%, respectively. These MAREs are considerably higher than the mean MARE obtained by the non-local WT KEDF (2.38%). Impressively, the MPN KEDF yields a mean MARE of 3.30%, which is slightly higher than that of the WT KEDF but significantly lower than those of the TF\(\lambda\)vW and LKT KEDFs. We note that the above 59 alloys are not present in the training set, and there are even no Li-Mg-Al alloys in the training set, so the above results not only indicate a high accuracy but also excellent transferability of the MPN KEDF.
Fig. 5 shows the charge densities of four typical structures, one taken from the training set and the other three from the testing set. The first structure is Li\({}_{3}\)Mg (mp-976254) from the training set, containing four atoms. The MPN KEDF yields a MARE of 2.73%, which is slightly larger than that obtained by the WT KEDF (1.03%) but significantly lower than those obtained by the TF\(\lambda\)vW (13.85%) and LKT KEDFs (5.28%), demonstrating the efficiency of the training process.
The second structure is Li(Mg\({}_{4}\)Al\({}_{3}\))\({}_{4}\) (mp-1185175) with 87 atoms, which is the largest system among the testing set. Notably, the MPN KEDF achieves convergence to yield a smooth ground-state density, which is close to the result obtained by KSDFT, indicating an excellent stability in optimizing the electron charge density. In contrast, the WGC KEDF fails to reach convergence for this structure.
The last two crystal structures are Mg\({}_{3}\)Al (mp-1094666, 16 atoms) and LiAl (mp-1191737, 48 atoms) from the testing set, for which the MPN KEDF yields the lowest MARE and largest MARE among the testing set, respectively. For the Mg\({}_{3}\)Al structure, the MPN KEDF exhibits a better accuracy than the WT KEDF, yielding a MARE of 1.57%, lower than the 1.65% obtained by the WT KEDF. For the LiAl structure, although the MPN KEDF yields the largest MARE of 8.16%, it is still much lower than those obtained by the TF\(\lambda\)vW (13.51%) and LKT KEDFs (15.41%). Overall, the MPN KEDF outperforms the semi-local KEDFs in terms of accuracy and achieves comparable accuracy to the other non-local KEDFs. Additionally, the stability of the MPN KEDF is evidenced by reaching convergence and obtaining smooth charge densities for all alloys in the testing set.
## IV Conclusions
In this work, based on the framework of deep neural networks, we proposed an ML-based physical-constrained non-local (MPN) KEDF. Our proposed method relied on four descriptors, i.e., \(\{\tilde{p},\tilde{p}_{\text{nl}},\tilde{\xi},\tilde{\xi}_{\text{nl}}\}\), in which \(\tilde{p}\) was a semi-local descriptor, and the other three captured the non-local information of charge density. Importantly, the MPN KEDF was subject to three crucial physical constraints, including the scaling law of Eq. 6, the FEG limit shown in Eq. 7 and the non-negativity of Pauli energy density. The MPN KEDF was implemented in the ABACUS package. [39]
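To make the constrained-network idea concrete, the following is a minimal, hypothetical sketch (not the actual MPN architecture, training procedure, or parameters) of a network mapping the four descriptors to a non-negative enhancement factor; the softplus output enforces non-negativity of the Pauli energy density, while the scaling law of Eq. 6 and the FEG limit of Eq. 7 would require additional structure that is not shown here.

```python
import numpy as np

rng = np.random.default_rng(0)

def softplus(z):
    # Numerically stable softplus: strictly positive, so the output is non-negative.
    return np.maximum(z, 0.0) + np.log1p(np.exp(-np.abs(z)))

# Hypothetical two-layer network acting on the four descriptors
# (p, p_nl, xi, xi_nl) evaluated on a batch of real-space grid points.
W1, b1 = rng.normal(size=(4, 16)), np.zeros(16)
W2, b2 = rng.normal(size=(16, 1)), np.zeros(1)

def enhancement_factor(descriptors):
    """descriptors: (n_grid, 4) array -> non-negative factor, shape (n_grid,)."""
    hidden = np.tanh(descriptors @ W1 + b1)
    return softplus(hidden @ W2 + b2).ravel()

# Illustrative call on random descriptor values.
print(enhancement_factor(rng.normal(size=(5, 4))))
```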
We systematically evaluated the performance of various KEDFs on simple metals, including bulk Li, Mg, and Al, by calculating their bulk properties, i.e., the bulk moduli, the equilibrium volumes, and the equilibrium energies. Additionally, we tested 59 alloys consisting of 20 Li-Mg alloys, 20 Mg-Al alloys, 10 Li-Al alloys, and 9 Li-Mg-Al alloys. Overall, our results demonstrated that the MPN KEDF exceeded the accuracy of semi-local KEDFs and approached the accuracy of non-local KEDFs for all of the tested systems. Additionally, the proposed MPN KEDF exhibited satisfactory transferability and stability during density optimization.
In the future, our proposed approach sheds new light on generating KEDFs for semiconductors or molecular systems, and may also serve as a reference for developing ML-based exchange-correlation functionals.
###### Acknowledgements.
The work of L.S. and M.C. was supported by the National Science Foundation of China under Grant No. 12074007 and No. 12122401. The numerical simulations were performed on the High-Performance Computing Platform of CAPT and the Bohrium platform supported by DP Technology.
|
2309.00799
|
An Elementary Construction of Modified Hamiltonians and Modified
Measures of 2D Kahan Maps
|
We show how to construct in an elementary way the invariant of the KHK
discretisation of a cubic Hamiltonian system in two dimensions. That is, we
show that this invariant is expressible as the product of the ratios of affine
polynomials defining the prolongation of the three parallel sides of a hexagon.
On the vertices of such a hexagon lie the indeterminacy points of the KHK map.
This result is obtained analysing the structure of the singular fibres of the
known invariant. We apply this construction to several examples, and we prove
that a similar result holds true for a case outside the hypotheses of the main
theorem, leading us to conjecture that further extensions are possible.
|
Giorgio Gubbiotti, David McLaren, G. R. W. Quispel
|
2023-09-02T02:36:13Z
|
http://arxiv.org/abs/2309.00799v2
|
# An elementary construction of modified Hamiltonians and modified measures of 2D Kahan maps
###### Abstract.
We show how to construct in an elementary way the invariant of the KHK discretisation of a cubic Hamiltonian system in two dimensions. That is, we show that this invariant is expressible as the product of the ratios of affine polynomials defining the prolongation of the three parallel sides of a hexagon. On the vertices of such a hexagon lie the indeterminacy points of the KHK map. This result is obtained analysing the structure of the singular fibres of the known invariant. We apply this construction to several examples, and we prove that a similar result holds true for a case outside the hypotheses of the main theorem, leading us to conjecture that further extensions are possible.
2020 Mathematics Subject Classification: 39A36; 14H70
## 1. Introduction
In recent years a lot of interest has arisen regarding the problem of finding good discretisations of continuous systems. By good discretisation, here we mean a discretisation which preserves the properties of its continuous counterpart as much as possible. Within this framework a procedure called _Kahan-Hirota-Kimura (KHK) discretisation_ became popular. This discretisation method, defined for quadratic ordinary differential equations (ODEs), was first discovered by Kahan [20, 21]. It was rediscovered independently by Hirota and Kimura, who used it to produce integrable discretisations of the Euler top [18] and the Lagrange top [23]. More recently, the KHK method has been generalised to birational discretisation of ODEs of higher order and/or degree. These novel methods are called polarisation methods (see [25] and reference therein).
The results of Hirota and Kimura attracted the attention of Petrera, Suris, and collaborators, who extended the work to a significant number of other integrable quadratic ODEs [27, 28, 29]. This in turn led to the work of Celledoni, Owren, Quispel, and collaborators, [6, 8] who showed that the KHK method is the restriction of a Runge-Kutta method to quadratic differential equations. That is, given a quadratic system
\[\dot{\mathbf{x}}=\mathbf{f}(\mathbf{x})\,,\quad\mathbf{x}\colon\mathbb{R} \to\mathbb{R}^{N},\quad\mathbf{f}\colon\mathbb{R}^{N}\to\mathbb{R}^{N}, \tag{1.1}\]
In this paper we consider the KHK discretisation of two-dimensional Hamiltonian vector fields with cubic Hamiltonian \(H=H\left(x,y\right)\). Such vector fields are quadratic, so the KHK method applies and produces a birational map admitting a rational invariant (1.7). The main purpose of this paper is to show that this invariant is completely determined
by its singularities1. In particular, we show that these singularities lie on the vertices of a hexagon and the invariant can be written as the product of the ratios of affine polynomials defining the prolongation of the three parallel sides of a hexagon. More importantly, these lines are the singular fibres of the pencil associated with the invariant. Our result is based on a previous investigation of the geometry of the two-dimensional integrable KHK discretisation given in [32]. Furthermore, our main result permits us to write down the KHK invariant knowing only the base points, plus trivial operations. In this sense with our result we show how to do a _KHK discretisation for dummies_.
Footnote 1: Note that, in Theorem 1.1 no integrability is assumed. In this paper we restrict to the two-dimensional (integrable) case.
The structure of the paper is as follows: in Section 2 we give the preliminary definitions we will use throughout the paper and prove our main result: Theorem 2.1. Section 3 is devoted to examples of the general construction. We also present an example belonging to a different class of integrable KHK discretisations presented in [7], but lying outside Theorem 2.1. In such a case, we show that a similar result holds, even though the invariant is not the product of ratios of parallel lines. In Section 4 we summarise our results and discuss open questions, motivated both by the general results and the considered examples.
## 2. Main result
In this section we state the preliminary definitions and then proceed to state and prove the main result of this paper, contained in Theorem 2.1.
### Preliminaries
Consider a pencil of curves in the affine plane \(\mathbb{C}^{2}\):
\[p\left(x,y;e_{0},e_{1}\right)=e_{0}h_{0}\left(x,y\right)+e_{1}h_{1}\left(x,y \right),\quad[e_{0}:e_{1}]\in\mathbb{P}^{1}. \tag{2.1}\]
Then, we recall the following definitions:
**Definition 2.1**.: Given a pencil of plane curves \(p\left(x,y;e_{0},e_{1}\right)\), if a point \(\left(x_{0},y_{0}\right)\in\mathbb{C}^{2}\) is such that \(h_{0}\left(x_{0},y_{0}\right)=h_{1}\left(x_{0},y_{0}\right)=0\), then it is called a _base point_ of the pencil (2.1).
**Definition 2.2**.: Given a pencil of plane curves \(p\left(x,y;e_{0},e_{1}\right)\), if a point \(\left(x_{0},y_{0};e_{0}^{\prime},e_{1}^{\prime}\right)\in\mathbb{C}^{2}\times \mathbb{P}^{1}\) is such that
\[p\left(x_{0},y_{0};e_{0}^{\prime},e_{1}^{\prime}\right)=\frac{\partial p}{ \partial x}\left(x_{0},y_{0};e_{0}^{\prime},e_{1}^{\prime}\right)=\frac{ \partial p}{\partial y}\left(x_{0},y_{0};e_{0}^{\prime},e_{1}^{\prime}\right)=0. \tag{2.2}\]
then it is called a _singular point_ for the pencil (2.1).
Intuitively, a base point is a point lying on _each curve_ of the pencil (2.1). On the other hand, a singular point lies on the curve _and_ its gradient vanishes. This means that, in general, for cubic pencils the singular points lie only on specific members of a pencil, called _singular fibres_. More formally:
**Definition 2.3**.: Given a pencil of plane curves \(p\left(x,y;e_{0},e_{1}\right)\), if the curve \(p_{s}(x,y):=p\left(x,y;e_{0}^{\prime},e_{1}^{\prime}\right)\), with \(\left(e_{0}^{\prime},e_{1}^{\prime}\right)\in\mathbb{P}^{1}\), contains a singular point, then it is called a _singular fibre_ of the pencil (2.1).
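As a toy illustration of Definitions 2.1–2.3, the following minimal sympy sketch (the pencil below is an arbitrary illustrative choice, unrelated to any KHK map) locates the base points and a singular fibre of a pencil of conics:

```python
import sympy as sp

x, y, e1 = sp.symbols('x y e1')
h0 = x*y               # first generator of the pencil
h1 = x**2 + y**2 - 1   # second generator
p = h0 + e1*h1         # affine chart e0 = 1 of the pencil

# Base points (Definition 2.1): common zeros of the two generators.
print(sp.solve([h0, h1], [x, y]))   # (0, -1), (0, 1), (-1, 0), (1, 0)

# Singular points (Definition 2.2): p = p_x = p_y = 0 for some value of e1.
print(sp.solve([p, sp.diff(p, x), sp.diff(p, y)], [x, y, e1], dict=True))
# Only solution: x = y = 0 on the fibre e1 = 0, so the singular fibre
# (Definition 2.3) is xy = 0, which factorises into two lines crossing there.
```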
If the pencil \(p\) is a pencil of elliptic curves, on a singular fibre either the _genus drops to zero_ or the polynomial is _factorisable_. A general classification of the singular fibres of elliptic curves is due to Kodaira [24]. In addition, all the possible arrangements of singular fibres on an elliptic fibration have been classified in [26]. This classification is reported in the monograph [36], where the different elliptic fibrations are distinguished using the associated Dynkin diagram, of the \(A\), \(D\), \(E\) series; see [36, Proposition 5.15]. The application of this theory to discrete integrable systems has been discussed in the monograph [10], and more recently in [14].
In the literature on the algebro-geometric structure of integrable systems the notion of singular fibres has appeared in several cases. For instance, in [33] a classification of the singular fibres of the QRT biquadratics, (see [34, 35]) was presented. In [4] it was noted that for minimal elliptic curves of degree higher than three the singular fibre is unique. Finally, in [3] the notion of singular fibre was used to build several de-autonomisations of QRT maps, see [17].
Consider now a birational map \(\mathbf{\Phi}\colon\mathbb{C}^{2}\to\mathbb{C}^{2}\). As usual an _invariant_ is a scalar function \(h=h(x,y)\) constant under iteration of the birational map \(h(\mathbf{\Phi}(x,y))=h(x,y)\). In the case of a rational invariant \(h=h_{0}/h_{1}\), with \(h_{i}\in\mathbb{C}[x,y]\), the associated pencil \(p=e_{0}h_{0}+e_{1}h_{1}\) is _covariant_ with respect to the map \(\mathbf{\Phi}\). So, in general we have a one-to-one correspondence between covariant pencils of curves and rational invariants, and we will go from one to the other alternatively throughout the paper.
Birational maps are not always defined on \(\mathbb{C}^{2}\). Using projective geometry it is possible to give a meaning to the cases when a denominator goes to zero, but there are still undetermined points, defined as follows:
**Definition 2.4**.: Consider a birational map \(\mathbf{\Phi}\colon\mathbb{C}^{2}\to\mathbb{C}^{2}\). A point \(\left(x_{0},y_{0}\right)\in\mathbb{C}^{2}\) such that all the entries of \(\mathbf{\Phi}\) or its inverse \(\mathbf{\Phi}^{-1}\) are of the form \(0/0\) is called an _indeterminacy point_.
In the integrable case the set of indeterminacy points of the map and the set of base points of the associated covariant pencil are the same, see [4, 10, 40]. In the non-integrable case the analysis of the singularities proves the non-integrability of the birational map, see [9]. In particular, for non-integrable systems the analysis of singularities can prove that the algebraic entropy of the system is positive, (meaning that the system is non-integrable [2]), and that no invariant exists [39].
### Main theorem and its proof
We state and prove the following result:
**Theorem 2.1**.: _Consider a cubic Hamiltonian \(H=H\left(x,y\right)\). Then, the invariant (1.7) is representable as the ratio of two products of three affine polynomials:_
\[\widetilde{H}\left(x,y\right)=\frac{\ell\left(x,y;\mu_{1},b_{2}\right)\ell \left(x,y;\mu_{2},b_{6}\right)\ell\left(x,y;\mu_{3},b_{4}\right)}{\ell\left(x,y;\mu_{1},b_{5}\right)\ell\left(x,y;\mu_{2},b_{3}\right)\ell\left(x,y;\mu_{ 3},b_{1}\right)} \tag{2.3}\]
_where:_
\[\ell\left(x,y;\mu,b\right)=y-\mu x-b. \tag{2.4}\]
_Remark 2.1_.: The lines in (2.3) are three pairs of parallel lines:
\[\ell\left(x,y;\mu_{1},b_{2}\right) \parallel\ell\left(x,y;\mu_{1},b_{5}\right), \tag{2.5b}\] \[\ell\left(x,y;\mu_{2},b_{3}\right) \parallel\ell\left(x,y;\mu_{2},b_{6}\right),\] (2.5c) \[\ell\left(x,y;\mu_{3},b_{1}\right) \parallel\ell\left(x,y;\mu_{3},b_{4}\right). \tag{2.5a}\]
These lines intersect pairwise in the following six points in the finite part of the plane \(\mathbb{C}^{2}\):
\[B_{1}=\left(\frac{b_{1}-b_{6}}{\mu_{2}-\mu_{3}},\frac{b_{1}\, \mu_{2}-\mu_{3}\,b_{6}}{\mu_{2}-\mu_{3}}\right), B_{2}=\left(\frac{b_{1}-b_{2}}{\mu_{1}-\mu_{3}},\frac{b_{1}\,\mu_{1}- \mu_{3}\,b_{2}}{\mu_{1}-\mu_{3}}\right),\] \[B_{3}=\left(-\frac{b_{2}-b_{3}}{\mu_{1}-\mu_{2}},-\frac{b_{2}\, \mu_{2}-\mu_{1}\,b_{3}}{\mu_{1}-\mu_{2}}\right), B_{4}=\left(-\frac{b_{3}-b_{4}}{\mu_{2}-\mu_{3}},-\frac{b_{3}\, \mu_{3}-\mu_{2}\,b_{4}}{\mu_{2}-\mu_{3}}\right),\] \[B_{5}=\left(\frac{b_{4}-b_{5}}{\mu_{1}-\mu_{3}},\frac{b_{4}\, \mu_{1}-\mu_{3}\,b_{5}}{\mu_{1}-\mu_{3}}\right), B_{6}=\left(-\frac{b_{5}-b_{6}}{\mu_{1}-\mu_{2}},-\frac{b_{5}\, \mu_{2}-\mu_{1}\,b_{6}}{\mu_{1}-\mu_{2}}\right). \tag{2.6}\]
In general, no three of the points in the previous list are collinear. A set of points with this property is said to be _in general position_.
The proof of Theorem 2.1 is based on the following technical lemmas:
**Lemma 2.2** ([32]).: _Consider a cubic Hamiltonian \(H=H\left(x,y\right)\). Then, the invariant (1.7) is represented by the ratio of the following polynomials:_
\[C\left(x,y,h\right) =(y-\mu_{1}x)(y-\mu_{2}x)(y-\mu_{3}x)+c_{5}x^{2}+c_{6}xy+c_{7}y^{ 2}+c_{8}x+c_{9}y, \tag{2.7b}\] \[D\left(x,y,h\right) =d_{1}x^{2}+d_{2}xy+d_{3}y^{2}+d_{4}x+d_{5}y+b_{1}b_{3}b_{5}-b_{2 }b_{4}b_{6}. \tag{2.7a}\]
_The explicit form of the coefficients \(c_{i}\) and \(d_{i}\) is given in equation (A.1) in Appendix A._
_Remark 2.2_.: The free parameters in (2.7) and (A.1) depend on the original parameters of the cubic Hamiltonian \(H\) through the formulas contained in Appendix A of [32]. To prove Theorem 2.1 we do not need this explicit expression; it suffices that the polynomials (2.7) uniquely determine the corresponding KHK discretisation. This in turn implies that the result of Theorem 2.1 holds independently of the KHK structure of the underlying continuous system.
_Remark 2.3_.: Since \(\deg D=2\) (2.7b) it follows that the base points of a KHK map lie on a conic section, i.e. on a curve of genus zero [37]. Moreover, by explicit computation, the set \(\mathcal{D}=\{D=0\}\) is the common denominator of the maps \(\mathbf{\Phi}\) and \(\mathbf{\Phi}^{-1}\). From the explicit form of the parameters \(d_{i}\) from Appendix A it can be proved that the real part of this conic curve is either an ellipse or an hyperbola, but not a parabola. In Section 3 we will see examples of base points lying both on (real) ellipses and hyperbolas. Finally, we observe that the fact that \(\deg D=2\) implies that when adding the line at infinity \(\mathbb{P}^{2}=\mathbb{C}^{2}\cup\{t=0\}\), there are always _three base points at infinity_ coming from the solutions of:
\[C^{h}(x:y:0,h)=0,\quad C^{h}(x:y:t,h)=t^{3}C\left(\frac{x}{t},\frac{y}{t},h \right). \tag{2.8}\]
**Lemma 2.3**.: _The pencil of cubic curves:_
\[p\left(x,y;e_{0},e_{1}\right)=e_{0}C\left(x,y,h\right)+e_{1}D\left(x,y,h\right), \tag{2.9}\]
_where the functions \(C\) and \(D\) are given by equation (2.7), admits two singular fibres for the following values of \(\left[e_{0}:e_{1}\right]\in\mathbb{P}^{1}\):_
\[\left[e_{0}^{\prime}:e_{1}^{\prime}\right]=\left[\Delta:b_{2}b_{4}b_{6}\right],\quad\left[e_{0}^{\prime\prime}:e_{1}^{\prime\prime}\right]=\left[\Delta:b_{ 1}b_{3}b_{5}\right], \tag{2.10}\]
_where \(\Delta\) is given by equation (A.2). Moreover, the corresponding singular curves in the pencil (2.9) factorise into affine polynomials as follows:_
\[p\left(x,y;e_{0}^{\prime\prime}:e_{1}^{\prime}\right) =\Delta\ell\left(x,y;\mu_{1},b_{2}\right)\ell\left(x,y;\mu_{2},b_ {6}\right)\ell\left(x,y;\mu_{3},b_{4}\right), \tag{2.11b}\] \[p\left(x,y;e_{0}^{\prime\prime}:e_{1}^{\prime\prime}\right) =\Delta\ell\left(x,y;\mu_{1},b_{5}\right)\ell\left(x,y;\mu_{2},b_ {3}\right)\ell\left(x,y;\mu_{3},b_{1}\right). \tag{2.11a}\]
Proof.: The proof is achieved by direct computation using computer algebra software, e.g. Maple. In principle, we have to solve the system (2.2) where \(p\) is given by the pencil (2.9). This approach is quite cumbersome from the computational point of view, as it involves the solution of nonlinear algebraic equations. We propose the following approach, which proved to be easier to implement. Take a general affine polynomial with unspecified coefficients:
\[L=\alpha x+\beta y+\gamma. \tag{2.12}\]
Using polynomial long division with respect to \(x\) we can write:
\[p\left(x,y;e_{0},e_{1}\right)=Q\left(x,y;e_{0},e_{1}\right)L\left(x,y\right)+ R\left(y;e_{0},e_{1}\right). \tag{2.13}\]
If we impose \(R\equiv 0\), then we will have \(L\mid p\). We can obtain such conditions by setting to zero all the coefficients with respect to the various powers of \(y\) in \(R\). For instance the coefficient of \(y^{3}\) is:
\[\Delta e_{0}(\beta\mu_{3}+\alpha)(\beta\mu_{2}+\alpha)(\beta\mu_{1}+\alpha)=0. \tag{2.14}\]
So, we can choose three different values for \(\alpha\). This already suggests that there will be three different affine factors.
We start by choosing \(\alpha=-\beta\mu_{1}\). Plugging it back into \(R\) we obtain the following value for \(\left[e_{0}:e_{1}\right]\):
\[\left[e_{0}:e_{1}\right]=\left[\beta\left(b_{2}-b_{5}\right)\Delta:\beta \left(b_{1}b_{2}b_{3}b_{5}-b_{2}b_{4}b_{5}b_{6}\right)-\gamma\Delta\right]. \tag{2.15}\]
This finally yields the following values for \(\gamma\):
\[\gamma=-\beta b_{2},-\beta b_{5},-\frac{\mu_{1}\beta(b_{1}b_{3}-b_{4}b_{6})}{ \left(b_{1}-b_{4}\right)\mu_{2}+\left(b_{3}-b_{6}\right)\mu_{3}}. \tag{2.16}\]
Plugging (2.16) back into (2.15) we obtain the two solutions presented in (2.10), plus a third one:
\[\left[e_{0}^{\prime\prime\prime}:e_{1}^{\prime\prime\prime}\right]=\left[ \begin{matrix}\Delta(b_{2}-b_{5})\left(\left(b_{1}-b_{4}\right)\mu_{2}+\left( b_{3}-b_{6}\right)\mu_{3}\right):\\ \left(b_{1}b_{3}-b_{4}b_{6}\right)\left(-\Delta\mu_{1}+b_{2}b_{5}(b_{1}-b_{4}) \mu_{2}+b_{2}b_{5}(b_{3}-b_{6})\mu_{3}\right)\end{matrix}\right]. \tag{2.17}\]
While plugging (2.10) into the pencil (2.9) yields the two singular fibres (2.11), this third value does not give rise to a singular fibre.
Repeating the same argument with the other possible values of \(\alpha\) in (2.14) we obtain the same result. This concludes the proof.
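The mechanism behind the proof can be reproduced on any explicit cubic. Below is a minimal sympy sketch (the factorisable cubic is an arbitrary illustrative choice, not related to a particular KHK map): a line \(y=mx+b\) is a component of the cubic exactly when the restriction of the polynomial to that line vanishes identically, a condition equivalent to imposing \(R\equiv 0\) above.

```python
import sympy as sp

x, y, m, b = sp.symbols('x y m b')
# Illustrative factorisable cubic: the product of three affine polynomials.
p = sp.expand((y - x - 1) * (y + x) * (y - 2*x + 3))

# y = m*x + b divides p iff p(x, m*x + b) is identically zero in x.
conditions = sp.Poly(p.subs(y, m*x + b), x).coeffs()
print(sp.solve(conditions, [m, b], dict=True))
# Returns (m, b) = (1, 1), (-1, 0), (2, -3): the three lines above, in some order.
```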
Proof of Theorem 2.1.: Consider the pencil built with the two polynomials in (2.11):
\[P=\varepsilon_{0}p\left(x,y;e_{0}^{\prime}:e_{1}^{\prime}\right)+\varepsilon_ {1}p\left(x,y;e_{0}^{\prime\prime}:e_{1}^{\prime\prime}\right), \tag{2.18}\]
where \([\varepsilon_{0}:\varepsilon_{1}]\in\mathbb{P}^{1}\). The following invertible change of parameters:
\[[e_{0}:e_{1}]=[-\left(\varepsilon_{0}+\varepsilon_{1}\right)\Delta:b_{1}b_{3} b_{5}\varepsilon_{1}+b_{2}b_{4}b_{6}\varepsilon_{0}]\,, \tag{2.19}\]
transforms the pencil (2.9) into the pencil (2.18). Using the result of Lemma 2.2 we have that the pencil (2.18) is covariant under the KHK discretisation of a cubic Hamiltonian vector field in the variables \(\left(x,y\right)\) (1.7). This in turn implies that the ratio (2.3) is an invariant for the KHK discretisation of a cubic Hamiltonian vector field. This concludes the proof of the theorem.
_Remark 2.4_.: An alternative proof of Theorem 2.1 can be obtained through the theory of Darboux polynomials [5]. Indeed, consider the KHK discretisation associated with the most general cubic Hamiltonian in \(\left(x,y\right)\):
\[\frac{x^{\prime}-x}{h} =a_{2}x^{\prime}x+a_{3}(x^{\prime}y+xy^{\prime})+a_{4}y^{\prime}y +a_{6}(x^{\prime}+x)+a_{7}(y^{\prime}+y)+a_{9}, \tag{2.20b}\] \[\frac{y^{\prime}-y}{h} =-a_{1}x^{\prime}x-a_{2}(x^{\prime}y+xy^{\prime})-a_{3}y^{\prime }y-a_{5}(x^{\prime}+x)-a_{6}(y^{\prime}+y)-a_{8}. \tag{2.20a}\]
The coefficients \(a_{i}\) are linked to the coefficients \(b_{i}\) and \(\mu_{i}\) through the results of [32], which we report in formula (B.1) presented in Appendix B. Now, consider the two polynomials given in Equation (2.11) evaluated on \(\left(x^{\prime},y^{\prime}\right)\) from Equations (2.20a) and (2.20b):
\[p\left(x^{\prime},y^{\prime};e_{0}^{\prime}:e_{1}^{\prime}\right) =-\frac{b_{14}b_{25}b_{36}P_{1}P_{2}P_{3}}{\mu_{12}\mu_{13}\mu_{2 3}Q^{3}}p\left(x,y;e_{0}^{\prime}:e_{1}^{\prime}\right), \tag{2.21b}\] \[p\left(x^{\prime},y^{\prime};e_{0}^{\prime\prime}:e_{1}^{\prime \prime}\right) =-\frac{b_{14}b_{25}b_{36}P_{1}P_{2}P_{3}}{\mu_{12}\mu_{13}\mu_{2 3}Q^{3}}p\left(x,y;e_{0}^{\prime\prime}:e_{1}^{\prime\prime}\right). \tag{2.21a}\]
where \(b_{ij}=b_{i}-b_{j}\), \(\mu_{ij}=\mu_{i}-\mu_{j}\) and the polynomials \(P_{i}\) and \(Q\) are given in formula (C.1) presented in Appendix C. This implies that the polynomials (2.11) are Darboux polynomials with the same cofactor. From the general theory of Darboux polynomials this implies that their ratio is an invariant.
In Figure 1 we show an example of pencil (2.9) where we highlight the two singular curves \(C\) and \(D\).
Theorem 2.1 implies a simple algorithm to construct the invariant of the given KHK discretisation (1.3) of a cubic Hamiltonian vector field (1.4). Using the correspondence between base points of a pencil and the indeterminacy points of the corresponding map we obtain that, given such a map, the corresponding indeterminacy points will lie on the vertices of a hexagon. Considering the lines obtained by prolonging the edges of the hexagon, we can construct the invariant (2.3). In the next section we will see several examples of this phenomenon.
Before moving to the example section, we give an interpretation of the content of Theorem 2.1 in the context of Oguiso and Shioda's classification of 74 types of singular fibre configurations of rational elliptic surfaces [26]:
**Corollary 2.4**.: _Consider a cubic Hamiltonian \(H=H\left(x,y\right)\) for generic values of the parameters. Then, the singular fibre configuration of the pencil of elliptic curves associated to the invariant (1.7) is of type \(A_{2}^{2}\oplus A_{1}\)._
Figure 1. An example of the pencil (2.9) with \(b_{1}=-b_{4}=2\), \(b_{2}=b_{3}=-b_{5}=-b_{6}=1\), \(\mu_{1}=0\), \(\mu_{2}=1\), and \(\mu_{3}=-1\). In red and purple are shown the two singular curves, each factorised into three lines. The base points are highlighted in black.
Proof.: From Lemma 2.2 and the proof of Theorem 2.1 we know that the pencil of elliptic curves associated to the invariant (1.7) has two different representations, one given by Equation (2.9) and one given by Equation (2.18).
From Remark 2.3 we have that the fibre \([e_{0}:e_{1}]=[0:1]\) is singular. Compactifying again to \(\mathbb{P}^{2}\) we have that the zero locus \(\mathcal{D}\) is reducible:
\[\mathcal{D}=\left\{\,tD^{h}(x:y:t,h)=0\right\}=\left\{\,t=0\right\}\cup\left\{ \,D^{h}(x:y:t,h)=0\right\}, \tag{2.22}\]
where
\[D^{h}(x:y:t,h)=t^{2}D\left(\frac{x}{t},\frac{y}{t}\right). \tag{2.23}\]
Now, for generic \(D\), by the properties of homogeneous polynomials in two variables we have that:
\[\left|\left\{\,t=0\right\}\cap\left\{\,D^{h}(x:y:t,h)=0\right\}\right|=2, \tag{2.24}\]
i.e. the singular fibre associated to \(\mathcal{D}\) is of type \(A_{1}\) (two non-tangential intersections).
From Equation (2.18) we have two singular fibres at \([\varepsilon_{0}:\varepsilon_{1}]=[1:0],[0:1]\). In both cases, we have three lines, which for generic values of the parameters intersect in three different points. That is, they form two singular fibres of type \(A_{2}\). This concludes the proof of the corollary.
_Remark 2.5_.: The rational elliptic surface with singular fibres configuration of type \(A_{2}^{2}\oplus A_{1}\) is listed in [36, Table 8.2] as number 20. Note that, for particular values of the parameters, cases whose singular fibre configuration _contains_\(A_{2}^{2}\oplus A_{1}\) are possible, e.g. number 40 or number 61.
## 3. Examples
In this section we show in some concrete examples how to construct the invariant from the indeterminacy points of a given map. We will also show that a similar result holds in the case of KHK discretisation obtained from quadratic Hamiltonians with an affine gauge function.
### Henon-Heiles potential
Consider the so-called Henon-Heiles (HH) potential [16]:
\[H=\frac{y^{2}+x^{2}}{2}+yx^{2}-\frac{y^{3}}{3}. \tag{3.1}\]
The corresponding system of Hamiltonian equations is:
\[\dot{x}=x^{2}-y^{2}+y,\quad\dot{y}=-2xy-x. \tag{3.2}\]
It is well known that in the continuous case the HH potential is factorisable in three lines forming a triangle. These three lines govern the behaviour of the complete HH system \(H^{\prime}=T+H\), where \(T=T\left(p_{x},p_{y}\right)\) is the standard kinetic energy in the conjugate momenta of \(x\) and \(y\), \(p_{x}\) and \(p_{y}\) respectively. For a complete discussion on this topic we refer to [38].
In [8] it was shown that the continuous triangle was preserved by the KHK discretisation of (3.2). Here, following Section 2, we will show that there exist
two more sets of lines which give a factorised representation of the invariant of the discrete systems. We will comment on how these two invariants are pushed to infinity in the continuum limit \(h\to 0\).
The KHK discretisation of equation (3.2) is:
\[\frac{x^{\prime}-x}{h}=x^{\prime}x-y^{\prime}y+\frac{y^{\prime}+y}{2},\quad \frac{y^{\prime}-y}{h}=-xy^{\prime}-x^{\prime}y-\frac{x+x^{\prime}}{2}. \tag{3.3}\]
The indeterminacy points of the associated map are the following six:
\[B_{1}=\left(\frac{\sqrt{3}}{4}+\frac{1}{2h},\frac{1}{4}-\frac{ \sqrt{3}}{2h}\right),\,B_{2}=\left(\frac{1}{h},-\frac{1}{2}\right),\,B_{3}= \left(-\frac{\sqrt{3}}{4}+\frac{1}{2h},\frac{1}{4}+\frac{\sqrt{3}}{2h}\right),\] \[B_{4}=\left(\frac{\sqrt{3}}{4}-\frac{1}{2h},\frac{1}{4}+\frac{ \sqrt{3}}{2h}\right),\,B_{5}=\left(-\frac{1}{h},-\frac{1}{2}\right),\,B_{6}= \left(-\frac{\sqrt{3}}{4}-\frac{1}{2h},\frac{1}{4}-\frac{\sqrt{3}}{2h}\right) \tag{3.4}\]
The indeterminacy points are numbered in clockwise direction and lie on the vertices of a regular hexagon. Following Remark 2.3 we observe that these base points lie on the circle:

\[x^{2}+y^{2}=\frac{1}{4}+\frac{1}{h^{2}}, \tag{3.5}\]

of radius \(r=\sqrt{1/4+1/h^{2}}\).
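These statements can be checked independently with a computer algebra system. The following minimal sympy sketch (variable names are illustrative) solves (3.3) for \((x^{\prime},y^{\prime})\) and verifies that the common denominator of the resulting birational map vanishes at the indeterminacy points (3.4), in agreement with Remark 2.3 and with the circle (3.5):

```python
import sympy as sp

x, y, xp, yp, h = sp.symbols('x y xp yp h', real=True)

# The KHK map (3.3), written as a system that is linear in (xp, yp).
eqs = [sp.Eq((xp - x)/h, xp*x - yp*y + (yp + y)/2),
       sp.Eq((yp - y)/h, -x*yp - xp*y - (x + xp)/2)]
sol = sp.solve(eqs, [xp, yp], dict=True)[0]

# Common denominator of the map: proportional to 4 + h^2 - 4*h^2*(x^2 + y^2).
den = sp.factor(sp.denom(sp.together(sol[xp])))
print(den)

# It vanishes at the indeterminacy points (3.4), e.g. at B_1 and B_2,
# which lie on the circle x^2 + y^2 = 1/4 + 1/h^2.
B1 = {x: sp.sqrt(3)/4 + 1/(2*h), y: sp.Rational(1, 4) - sp.sqrt(3)/(2*h)}
B2 = {x: 1/h, y: -sp.Rational(1, 2)}
print(sp.simplify(den.subs(B1)), sp.simplify(den.subs(B2)))   # 0 0
```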
Following the algorithm presented at the end of Section 2 we introduce the following set of lines:
\[\overline{B_{1}B_{2}}=y-\frac{\left(3h-2\sqrt{3}\right)x}{h\sqrt{3 }-2}+\frac{\sqrt{3}h^{2}-4\sqrt{3}+4h}{2h\left(h\sqrt{3}-2\right)}, \tag{3.6b}\] \[\overline{B_{2}B_{3}}=y+\frac{\left(3h+2\sqrt{3}\right)x}{h\sqrt{3 }+2}+\frac{\sqrt{3}h^{2}-4\sqrt{3}-4h}{2h\left(h\sqrt{3}+2\right)},\] (3.6c) \[\overline{B_{3}B_{4}}=y-\frac{h+2\sqrt{3}}{4h},\] (3.6d) \[\overline{B_{4}B_{5}}=y-\frac{\left(3h+2\sqrt{3}\right)x}{h\sqrt{3 }+2}+\frac{\sqrt{3}h^{2}-4\sqrt{3}-4h}{2h\left(h\sqrt{3}+2\right)},\] (3.6e) \[\overline{B_{5}B_{6}}=y+\frac{\left(3h-2\sqrt{3}\right)x}{h\sqrt{3 }-2}+\frac{\sqrt{3}h^{2}-4\sqrt{3}+4h}{2h\left(h\sqrt{3}-2\right)},\] (3.6f) \[\overline{B_{6}B_{1}}=y+\frac{h-2\sqrt{3}}{4h}. \tag{3.6a}\]
We then build the invariant (2.3) as:
\[\widetilde{H}=\frac{\overline{B_{1}B_{2}}\,\overline{B_{3}B_{4}}\,\overline{B_ {5}B_{6}}}{\overline{B_{2}B_{3}}\,\overline{B_{4}B_{5}}\,\overline{B_{6}B_{1}}}. \tag{3.7}\]
Taking the continuum limit \(h\to 0\) we have:
\[\widetilde{H}=-1-\sqrt{3}h-\frac{3}{2}h^{2}-\left(\frac{4}{\sqrt{3}}H+\frac{19}{36}\sqrt{3}\right)h^{3}+O\left(h^{4}\right), \tag{3.8}\]
so we see that we recover the continuum first integral (3.1).
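The construction can also be tested numerically. The sketch below (with an illustrative step size and initial point) builds the six lines directly from the points (3.4); they differ from (3.6) only by overall constant factors, so the resulting ratio is a constant multiple of (3.7) and its invariance along orbits of (3.3) is unaffected.

```python
import numpy as np

h = 0.1                      # illustrative step size
s = np.sqrt(3.0)

def khk_step(x, y):
    """One step of the KHK map (3.3): solve the system, linear in (x', y')."""
    A = np.array([[1.0 - h*x, h*y - h/2.0],
                  [h*y + h/2.0, 1.0 + h*x]])
    rhs = np.array([x + h*y/2.0, y - h*x/2.0])
    return np.linalg.solve(A, rhs)

# Indeterminacy points (3.4), numbered as in the text.
B = [( s/4 + 1/(2*h), 1/4 - s/(2*h)), ( 1/h, -1/2), (-s/4 + 1/(2*h), 1/4 + s/(2*h)),
     ( s/4 - 1/(2*h), 1/4 + s/(2*h)), (-1/h, -1/2), (-s/4 - 1/(2*h), 1/4 - s/(2*h))]

def line(P, Q):
    """Affine polynomial vanishing on the line through P and Q."""
    return lambda u, v: (v - P[1])*(Q[0] - P[0]) - (u - P[0])*(Q[1] - P[1])

num = [line(B[0], B[1]), line(B[2], B[3]), line(B[4], B[5])]   # B1B2, B3B4, B5B6
den = [line(B[1], B[2]), line(B[3], B[4]), line(B[5], B[0])]   # B2B3, B4B5, B6B1

def invariant(x, y):
    return np.prod([l(x, y) for l in num]) / np.prod([l(x, y) for l in den])

x, y = 0.1, 0.05             # illustrative initial point near the origin
vals = []
for _ in range(200):
    vals.append(invariant(x, y))
    x, y = khk_step(x, y)
print("relative drift of the invariant:", (max(vals) - min(vals)) / abs(vals[0]))
```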
Given the hexagon formed by \(B_{i}\) we can construct three additional lines:
\[\overline{B_{1}B_{4}}=y-1+\sqrt{3}x,\quad\overline{B_{3}B_{6}}=y-1-\sqrt{3}x, \quad\overline{B_{2}B_{5}}=y+\frac{1}{2}, \tag{3.9}\]
These lines form the original triangle of the continuous potential HH system. Then we can form the polynomial:
\[P=\overline{B_{1}B_{4}}\,\overline{B_{3}B_{6}}\,\overline{B_{2}B_{5}}, \tag{3.10}\]
and prove by direct computation that it is a Darboux polynomial with the same cofactor as the numerator and denominator of (3.7), see Remark 2.4. This implies that in the potential HH case we can construct the following two additional invariants:
\[\widetilde{H}_{1}=\frac{\overline{B_{1}B_{4}}\,\overline{B_{3}B_{6}}\, \overline{B_{2}B_{5}}}{\overline{B_{2}B_{3}}\,\overline{B_{4}B_{5}}\, \overline{B_{6}B_{1}}}.\quad\widetilde{H}_{2}=\frac{\overline{B_{1}B_{4}}\, \overline{B_{3}B_{6}}\,\overline{B_{2}B_{5}}}{\overline{B_{1}B_{2}}\, \overline{B_{3}B_{4}}\,\overline{B_{5}B_{6}}}. \tag{3.11}\]
Taking the continuum limit \(h\to 0\) we have:
\[\widetilde{H}_{1}=-\widetilde{H}_{2}=-\frac{9\sqrt{3}}{2h^{3}}\left(H-\frac{1 }{6}\right)+O\left(\frac{1}{h^{2}}\right), \tag{3.12}\]
where we used the fact \(P=-3H+1/2\). So we see that also in this case we recover the continuum first integral (3.1) through the continuum limit.
A graphical representation of the situation is given in Figure 2. In particular we see that the lines in (3.9) are independent of \(h\), so they are preserved by the continuum limit. On the other hand the lines in (3.6) are pushed to infinity as \(h\to 0\). This explains why in the continuous HH system in the finite part of the plane only the triangle defined by (3.1) is present. Finally, from a direct computation we see that the singular fibre configuration of the pencil associated to the invariant (3.7) is of type \(A_{2}^{3}\oplus A_{1}\), and there is a singular fibre of type \(A_{0}\) represented by a nodal cubic, i.e. it is the elliptic fibration number \(61\) from [36, Table 8.2]. That is, the structure is more special than the generic one, described in Corollary 2.4, and explains the additional triangle-like structure observed.
### The most general factorisable example
In this subsection we consider a generalisation of the HH example. That is, we consider the most general cubic Hamiltonian \(H\) factorisable in three affine factors. Up to canonical transformations, this Hamiltonian is:
\[H=(x-x_{0})\left(y-y_{0}\right)\left(Ax+By+C\right), \tag{3.13}\]
where \(x_{0}\), \(y_{0}\), \(A\), \(B\), and \(C\) are arbitrary constants. The corresponding system of Hamiltonian equations is:
\[\dot{x}=(x-x_{0})(Ax+2By+C-By_{0}),\quad\dot{y}=-(y-y_{0})(2Ax+By+C-Ax_{0}). \tag{3.14}\]
As in the HH case, the factorised structure gives rise to a triangle-like configuration. We will discuss how this structure transforms under the KHK discretisation.
The KHK discretisation of equation (3.14), constructed with the rule (1.2) is
\[\frac{x^{\prime}-x}{h} =\frac{B}{2}\left[(2y-y_{0})x^{\prime}-2(y-y_{0}+y^{\prime})x_{0}- x(y_{0}-2y^{\prime})\right]\] \[-\frac{x_{0}}{2}\left[A(x+x^{\prime})+2C\right]+\frac{1}{2}(2Ax+C )x^{\prime}+\frac{C}{2}x, \tag{3.15b}\] \[\frac{y^{\prime}-y}{h} =-\frac{A}{2}\left[(2x-x_{0})y^{\prime}-2(x-x_{0}+x^{\prime})y_{ 0}-y(x_{0}-2x^{\prime})\right]\] \[+\frac{y_{0}}{2}\left[B(y+y^{\prime})+2C\right]-\frac{1}{2}(2Ax+C )y^{\prime}-\frac{C}{2}y, \tag{3.15a}\]
and possesses the following indeterminacy points:
\[\begin{split} B_{1}=\left(x_{0},+\frac{y_{0}}{2}-\frac{1}{B}\left( \frac{Ax_{0}+C}{2}-\frac{1}{h}\right)\right),\quad B_{2}=\left(\frac{x_{0}}{2} -\frac{1}{A}\left(\frac{By_{0}+C}{2}-\frac{1}{h}\right),y_{0}\right),\\ B_{3}=\left(\frac{x_{0}}{2}-\frac{1}{A}\left(\frac{By_{0}+C}{2}- \frac{1}{h}\right),\frac{y_{0}}{2}-\frac{1}{B}\left(\frac{Ax_{0}+C}{2}+\frac{1 }{h}\right)\right),\\ B_{4}=\left(x_{0},\frac{y_{0}}{2}-\frac{1}{B}\left(\frac{Ax_{0}+C }{2}+\frac{1}{h}\right)\right),\quad B_{5}=\left(\frac{x_{0}}{2}-\frac{1}{A} \left(\frac{By_{0}+C}{2}+\frac{1}{h}\right),y_{0}\right),\\ B_{6}=\left(\frac{x_{0}}{2}-\frac{1}{A}\left(\frac{By_{0}+C}{2}+ \frac{1}{h}\right),\frac{y_{0}}{2}-\frac{1}{B}\left(\frac{Ax_{0}+C}{2}-\frac{ 1}{h}\right)\right)\end{split} \tag{3.16}\]
The indeterminacy points are numbered in clockwise direction and lie on the vertices of a hexagon. Following Remark 2.3 we observe that these base points lie on the ellipse:
\[\begin{split}\frac{x^{2}}{B^{2}}+\frac{y^{2}}{A^{2}}+\frac{xy}{AB} &+\frac{1}{B^{2}}\left(\frac{C}{A}-x_{0}\right)x+\frac{1}{A^{2}} \left(\frac{C}{B}-y_{0}\right)y+\left(\frac{x_{0}+y_{0}}{2}\right)^{2}\\ &=\frac{1}{h^{2}A^{2}B^{2}}+\frac{C}{2AB}\left(\frac{x_{0}}{B}+ \frac{y_{0}}{A}\right)-\frac{1}{4}\frac{C^{2}}{A^{2}B^{2}}.\end{split} \tag{3.17}\]
In the same way as in the previous section we introduce the following set of lines:
\[\overline{B_{1}B_{2}} =x+\frac{By}{A}-\frac{Ahx_{0}+Bhy_{0}-Ch+2}{2Ah}, \tag{3.18b}\] \[\overline{B_{2}B_{3}} =x-\frac{Ahx_{0}-Bhy_{0}-Ch+2}{2Ah},\] (3.18c) \[\overline{B_{3}B_{4}} =y+\frac{Ahx_{0}-Bhy_{0}+Ch+2}{2Bh},\] (3.18d) \[\overline{B_{4}B_{5}} =x+\frac{By}{A}-\frac{Ahx_{0}+Bhy_{0}-Ch-2}{2Ah},\] (3.18e) \[\overline{B_{5}B_{6}} =x-\frac{Ahx_{0}-Bhy_{0}-Ch-2}{2Ah},\] (3.18f) \[\overline{B_{6}B_{1}} =y+\frac{Ahx_{0}-Bhy_{0}+Ch-2}{2Bh}. \tag{3.18a}\]
We then build the invariant (2.3) as:
\[\widetilde{H}=\frac{\overline{B_{1}B_{2}}\,\overline{B_{3}B_{4}}\,\overline{B _{5}B_{6}}}{\overline{B_{2}B_{3}}\,\overline{B_{4}B_{5}}\,\overline{B_{6}B_{ 1}}}. \tag{3.19}\]
Taking the continuum limit \(h\to 0\) we have:
\[\begin{split}\widetilde{H}=-1-\left(Ax_{0}+By_{0}+C\right)h& +\left(Ax_{0}+By_{0}+C\right)^{2}\frac{h^{2}}{2}+\\ &+\left[2ABH\left(x,y\right)-\kappa\right]h^{3}+O\left(h^{4} \right),\end{split} \tag{3.20}\]
where \(\kappa=\kappa\left(A,B,C,x_{0},y_{0}\right)\) is a constant. So, also in this case the continuum first integral (3.13) arises at the third order in \(h\).
In addition, we have the following lines:
\[\overline{B_{1}B_{4}}=x-x_{0},\quad\overline{B_{3}B_{6}}=Ax+By+C,\quad\overline{ B_{2}B_{5}}=y-y_{0}, \tag{3.21}\]
which are three factors of the original Hamiltonian (3.13). Then we can form the polynomial:
\[P=\overline{B_{1}B_{4}}\,\overline{B_{3}B_{6}}\,\overline{B_{2}B_{5}}, \tag{3.22}\]
and prove by direct computation that it is a Darboux polynomial with the same cofactor as the numerator and denominator of (3.19), see Remark 2.4. This implies that we can construct the following two additional invariants:
\[\widetilde{H}_{1}=\frac{\overline{B_{1}B_{4}}\,\overline{B_{3}B_{6}}\,\overline{B_{2}B_{5}}}{\overline{B_{2}B_{3}}\,\overline{B_{4}B_{5}}\,\overline{B_{6}B_{1}}}.\quad\widetilde{H}_{2}=\frac{\overline{B_{1}B_{4}}\,\overline{B_{3}B_{6}}\,\overline{B_{2}B_{5}}}{\overline{B_{1}B_{2}}\,\overline{B_{3}B_{4}}\,\overline{B_{5}B_{6}}}. \tag{3.23}\]
Taking the continuum limit \(h\to 0\) we have:
\[\widetilde{H}_{1}=-\widetilde{H}_{2}=-A^{2}BH\left(x,y\right)h^{3}+O\left(h^{4 }\right), \tag{3.24}\]
So we see that also in this case we recover the continuum first integral (3.13) through the continuum limit.
To summarise, in the most general factorisable case the three factorised lines are preserved independently of \(h\). This explains why in the continuous factorised system in the finite part of the plane only the triangle defined by (3.13) is present. On the other hand, the two families of lines (3.18), alongside the base points (3.16), are pushed to the line at infinity as \(h\to 0\). See Figure 3 for a graphical representation. Like in the case of the HH potential, it is possible to see that the singular fibre configuration of the pencil associated to the invariant (3.19) is of type \(A_{2}^{3}\oplus A_{1}\), and there is a singular fibre of type \(A_{0}\) represented by a nodal cubic, i.e. it is the elliptic fibration number \(61\) from [36, Table 8.2]. So, the structure is more special than the generic one, described in Corollary 2.4, and explains the additional triangle-like structure observed.
### A non-factorisable example
In the past two subsections we gave some examples of continuum Hamiltonians factorisable into three affine polynomials. In this subsection we show what happens in the case where such a factorisation is not possible. Consider the following Hamiltonian:
\[H=y\left(x^{2}-y^{2}-1\right). \tag{3.25}\]
The polynomial \(P=x^{2}-y^{2}-1\in\mathbb{C}\left[x,y\right]\) is not factorisable. So, the Hamiltonian (3.25) is made of a linear factor and an irreducible quadratic one. The corresponding system of Hamiltonian equations is:
\[\dot{x}=x^{2}-3y^{2}-1,\quad\dot{y}=-2xy. \tag{3.26}\]
In Figure 4 we show the level curves of the continuous Hamiltonian (3.25), where it is clear that no triple linear factorisation occurs.
Following the rule (1.2) we have the following KHK discretisation
\[\frac{x^{\prime}-x}{h}=xx^{\prime}-3yy^{\prime}-1,\quad\frac{y^{\prime}-y}{h}= -xy^{\prime}-x^{\prime}y, \tag{3.27}\]
which possesses the following indeterminacy points:
\[\begin{split}& B_{1}=\left(-\frac{1}{2}\left(h+\frac{1}{h}\right), \frac{\sqrt{\delta}}{6h}\right),\quad B_{2}=\left(\frac{1}{2}\left(h+\frac{1}{h }\right),\frac{\sqrt{\delta}}{6h}\right),\quad B_{3}=\left(\frac{1}{h},0\right),\\ & B_{4}=\left(\frac{1}{2}\left(h+\frac{1}{h}\right),-\frac{\sqrt{ \delta}}{6h}\right),\quad B_{5}=\left(-\frac{1}{2}\left(h+\frac{1}{h}\right),- \frac{\sqrt{\delta}}{6h}\right),\quad B_{6}=\left(-\frac{1}{h},0\right),\end{split} \tag{3.28}\]
where
\[\delta=3\left(1-h^{2}\right)\left(h^{2}+3\right). \tag{3.29}\]
Following Remark 2.3 we observe that these base points lie on the ellipse:
\[h^{2}x^{2}+3h^{2}y^{2}=1. \tag{3.30}\]
_Remark 3.1_.: Note that \(\delta>0\) if \(-1<h<1\) which justifies taking the square roots in (3.28). Since we are interested in the limit \(h\to 0^{+}\) this is no restriction. If one wishes to consider different values of \(h\) one can consider the base points as lying on a hexagon on the plane in \(\Pi=\mathbb{R}\times\mathrm{i}\mathbb{R}\subset\mathbb{C}^{2}\).
Figure 4. The level curves \(H=\varepsilon\) with \(H\) given by equation (3.25) and 32 different values of \(\varepsilon\). It is possible to note that there is only a linear factor (the line \(y=0\)) and that the base points are pushed to the line at infinity in \(\mathbb{P}^{2}\).
In the same way as in the previous section we introduce the following set of lines:
\[\overline{B_{1}B_{2}} =y-\frac{\sqrt{\delta}}{6h}, \tag{3.31b}\] \[\overline{B_{2}B_{3}} =y+\frac{1}{3}\frac{\sqrt{\delta}}{1-h^{2}}\,x-\frac{1}{3h}\frac{ \sqrt{\delta}}{1-h^{2}},\] (3.31c) \[\overline{B_{3}B_{4}} =y-\frac{1}{3}\frac{\sqrt{\delta}}{1-h^{2}}\,x+\frac{1}{3h}\frac{ \sqrt{\delta}}{1-h^{2}},\] (3.31d) \[\overline{B_{4}B_{5}} =y+\frac{\sqrt{\delta}}{6h},\] (3.31e) \[\overline{B_{5}B_{6}} =y+\frac{1}{3}\frac{\sqrt{\delta}}{1-h^{2}}\,x+\frac{1}{3h}\frac{ \sqrt{\delta}}{1-h^{2}},\] (3.31f) \[\overline{B_{6}B_{1}} =y-\frac{1}{3}\frac{\sqrt{\delta}}{1-h^{2}}\,x-\frac{1}{3h}\frac{ \sqrt{\delta}}{1-h^{2}}. \tag{3.31a}\]
We then build the invariant (2.3) as:
\[\widetilde{H}=\frac{\overline{B_{1}B_{2}}\,\overline{B_{3}B_{4}}\,\overline{B_{5}B_{6}}}{\overline{B_{2}B_{3}}\,\overline{B_{4}B_{5}}\,\overline{B_{6}B_{1}}}. \tag{3.32}\]
Taking the continuum limit \(h\to 0^{+}\) we have:
\[\widetilde{H}=-1-4H\left(x,y\right)h^{3}+O\left(h^{4}\right), \tag{3.33}\]
so we see that we recover the continuum first integral (3.25).
Like in the previous cases, given the hexagon formed by \(B_{i}\) we can construct three diagonal lines:
\[\overline{B_{1}B_{4}}=x+\frac{3(h^{2}+1)}{\sqrt{\delta}}\,y,\quad\overline{B_ {3}B_{6}}=y,\quad\overline{B_{2}B_{5}}=y-\frac{1}{3}\frac{\sqrt{\delta}}{h^{2} +1}\,x. \tag{3.34}\]
Considering their product:
\[P=\overline{B_{1}B_{4}}\,\overline{B_{3}B_{6}}\,\overline{B_{2}B_{5}}, \tag{3.35}\]
we find that this polynomial is not a Darboux polynomial for the map (3.27). In particular we have:
\[P=\left(H+y\right)+O(h), \tag{3.36}\]
which does not reduce to the continuous Hamiltonian, but to its factorisable part: \(H+y=y\left(x-y\right)\left(x+y\right)\).
To prove that the only linearly factorisable singular fibres are numerator and denominator of (3.32) we consider the associated pencil:
\[p=e_{0}\overline{B_{1}B_{2}}\,\overline{B_{3}B_{4}}\,\overline{B_{5}B_{6}}+e_ {1}\overline{B_{2}B_{3}}\,\overline{B_{4}B_{5}}\,\overline{B_{6}B_{1}}. \tag{3.37}\]
Excluding the trivial singular fibres at \((e_{0}:e_{1})=(0:1)\) and \((e_{0}:e_{1})=(1:0)\) this pencil has the following singular fibres:
\[p_{1,s}=y\left[\left(1+\frac{h^{2}}{3}\right)x^{2}-\left(1-h^{2}\right)y^{2}- 1-\frac{h^{2}}{3}\right] \tag{3.38a}\]
\[p_{2,s} =\frac{\sqrt{3\delta}}{6}h^{3}\sqrt{3+h^{2}}\left[4h^{3}+\mathrm{i} \sqrt{\delta}\left(3+h^{2}\right)\right]\left[1-\left(x^{2}+3y^{2}\right)h^{2}\right]\] \[+\frac{h^{3}}{3}\left(3+h^{2}\right)\left[\mathrm{i}h^{3}\sqrt{ \delta}-\frac{\delta}{4}\left(3+h^{2}\right)\right]p_{1,s} \tag{3.38c}\] \[p_{3,s} =\sqrt{3\delta}h^{3}\sqrt{3+h^{2}}\left[4h^{3}-\mathrm{i}\sqrt{ \delta}\left(3+h^{2}\right)\right]\left[1-\left(x^{2}+3y^{2}\right)h^{2}\right]\] \[-\frac{9h^{3}}{2}\left(3+h^{2}\right)\left[\delta\left(3+h^{2} \right)+4\mathrm{i}h^{3}\sqrt{\delta}\right]p_{1,s}. \tag{3.38b}\]
The first singular fibre, equation (3.38a), is a deformation of order \(h^{2}\) of the original Hamiltonian (3.25). It is possible to check that the quadratic polynomial is not factorisable, i.e. such a singular fibre is of type \(A_{1}\). In the same way, the two cubic curves in equations (3.38b) and (3.38c) do not admit any affine factors, but rather are nodal cubics, i.e. singular fibres of type \(A_{0}\). At infinity, except for the common \(\mathcal{D}\) fibre of type \(A_{1}\), see Remark 2.3, there is no other new singular fibre.
To summarise, with this example we showed that when the continuum cubic Hamiltonian is not factorisable the corresponding KHK discretisation admits, in general, only two singular fibres that factorise into the product of three affine polynomials. Other singular fibres are either the union of a line and a conic or nodal cubics. In particular this means that the complete singular fibre configuration is of type \(A_{2}^{2}\oplus A_{1}^{2}\), i.e. number 40 from [36, Table 8.2]. These considerations underline the differences with the factorisable cases discussed in the previous sections. See Figure 5 for a graphical representation.
### Quadratic irreducible Hamiltonians and non-convex hexagons
In this subsection we consider an example which shows that the base points can be arranged in interesting non-convex hexagonal shapes. The system we consider is the following:
\[H_{2}=\left(x-2\right)\left(x^{2}+y^{2}-1\right), \tag{3.39}\]
which was discussed in [5, Example 1]. Like in the previous example, the cubic Hamiltonian is made of an irreducible quadratic term and a linear factor. The corresponding system of Hamiltonian equations is:
\[\dot{x}=2(x-2)y,\quad\dot{y}=-3x^{2}-y^{2}-4x+1. \tag{3.40}\]
Following the rule (1.2) we have the following KHK discretisation
\[\frac{x^{\prime}-x}{h}=(x-2)y^{\prime}+y(x^{\prime}-2),\quad\frac{y^{\prime}- y}{h}=(-3x+2)x^{\prime}-y^{\prime}y+2x+1. \tag{3.41}\]
possessing the following indeterminacy points:
\[\begin{split} B_{1}&=\left(2,\frac{1}{h}\right),\quad B _{2}=\left(\frac{10h^{3}+\sqrt{\delta_{2}}-6h}{2h\left(4h^{2}-3\right)},-\frac{2 h\sqrt{\delta_{2}}-h^{2}+3}{2h\left(4h^{2}-3\right)}\right),\\ B_{3}&=\left(\frac{10h^{3}+\sqrt{\delta_{2}}-6h}{2 h\left(4h^{2}-3\right)},\frac{2h\sqrt{\delta_{2}}-h^{2}+3}{2h\left(4h^{2}-3 \right)}\right),\quad B_{4}=\left(2,-\frac{1}{h}\right),\\ B_{5}&=\left(\frac{-10h^{3}+\sqrt{\delta_{2}}+6h}{2 h\left(4h^{2}-3\right)},\frac{2h\sqrt{\delta_{2}}+h^{2}-3}{2h\left(4h^{2}-3 \right)}\right),\\ B_{6}&=\left(\frac{-10h^{3}+\sqrt{\delta_{2}}+6h}{2 h\left(4h^{2}-3\right)},\frac{2h\sqrt{\delta_{2}}+h^{2}-3}{2h\left(4h^{2}-3 \right)}\right)\end{split}, \tag{3.42}\]
where
\[\delta_{2}=-3\left(1-h^{2}\right)\left(3-7h^{2}\right). \tag{3.43}\]
Following Remark 2.3 we observe that these base points lie on the conic:
\[3h^{2}x^{2}-h^{2}y^{2}-8h^{2}x+4h^{2}+1=0, \tag{3.44}\]
whose real part represents a hyperbola.
_Remark 3.2_.: Differently from Remark 3.1, \(\delta_{2}\) is positive if and only if \(3/7<h^{2}<1\). This implies that to take the limit \(h\to 0\) we will go through a region where the base points lie in the complex space \(\mathbb{C}^{2}\). However, since the proof of Theorem 2.1 is based on algebraic geometry, we can still apply it. To draw pictures in this subsection we will assume that \(h\) lies within this range, so that the base points are points in the real plane.
In the same way as in the previous section we introduce the following set of lines:
\[\overline{B_{1}B_{2}} =y+\frac{2h\sqrt{\delta_{2}}+7h^{2}-3}{6h(1-h^{2})+\sqrt{\delta_{ 2}}}x-\frac{1}{h}\frac{4\sqrt{\delta_{2}}h^{2}+8h^{3}+\sqrt{\delta_{2}}}{6h(1- h^{2})+\sqrt{\delta_{2}}}, \tag{3.45b}\] \[\overline{B_{2}B_{3}} =x-\frac{10h^{3}+\sqrt{\delta_{2}}-6h}{2h\big{(}4h^{2}-3\big{)}},\] (3.45c) \[\overline{B_{3}B_{4}} =y-\frac{2h\sqrt{\delta_{2}}+7h^{2}-3}{6h(1-h^{2})+\sqrt{\delta_ {2}}}x+\frac{1}{h}\frac{4\sqrt{\delta_{2}}h^{2}+8h^{3}+\sqrt{\delta_{2}}}{6h(1 -6h^{2})+\sqrt{\delta_{2}}},\] (3.45d) \[\overline{B_{4}B_{5}} =y-\frac{2h\sqrt{\delta_{2}}-7h^{2}+3}{6h(6h^{2}-1)+\sqrt{\delta_ {2}}}x+\frac{1}{h}\frac{4\sqrt{\delta_{2}}h^{2}-8h^{3}+\sqrt{\delta_{2}}}{6h(6 h^{2}-1)+\sqrt{\delta_{2}}},\] (3.45e) \[\overline{B_{5}B_{6}} =x-\frac{10h^{3}-\sqrt{\delta_{2}}-6h}{2h\big{(}4h^{2}-3\big{)}},\] (3.45f) \[\overline{B_{6}B_{1}} =y+\frac{2h\sqrt{\delta_{2}}-7h^{2}+3}{6h(h^{2}-1)+\sqrt{\delta_ {2}}}x-\frac{1}{h}\frac{4\sqrt{\delta_{2}}h^{2}-8h^{3}+\sqrt{\delta_{2}}}{6h( h^{2}-1)+\sqrt{\delta_{2}}} \tag{3.45a}\]
We then build the invariant (2.3) as:
\[\widetilde{H}=\frac{\overline{B_{1}B_{2}}\,\overline{B_{3}B_{4}}\,\overline{B _{5}B_{6}}}{\overline{B_{2}B_{3}}\,\overline{B_{4}B_{5}}\,\overline{B_{6}B_{1 }}}. \tag{3.46}\]
Taking the continuum limit \(h\to 0^{+}\) we have:
\[\widetilde{H}=-1-4\mathrm{i}h+8h^{2}+\frac{4}{3}\mathrm{i}(3H_{2}+10)h^{3}+O \big{(}h^{4}\big{)}, \tag{3.47}\]
so we see that we recover, up to the addition of an inessential constant, the continuum first integral (3.39). In computing the limit we used that \(\delta_{2}<0\) when \(h\to 0\).
Like in the previous example we can consider the singular fibres of the pencil associated to \(\widetilde{H}\) (3.46). These singular fibres are again unions of three lines, unions of a line and a conic, and nodal cubics. So, the singular fibre configuration is
again of type \(A_{2}^{2}\oplus A_{1}^{2}\), i.e. number 40 from [36, Table 8.2]. In this case we do not present the explicit expression of these curves since it is rather cumbersome and it does not add any further information.
To summarise, this example adds to the previous one the fact that there exist cases in which the "hexagon" formed by the base points is a non-convex polygon, and that the base points can become complex in a neighbourhood of \(h=0\). In Figure 6 we give a graphical representation of this occurrence.
### The degenerate case: the conic curve
The results of this section do not follow from the general results presented in Theorem 2.1, but rather form an
Figure 6. The non-factorisable case (3.39) with \(h=7/10\): the lines \(\overline{B_{1}B_{2}}\), \(\overline{B_{3}B_{4}}\), and \(\overline{B_{5}B_{6}}\) in red, the lines \(\overline{B_{2}B_{3}}\), \(\overline{B_{4}B_{5}}\), and \(\overline{B_{6}B_{1}}\) in blue. The hyperbola (3.44) is displayed in purple, while the non-convex hexagon formed by the base points is highlighted in cyan.
extension to another case. As will become clearer later, this might be a bridge for further developments of the results presented in this paper to other cases of interest.
In this example we consider the case when the Hamiltonian is the most general quadratic Hamiltonian:
\[H=\frac{1}{2}ax^{2}+bxy+\frac{1}{2}cy^{2}+dx+ey, \tag{3.48}\]
where an additional constant term was omitted because it is inessential for the equation of motion. If the skew-symmetric matrix \(J\) is constant, then the associated Hamiltonian system is _linear_. However, we can impose that the associated Hamiltonian system is quadratic by considering a system of the form:
\[\dot{\mathbf{x}}=G(x,y)J\nabla H(x,y). \tag{3.49}\]
with an affine gauge function \(G\). In a similar way as it was discussed in [13], up to canonical transformations we can always put \(G=x\). So, the associated Hamilton equations are:
\[\dot{x}=x\left(bx+cy+e\right),\quad\dot{y}=-x\left(ax+by+d\right). \tag{3.50}\]
Following rule (1.2) we construct the following discretisation:
\[\frac{x^{\prime}-x}{h}=\left(bx+\frac{c}{2}y+\frac{e}{2}\right)x ^{\prime}+\frac{c}{2}xy^{\prime}+\frac{e}{2}x, \tag{3.51b}\] \[\frac{y^{\prime}-y}{h}=-\left(ax+\frac{b}{2}y+\frac{d}{2}\right) x^{\prime}-\frac{b}{2}xy^{\prime}-\frac{d}{2}x. \tag{3.51a}\]
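As a consistency check, the following minimal sympy sketch (assuming the standard one-step Runge–Kutta form of Kahan's method for quadratic vector fields, \((\mathbf{x}^{\prime}-\mathbf{x})/h=2\,\mathbf{f}((\mathbf{x}+\mathbf{x}^{\prime})/2)-\tfrac{1}{2}(\mathbf{f}(\mathbf{x})+\mathbf{f}(\mathbf{x}^{\prime}))\)) verifies symbolically that this rule, applied to (3.50), reproduces the right-hand sides of (3.51):

```python
import sympy as sp

x, y, xp, yp, h, a, b, c, d, e = sp.symbols('x y xp yp h a b c d e')

def f(u, v):
    """Right-hand side of the continuous system (3.50)."""
    return sp.Matrix([u*(b*u + c*v + e), -u*(a*u + b*v + d)])

# One step of Kahan's method in Runge-Kutta form (valid for quadratic f).
kahan_rhs = 2*f((x + xp)/2, (y + yp)/2) - (f(x, y) + f(xp, yp))/2

# Right-hand sides of the discretisation (3.51) as printed.
rhs_351 = sp.Matrix([(b*x + c*y/2 + e/2)*xp + c*x*yp/2 + e*x/2,
                     -(a*x + b*y/2 + d/2)*xp - b*x*yp/2 - d*x/2])

print((kahan_rhs - rhs_351).applyfunc(sp.expand))   # Matrix([[0], [0]])
```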
An invariant can be constructed following [7]:
\[\widetilde{H}=\frac{H+\Delta_{2}h^{2}x^{2}/8}{1+\Delta_{1}h^{2}x^{2}/4},\quad \Delta_{1}=ac-b^{2},\,\Delta_{2}=-ae^{2}+2bde-cd^{2}, \tag{3.52}\]
since Theorem 1.1 does not apply. The associated pencil is:
\[p(x,y;e_{0}:e_{1}) =e_{0}\left(\frac{a}{2}x^{2}+bxy+\frac{c}{2}y^{2}+dx+ey+\frac{ \Delta_{2}}{8}h^{2}x^{2}\right)\] \[+e_{1}\left(1+\frac{\Delta_{1}}{4}h^{2}x^{2}\right), \tag{3.53}\]
and it has vanishing genus. That is, the curve \(p=0\) is a conic like the level surfaces of \(H\). From Definition 2.2 the singular points of the pencil (3.53) are obtained for:
\[\left[e_{0}^{\prime}:e_{1}^{\prime}\right]=\left[2ch^{2}:e^{2}h^{2}-4\right], \quad\left[e_{0}^{\prime\prime}:e_{1}^{\prime\prime}\right]=\left[2\Delta_{1} :-\Delta_{2}\right]. \tag{3.54}\]
In both cases the pencil factorises as follows:
\[p(x,y;e_{0}^{\prime}:e_{1}^{\prime}) =c\Delta_{1}\ell\left(x,y;-\frac{beh-cdh+2b}{2c},-\frac{eh+2}{ch}\right)\] \[\qquad\cdot\ell\left(x,y;\frac{beh-cdh-2b}{2c},-\frac{eh-2}{ch} \right), \tag{3.55a}\]
\[p(x,y;e_{0}^{\prime\prime}:e_{1}^{\prime\prime}) =c\Delta_{1}\ell\left(x,y;\frac{-b+\sqrt{-\Delta_{1}}}{c},\frac{cd- \left(b-\sqrt{-\Delta_{1}}\right)e}{c\sqrt{-\Delta_{1}}}\right)\] \[\qquad\qquad\cdot\ell\left(x,y;-\frac{b+\sqrt{-\Delta_{1}}}{c}, \frac{cd-\left(b+\sqrt{-\Delta_{1}}\right)e}{c\sqrt{-\Delta_{1}}}\right). \tag{3.55b}\]
On the other hand the indeterminacy points of the map (3.51) are:
\[B_{1} =\left(-\frac{2}{h\sqrt{-\Delta_{1}}},\frac{2b-h\left(be-cd\right) }{ch\sqrt{-\Delta_{1}}}-\frac{eh-2}{ch}\right), \tag{3.56b}\] \[B_{2} =\left(\frac{2}{h\sqrt{-\Delta_{1}}},-\frac{2b+h\left(be-cd\right) }{ch\sqrt{-\Delta_{1}}}-\frac{eh+2}{ch}\right),\] (3.56c) \[B_{3} =\left(-\frac{2}{h\sqrt{-\Delta_{1}}},\frac{2b+h\left(be-cd\right) }{ch\sqrt{-\Delta_{1}}}-\frac{eh+2}{ch}\right),\] (3.56d) \[B_{4} =\left(\frac{2}{h\sqrt{-\Delta_{1}}},-\frac{2b-h\left(be-cd\right) }{ch\sqrt{-\Delta_{1}}}-\frac{eh-2}{ch}\right). \tag{3.56a}\]
This is consistent with the general theory of conic pencils. Considering the lines:
\[\overline{B_{1}B_{2}} =\ell\left(x,y;-\frac{b+\sqrt{-\Delta_{1}}}{c},\frac{cd-\left(b+ \sqrt{-\Delta_{1}}\right)e}{c\sqrt{-\Delta_{1}}}\right), \tag{3.57b}\] \[\overline{B_{2}B_{3}} =\ell\left(x,y;-\frac{beh-cdh+2b}{2c},-\frac{eh+2}{ch}\right),\] (3.57c) \[\overline{B_{3}B_{4}} =\ell\left(x,y;\frac{-b+\sqrt{-\Delta_{1}}}{c},\frac{cd-\left(b- \sqrt{-\Delta_{1}}\right)e}{c\sqrt{-\Delta_{1}}}\right),\] (3.57d) \[\overline{B_{4}B_{1}} =\ell\left(x,y;\frac{beh-cdh-2b}{2c},-\frac{eh-2}{ch}\right), \tag{3.57a}\]
we can write down the invariant (3.52) as:
\[\widehat{H}=\frac{\overline{B_{1}B_{2}}\,\overline{B_{3}B_{4}}}{\overline{B_{ 2}B_{3}}\,\overline{B_{4}B_{1}}}. \tag{3.58}\]
That is, in this case the invariant is _expressible as the ratio of four lines_. These lines are not necessarily pairwise parallel, as is evident from the expression of the angular coefficients in (3.57) and as shown in Figure 7. Moreover, note that the invariant (3.58) is a multiple of (3.52):
\[\widehat{H}=\frac{h^{2}c^{2}}{\Delta_{1}}\widetilde{H}. \tag{3.59}\]
This implies that the continuum limit of \(\widehat{H}\) is the Hamiltonian \(H\) (3.48) at order \(h^{2}\).
## 4. Conclusions
In this paper we have shown how to construct in an elementary way the invariant of the KHK discretisation of a two-dimensional Hamiltonian system.
This construction was possible because of the particular structure of the KHK birational map, as highlighted in [32], and the concept of singular fibre of a pencil of curves. Our main result, Theorem 2.1, tells us that such an invariant can be written down as the product of the ratios of affine polynomials defining the prolongation of the three parallel sides of a hexagon. From this result, in Corollary 2.4, we identified the singular fibre configuration of the generic KHK discretisation of a Hamiltonian cubic system to be of type \(A_{2}^{2}\oplus A_{1}\), i.e. number \(20\) from [36, Table 8.2]. Then, we noticed that Theorem 2.1 enables us to construct the invariant of a KHK discretisation of a Hamiltonian cubic system simply by looking at its indeterminacy points. We presented several examples of this construction.
Figure 7. The pencil of conics (3.53) with parameters \(a=1\), \(b=c=2\), \(d=e=1/2\), and \(h=1/5\). In total \(24\) different combinations of \([e_{0}:e_{1}]\in\mathbb{P}^{1}\) are considered. The lines \(\overline{B_{1}B_{2}}\), \(\overline{B_{3}B_{4}}\) are in red, while the lines \(\overline{B_{2}B_{3}}\), \(\overline{B_{4}B_{1}}\) are in blue.
In particular, in those examples we observed that in some cases the configuration of singular fibres is bigger than in the generic case, presenting examples with singular fibre configurations of type \(A_{2}^{3}\oplus A_{1}\) (number 61) and \(A_{2}^{2}\oplus A_{1}^{2}\) (number 40). In the first case, the additional triple of lines allowed the construction of multiple representations of the invariant in terms of ratios of linearly factorised cubic polynomials. Finally, we showed an example with conic curves which lies outside the hypotheses of Theorem 2.1, but where a similar final result is obtained.
The conic example is built using the ideas of [7], although the conic case was not considered there. That example belongs to the class of the discrete Nahm systems, which are some of the most studied KHK discretisations since their appearance in [27]; see for instance [7, 12, 13, 14, 31, 41] and their interpretation in terms of generalised Manin transforms in [22, 30].
We hope that our result will be useful in shedding light on why integrability is preserved or not preserved by the KHK discretisation, and we hope that it will be possible to extend our result to other known integrable systems, both in the plane and in higher dimensions. Regarding the last topic, we observe that some constructions of integrable systems in three dimensions in which singular fibres play a fundamental role have recently appeared in the literature [1, 11].
## Acknowledgments
This work was made in the framework of the Project "Meccanica dei Sistemi discreti" of the GNFM unit of INDAM. In particular, GG acknowledges support of the GNFM through Progetto Giovani GNFM 2023: "Strutture variazionali e applicazioni delle equazioni alle differenze ordinarie" (CUP_E53C22001930001).
The figures in this paper are EPS files produced in Python using the libraries numpy [15] and matplotlib [19].
## Appendix A Explicit form of the coefficients in eq.(2.7)
Here are the formulas referred to in the statement of Lemma 2.2:
(A.1a) \[d_{1} =b_{1}\mu_{1}\mu_{2}-b_{2}\mu_{2}\mu_{3}+b_{3}\mu_{1}\mu_{3}-b_{4} \mu_{1}\mu_{2}+b_{5}\mu_{2}\mu_{3}-b_{6}\mu_{1}\mu_{3},\] (A.1b) \[d_{2} =-b_{1}\mu_{1}-b_{1}\mu_{2}+b_{2}\mu_{2}+b_{2}\mu_{3}-b_{3}\mu_{1 }-b_{3}\mu_{3}+b_{4}\mu_{1}\] \[+b_{4}\mu_{2}-b_{5}\mu_{2}-b_{5}\mu_{3}+b_{6}\mu_{1}+b_{6}\mu_{3},\] (A.1c) \[d_{3} =b_{1}-b_{2}+b_{3}-b_{4}+b_{5}-b_{6},\] (A.1d) \[d_{4} =b_{1}b_{3}\mu_{1}+b_{1}b_{5}\mu_{2}-b_{2}b_{4}\mu_{2}-b_{2}b_{6} \mu_{3}+b_{3}b_{5}\mu_{3}-b_{4}b_{6}\mu_{1},\] (A.1e) \[d_{5} =-b_{1}b_{3}-b_{1}b_{5}+b_{2}b_{4}+b_{2}b_{6}-b_{3}b_{5}+b_{4}b_{6},\] (A.1f) \[c_{5} =\frac{\begin{pmatrix}b_{1}b_{2}b_{3}b_{5}\mu_{2}\mu_{3}-b_{1}b_{ 2}b_{4}b_{6}\mu_{1}\mu_{2}+b_{1}b_{3}b_{4}b_{5}\mu_{1}\mu_{2}\\ +b_{1}b_{3}b_{5}b_{6}\mu_{1}\mu_{3}-b_{2}b_{3}b_{4}b_{6}\mu_{1}\mu_{3}-b_{2}b_{ 4}b_{5}b_{6}\mu_{2}\mu_{3}\end{pmatrix}}{\Delta},\]
(A.1g) \[c_{6}=-\frac{\begin{pmatrix}b_{1}b_{2}b_{3}b_{5}\mu_{2}+b_{1}b_{2}b_{3} b_{5}\mu_{3}-b_{1}b_{2}b_{4}b_{6}\mu_{1}-b_{1}b_{2}b_{4}b_{6}\mu_{2}\\ +b_{1}b_{3}b_{4}b_{5}\mu_{1}+b_{1}b_{3}b_{4}b_{5}\mu_{2}+b_{1}b_{3}b_{5}b_{6} \mu_{1}+b_{1}b_{3}b_{5}b_{6}\mu_{3}\\ -b_{2}b_{3}b_{4}b_{6}\mu_{1}-b_{2}b_{3}b_{4}b_{6}\mu_{3}-b_{2}b_{4}b_{5}b_{6} \mu_{2}-b_{2}b_{4}b_{5}b_{6}\mu_{3}\end{pmatrix}}{\Delta},\] (A.1h) \[c_{7}=\frac{\begin{pmatrix}b_{1}b_{2}b_{3}b_{5}-b_{1}b_{2}b_{4}b_ {6}+b_{1}b_{3}b_{4}b_{5}\\ +b_{1}b_{3}b_{5}b_{6}-b_{2}b_{3}b_{4}b_{6}-b_{2}b_{4}b_{5}b_{6}\end{pmatrix}}{ \Delta},\] (A.1i) \[c_{8}=\frac{\begin{pmatrix}b_{1}b_{2}b_{3}b_{4}b_{5}-b_{1}b_{2}b_ {3}b_{4}b_{6}+b_{1}b_{2}b_{3}b_{5}b_{6}\\ -b_{1}b_{2}b_{4}b_{5}b_{6}+b_{1}b_{3}b_{4}b_{5}b_{6}-b_{2}b_{3}b_{4}b_{5}b_{6} \end{pmatrix}}{\Delta},\] (A.1j) \[c_{9}=-\frac{\begin{pmatrix}b_{1}b_{2}b_{4}b_{5}b_{6}+b_{1}b_{3}b_ {4}b_{5}b_{6}-b_{2}b_{3}b_{4}b_{5}b_{6}\end{pmatrix}}{\Delta},\]
and
(A.2) \[\Delta=b_{2}b_{4}b_{6}-b_{1}b_{3}b_{5}.\]
## Appendix B Explicit form of coefficients in equation (2.20)
Here are the formulas referred to in Remark 2.4:
(B.1a) \[a_{1}=\frac{b_{25}b_{36}\mu_{12}^{2}\mu_{3}^{3}-b_{14}b_{36}\mu_ {23}^{2}\mu_{1}^{3}+b_{14}b_{25}\mu_{13}^{2}\mu_{2}^{3}}{hD},\] (B.1b) \[a_{2}=\frac{-b_{25}b_{36}\mu_{12}^{2}\mu_{3}^{2}+b_{14}b_{36}\mu_ {23}^{2}\mu_{1}^{2}-b_{14}b_{25}\mu_{13}^{2}\mu_{2}^{2}}{hD},\] (B.1c) \[a_{3}=\frac{b_{25}b_{36}\mu_{12}^{2}\mu_{3}-b_{14}b_{36}\mu_{23} ^{2}\mu_{1}+b_{14}b_{25}\mu_{13}^{2}\mu_{2}}{hD},\] (B.1d) \[a_{4}=\frac{-b_{25}b_{36}\mu_{12}^{2}+b_{14}b_{36}\mu_{23}^{2}-b_ {14}b_{25}\mu_{13}^{2}}{hD},\] (B.1e) \[a_{5}=\frac{\begin{Bmatrix}(b_{1}+b_{4})b_{25}b_{36}\mu_{3}^{2} \mu_{12}^{2}-(b_{2}+b_{5})b_{14}b_{36}\mu_{1}\mu_{23}^{2}\\ -(b_{3}+b_{6})b_{14}b_{25}\mu_{2}\mu_{13}^{2}\end{Bmatrix}}{2hD},\] (B.1f) \[a_{6}=\frac{\begin{Bmatrix}(b_{1}+b_{4})b_{25}b_{36}\mu_{12}^{2}-( b_{2}+b_{5})b_{14}b_{36}\mu_{23}^{2}+(b_{3}+b_{6})b_{14}b_{25}\mu_{13}^{2}\\ 2hD\end{Bmatrix}}{2hD},\] (B.1g) \[a_{7}=\frac{b_{1}b_{4}b_{25}b_{36}\mu_{3}^{2}\mu_{12}^{2}-b_{2}b_ {5}b_{14}b_{36}\mu_{1}\mu_{23}^{2}+b_{3}b_{6}b_{14}b_{25}\mu_{2}\mu_{13}^{2}}{ hD},\] (B.1h) \[a_{8}=\frac{b_{1}b_{4}b_{25}b_{36}\mu_{12}^{2}-b_{2}b_{5}b_{14}b_ {36}\mu_{1}\mu_{23}^{2}+b_{3}b_{6}b_{14}b_{25}\mu_{2}\mu_{13}^{2}}{hD},\] (B.1i) \[a_{9}=\frac{-b_{1}b_{4}b_{25}b_{36}\mu_{12}^{2}+b_{2}b_{5}b_{14}b_ {36}\mu_{23}^{2}-b_{3}b_{6}b_{14}b_{25}\mu_{13}^{2}}{hD},\]
where
(B.2) \[D=\frac{1}{2}b_{14}b_{25}b_{36}\mu_{12}\mu_{13}\mu_{23}\]
and \(b_{ij}=b_{i}-b_{j}\), \(\mu_{ij}=\mu_{i}-\mu_{j}\). These formulas, with \(h=1\), were first presented in Appendix B in [32]. We note that since \(a_{i}=O(1)\), the coefficients \(b_{i}\) and \(\mu_{i}\) depend on \(h\).
## Appendix C Explicit form of the polynomials in equation (2.21)
Here are the polynomials forming the cofactors of the Darboux polynomial in Remark 2.4:
(C.1a) \[P_{1} =\left[\left(b_{4}\mu_{2}-b_{3}\mu_{3}\right)\mu_{12}+\left(b_{5} \mu_{2}-b_{6}\mu_{1}\right)\mu_{23}\right]x\] \[+\left[\left(b_{3}-b_{4}\right)\mu_{12}-\left(b_{5}-b_{6}\right) \mu_{23}\right]y+b_{4}b_{6}\mu_{12}-b_{3}b_{6}\mu_{13}+b_{3}b_{5}\mu_{23},\] (C.1b) \[P_{2} =\left[\left(b_{1}\mu_{1}-b_{2}\mu_{3}\right)\mu_{23}+\left(b_{3} \mu_{3}-b_{4}\mu_{2}\right)\mu_{13}\right]x\] \[-\left[\left(b_{1}-b_{2}\right)\mu_{23}+\left(b_{3}-b_{4}\right) \mu_{13}\right]y-b_{1}b_{4}\mu_{12}+b_{1}b_{3}\mu_{13}-b_{2}b_{4}\mu_{23},\] (C.1c) \[P_{3} =\left[\left(b_{1}\mu_{1}-b_{2}\mu_{3}\right)\mu_{12}+\left(b_{5} \mu_{2}-b_{6}\mu_{1}\right)\mu_{13}\right]x\] \[-\left[\left(b_{1}-b_{2}\right)\mu_{12}+\left(b_{5}-b_{6}\right) \mu_{13}\right]y+b_{1}b_{5}\mu_{12}-b_{2}b_{6}\mu_{13}+b_{2}b_{5}\mu_{23},\] (C.1d) \[Q =\left(b_{14}\mu_{1}\mu_{2}+b_{36}\mu_{1}\mu_{3}-b_{25}\mu_{2} \mu_{3}\right)x^{2}\] \[-\left[\left.b_{14}\left(\mu_{1}+\mu_{2}\right)-b_{25}\left(\mu_{ 2}+\mu_{3}\right)+b_{36}\left(\mu_{1}+\mu_{3}\right)\right]xy\] \[+\left(b_{12}+b_{34}+b_{56}\right)y^{2}\] \[+\left[\left(b_{1}b_{3}-b_{4}b_{6}\right)\mu_{1}+\left(b_{1}b_{5} -b_{2}b_{4}\right)\mu_{2}-\left(b_{2}b_{6}-b_{3}b_{5}\right)\mu_{3}\right]x\] \[-\left(b_{1}b_{3}+b_{1}b_{5}-b_{2}b_{4}-b_{2}b_{6}+b_{3}b_{5}-b_{ 4}b_{6}\right)y+b_{1}b_{3}b_{5}-b_{2}b_{4}b_{6}.\]
|
2304.08339
|
Development of Nb-GaAs based superconductor semiconductor hybrid
platform by combining in-situ dc magnetron sputtering and molecular beam
epitaxy
|
We present Nb thin films deposited in-situ on GaAs by combining molecular
beam epitaxy and magnetron sputtering within an ultra-high vacuum cluster. Nb
films deposited at varying power, and a reference film from a commercial
system, are compared. The results show clear variation between the in-situ and
ex-situ deposition which we relate to differences in magnetron sputtering
conditions and chamber geometry. The Nb films have critical temperatures of
around $9 \textrm{K}$ and critical perpendicular magnetic fields of up to
$B_{c2} = 1.4 \textrm{T}$ at $4.2 \textrm{K}$. From STEM images of the GaAs-Nb
interface we find the formation of an amorphous interlayer between the GaAs and
the Nb for both the ex-situ and in-situ deposited material.
|
Clemens Todt, Sjoerd Telkamp, Filip Krizek, Christian Reichl, Mihai Gabureac, Rüdiger Schott, Erik Cheah, Peng Zeng, Thomas Weber, Arnold Müller, Christof Vockenhuber, Mohsen Bahrami Panah, Werner Wegscheider
|
2023-04-17T15:01:18Z
|
http://arxiv.org/abs/2304.08339v2
|
Development of Nb-GaAs based superconductor semiconductor hybrid platform by combining in-situ dc magnetron sputtering and molecular beam epitaxy
###### Abstract
We present Nb thin films deposited in-situ on GaAs by combining molecular beam epitaxy and magnetron sputtering within an ultra-high vacuum cluster. Nb films deposited at varying power, and a reference film from a commercial system, are compared. The results show clear variation between the in-situ and ex-situ deposition which we relate to differences in magnetron sputtering conditions and chamber geometry. The Nb films have critical temperatures of around 9 K and critical perpendicular magnetic fields of up to \(B_{c2}=1.4\) T at 4.2 K. From STEM images of the GaAs-Nb interface we find the formation of an amorphous interlayer between the GaAs and the Nb for both the ex-situ and in-situ deposited material.
niobium, dc magnetron sputtering, semiconductor superconductor hybrid materials, in-situ superconductor growth
## I Introduction
Superconductor (SC) semiconductor (SE) hybrid (SSH) devices have re-emerged [1; 2; 3] fueled by the hope of finding anyons in solid state systems and their subsequent application for fault tolerant quantum computing [4; 5; 6; 7; 8]. Most notably the search for the Majorana Fermion in solid state systems has attracted attention [9; 10]. This promoted numerous experiments in Andreev interaction with Quantum Hall states [11; 12; 13] and topological superconductivity [14; 15].
The achievement of epitaxial growth of thin film Al on III-V SEs [16; 17] sparked experiments in SSH devices [18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31; 32; 33; 34]. The crucial element of material synthesis is the in-situ deposition, which enables an undisturbed SC-SE combination, crucial for an interface transparent to electron transport [17]. The formation of sub-gap states and an electrostatic barrier, degrading the performance of the hybrid system, is associated with the surface oxide formed when the SC is deposited ex-situ [18].
Therefore, nanowires and two dimensional electron systems (2DES) based on InAs and InSb with an epitaxial Al layer have become the established material platform exhibiting a pronounced proximity effect [17; 18; 35]. Furthermore, Al is typically available in molecular beam epitaxy (MBE) systems owing to its use in III-V semiconductor growth.
The superconducting properties of epitaxial Al films limit the temperature and magnetic field range of Al-based SSH experiments to \(T_{c}\) of around 1.6 K at film thicknesses between 5 nm and 10 nm [20; 25; 27; 28; 19]. The reported perpendicular critical fields \(B_{c2}\) range from 30 mT [21] up to 164 mT [28] at dilution fridge temperatures.
The search for an alternative to Al is the subject of a multitude of recent studies [35; 36; 37; 38; 39; 40; 41]. A wide range of elemental superconductors has been deposited onto nanowires including Pb [37; 42], In [43], Ta [36], V [44] and Sn [38; 42]. Nb is of particular interest [35; 36; 40] as it has the highest bulk critical temperature and magnetic field of all the elemental SCs [45]. Pb appears to be the best alternative so far [37; 42] owing to its favourable lattice match to InAs [46] and relatively high Tc [37].
Exciting research proposals [47; 48; 49] call for building increasingly complex SSH devices and networks. In this application lithographically patterned 2DES-SC SSHs represent a promising approach [50]. The 2DES in InAs [32] and InSb [51] can be grown to reasonably high mobilities but lag far behind 2DES based on GaAs [52]. The drawback of GaAs is the \(\Phi_{B}>0.7\) eV Schottky barrier [53], which is expected to suppress the proximity effect [54]. Nonetheless, induced superconducting gaps in bulk n-GaAs employing in-situ deposited Al have been measured [55; 56]. In this context our recently developed shallow GaAs 2DESs [52] pose an interesting, unexplored potential for SSHs.
The interaction between the SE and SC is not limited to the proximity effect. A type-II superconductor can shape the magnetic field in the SE underneath via its vortices [57], forming the basis of exciting experimental proposals [58; 59; 60]. Vortex interaction mediated experiments have been previously attempted [2; 61], most notably by Geim et al. [62; 63]. The authors investigated Pb on a GaAs 2DES and concluded that a SC with a small vortex is needed together with a low electron density 2DES as close as possible to the surface [64]. These requirements are rooted in a geometrical argument, considering the size of the magnetic field variation at the depth of the 2DES versus the size of the quasiparticle in the 2DES as a function of magnetic field. The small vortex size [65] can be achieved in Nb, and the lowest electron densities have been reached in (Al)GaAs based 2DES [66].
In this work, we present the first results from our chamber for DC magnetron sputtering of SC on our MBE grown III-V SEs without breaking the vacuum. In the initial experiment, we compare Nb deposited in-situ at varying power in the UHV dc magnetron sputtering system and ex-situ deposited in a commercial system (AJA Int.). The samples are of high purity but display significant differences in surface roughness and crystallite orientation which can be related to the growth regime.
We compare the superconducting properties of the Nb film, by investigating the resistive transition as a function of temperature and magnetic field. STEM images of the Nb-GaAs interface reveal an amorphous interlayer at the interface for both the in-situ and ex-situ depositions.
## II MBE and magnetron sputtering cluster
The layout of the UHV cluster, consisting of two molecular beam epitaxy (MBE) machines and the SC deposition system, is presented in fig.1 a). The first MBE chamber is optimized for high mobility 2DES in (Al)GaAs [67; 68; 69; 70; 52] while the other covers a wider range of III-V materials based on As and Sb [71; 72; 73; 74; 75; 76]. The UHV magnetron sputtering chamber is connected via a UHV tunnel to enable in-situ deposition of SCs on MBE grown SEs as well as preventing contamination of the MBE systems.
The incorporation of oxygen in superconducting films is generally believed to have a detrimental effect on the superconducting properties such as the critical temperature [77]. Therefore, the system was designed to minimize contaminants, specifically the incorporation of oxygen. The UHV magnetron sputtering chamber is supplied with purified gas and solely pumped by a cryo pump in order to obtain elementally clean films, see supplementary for details [78]. After bakeout the system achieved the mass spectra presented at the bottom of fig.1 b). The black line represents the pumped state at \(p<1\times 10^{-10}\) mbar while the red line at \(p=1\times 10^{-9}\) mbar was taken 18 hrs after the deposition of Nb. The pressure is dominated by peaks from the different ionization states of Ar and its isotopes (36, 38). A peak associated with water, three orders of magnitude smaller than the Argon peaks, can be identified after deposition, as indicated by the arrow. Continued use of the chamber reduced the water peak below the detection limit and therefore we assume that it originated from residual water in the gas lines.
The kinetic rather than thermal nature of magnetron sputtering offers an alternative path to the evaporation for SC deposition on SEs. The film growth via evaporation is primarily controlled by substrate temperature and rate [79]. In order to produce connected films of low melting point elements such as In, Pb or even Al on SE surfaces, the substrate has to be typically cooled below room temperature [17; 37; 80; 81] adding technical complexity. SC with higher melting points such as Ta and
Figure 1: a) Layout of the UHV cluster, consisting of two MBE chambers used for semiconductor growth and the magnetron sputtering chamber for superconductor deposition. b) Mass spectrum of the superconductor deposition chamber. The red line indicates a measurement 18hrs after a Nb deposition while the black line is a measurement taken after pumping the system for a week. The arrow indicates the minute water peak that appears after deposition.
Nb can be grown at higher substrate temperatures [40]. However, to evaporate these low vapor pressure metals [82], they have to be heated to high temperatures, causing the chamber walls to release contamination and heating up the substrate surface. Magnetron sputtering, on the other hand, is a comparatively cold deposition method [83].
The method is additionally appealing due to the possibility to grow nitrides, the simple exchange of target materials, the wide variety of compounds accessible from mixed targets and co-sputtering, as well as a moderate pressure during deposition which limits outgassing [83]. This opens up the possibility to deposit a wide range of compound materials such as A15 and B1 phase SCs as well as more exotic variants like MgB\({}_{2}\) [84].
### Sample preparation
Both the ex-situ and in-situ Nb films were deposited onto 720 nm of MBE grown n\({}^{++}\) GaAs. The ex-situ wafer was removed from the MBE chamber after growth. Before loading it into the AJA magnetron sputtering system the wafer was etched in a 1:1 solution of HCl (32 %):H\({}_{2}\)O at temperature until the surface was hydrophilic to remove the oxide. The wafer was then transferred in air to the load lock within 5 min. The in-situ wafer was moved in our UHV tunnel from the MBE chamber to the UHV magnetron sputtering chamber under a residual pressure of \(<5\times 10^{-9}\) mbar.
### Nb depositions
In order to investigate the possible operating conditions in our sputtering system, a characterisation of the Nb sputtering rate for various power and pressure combinations was made. This exploration served as a starting point to determine which sputtering conditions could be compared to the commercial AJA system and could yield films with good superconducting parameters.
The dependence of the Nb deposition rate on pressure and set power for our system was investigated using a quartz crystal balance that can be moved into the wafer position; the results are presented in fig.2. The rate increases with pressure up to 20 µbar, at which point the Nb growth is limited by diffusion in the Ar gas. With increasing pressure the voltage decreases and the current increases, as expected from a denser and more conductive plasma. The rate is linearly dependent on the set power at a given pressure.
The deposition in our system with 2 inch UHV magnetrons from Angstrom Sciences is controlled by pressure, power and substrate heating. The parameters used in this study are summarised in table 1. The guns are mounted such that we can vary the substrate-target distance under a fixed angle of 32\({}^{\circ}\) and the substrate is not rotated. For this study we chose to fix the distance at the minimum of 110 mm. The commercial system employs a 4 inch target 100 mm away from the substrate in a planar orientation and a constant substrate rotation. The pressure was chosen such that the pressure-distance product is 1 µbar m for both setups.
Due to the difference in target size between the systems it is not possible to attain the same rate at the same current and voltage values for both setups. The ex-situ system can attain a low voltage of 214 V at a high rate while the in-situ machine is limited to 404 V before the plasma becomes unstable at 9 µbar.
## III Structural and Elemental Analysis
### Afm
Fig.3 shows the surface morphologies of the Nb films measured by AFM. All samples have randomly distributed elongated grains, roughly 100 nm long and 20 nm wide. The elongated grains are not oriented with respect to the substrate or the source. Since randomly oriented elongated Niobium grains are also observed on a silicon
Figure 2: a) Current and b) Nb deposition rate dependence on pressure. The voltage is indicated by the color of the dot and the scale bar. The working conditions for the presented in-situ films are indicated by the larger dots.
substrate by Imamura et al. [86], a direct relationship between this effect and the GaAs substrate seems unlikely.
The root mean square roughness values obtained from the AFM data using Gwyddion [87] are listed in the last column of table 1. The in-situ films are distinctively rougher than the ex-situ film with little difference between the in-situ films.
### Xrd
XRD measurements were performed with a PANalytical X'PERT PRO MPD diffractometer in Bragg-Brentano reflection geometry and Cu K\(\alpha_{1}\) radiation. The measurements of the Nb films are plotted in fig.4. The data was acquired under a 2\({}^{\circ}\) offset of the sample tilt relative to the symmetric geometry to minimize the signal from the GaAs substrate. The signal from the (001) oriented GaAs substrate still appears as a broad background centered at 66.1\({}^{\circ}\), which comes from thermal diffuse scattering. Apart from the substrate signals, no significant differences are observed between the measurements with and without offset.
Comparison of our XRD data with literature [88; 89] shows that our in-situ Nb films are missing the reflections associated with the (211) orientation parallel to the substrate surface [88; 89]. The ex-situ film, on the other hand, only shows the 110 and 220 reflections, indicating that the crystallites making up the uniform film have a preferential orientation of the (110) planes with respect to the substrate.
The relative peak heights of the 110 family of reflections and the 200 reflection vary between the in-situ samples, while the 310 signal appears unchanged. Depending on the deposition voltage, the distribution between (110) and (200) oriented crystallites appears to vary. However, it is not a simple trend, as the largest variation is observed for the in-situ - B sample.
### Elemental analysis
Rutherford Back Scattering (RBS) measurements of the Nb films were undertaken. The RBS was performed using 2 MeV Helium ions under a back-scattering angle of 167.5\({}^{\circ}\). The particle induced X-ray emission (PIXE) from the sample was measured in parallel. No significant contamination of the Nb film could be conclusively detected within the capabilities of RBS and PIXE, see supplementary for details [78].
### Discussion
The observed structural variations between the in-situ and ex-situ films could be related to different deposition parameters, namely the voltage, the substrate surface, and/or geometric differences between the two sputtering systems.
It appears that the large kinetic energy differences related to the different powers used for the in-situ samples do not make a significant difference in their structure that could be identified with the implemented characterization methods. A step up in deposition voltage between the in-situ and ex-situ samples does exist. However, we observed very little structural change when changing the voltage from 404 V to 708 V for samples in-situ - A, B and C. Given that both systems work at the same pressure-distance product, this does not point to the deposition voltage as the root cause for our observed structural differences.
To aid in understanding our findings, we bring our deposition conditions into context with those from published literature, which have been appended to table 1.
Dobrovolskiy et al. [85] have reached the Stranski-Krastanov growth regime [83] and produced epitaxial Nb films at 850\({}^{\circ}\)C on Al\({}_{2}\)O\({}_{3}\). The key difference between the reference film by Dobrovolskiy et al. and our material is the Al\({}_{2}\)O\({}_{3}\) substrate, which not only has a favourable lattice match but also allows for the required high substrate temperature. The high quality clean limit film listed in table 1 has been grown at the same rate as our ex-situ and in-situ - C samples. In terms of voltage-current conditions the reference film is comparable to our in-situ - A sample, which suggests that the authors had a smaller substrate-target distance to achieve a ten-fold higher rate at roughly half the pressure. The roughness increased with elevated growth rate corresponding to a
| sample | substrate | p (µbar) | T\({}_{sub}\) (\({}^{\circ}\)C) | voltage (V) | J (mA cm\({}^{-2}\)) | rate (Å s\({}^{-1}\)) | R\({}_{sq}\) (nm) |
|---|---|---|---|---|---|---|---|
| ex-situ | GaAs | 10 | RT | 214 | 24.2 | 5.0 | 0.95 |
| in-situ - A | GaAs | 9 | RT | 404 | 3.7 | 0.6 | 1.84 |
| in-situ - B | GaAs | 9 | RT | 582 | 10.6 | 2.7 | 1.75 |
| in-situ - C | GaAs | 9 | RT | 708 | 17.4 | 5.0 | 2.03 |
| [85] B2 | Al\({}_{2}\)O\({}_{3}\) | 4 | 850 | 312 | 2.5 | 5.0 | 0.7 |
| [86] | Si | 23 | RT | 270 | 74 | 27.3 | |

Table 1: comparison of dc sputtering parameters p - pressure, T\({}_{sub}\) - substrate temperature, voltage, J - current density and rate with the resulting root mean square roughness R\({}_{sq}\) obtained from the AFM measurements in fig.3. The bottom lines are comparable films from literature, see text.
larger voltage, which could either be due to the kinetic energy of the arriving species or to adatoms that did not have enough time to reach a kink site.
Imamura et al. [86] report similarly elongated grains on their Nb films deposited on Si at room temperature. The distance-pressure product employed by Imamura et al. is close to ours at 1.3 µbar m, which could explain the close resemblance. The authors find that with reducing pressure, or equivalently increasing voltage, the width of the XRD peaks reduces (therefore the crystallite size increases) and the surface becomes smoother. Specifically, the strain went from tensile to compressive at 270 V, which corresponds to the point at which crystallite size and roughness did not change anymore. Our in-situ samples deposited at voltages \(>404\) V appeared to follow this in that roughness and crystallite size did not change.
The random crystallite orientation seen in the XRD in fig.4 for the in-situ material suggests that the Nb does not find a preferential orientation with the substrate. The in-situ material presents the clean GaAs surface reconstruction while the ex-situ material was exposed to air. It is therefore unlikely that the XRD findings originate from the Nb-GaAs interface.
The film growth as we understand it has been discussed in detail by Monti et al. [90]. Randomly nucleated crystallites grow in the direction of the facet that incorporates new material at the highest rate. Which facets grow is determined by the surface energy of the specific facet, the adatom mobility on the surface, the direction and rate of the arriving Nb. This will result in the crystallites with
Figure 4: XRD measurements using a PANalytical X'PERT PRO MPD diffractometer in Bragg-Brentano reflection geometry and Cu K\(\alpha_{1}\) radiation. Tails of the GaAs 001 reflection appear as a broad background at 66.1\({}^{\circ}\), while the sharp contributions at this position are from small, slightly misoriented substrate crystallites, likely from the cut edge. a) is the ex-situ sample while b) shows traces from the in-situ samples, vertically offset for clarity.
Figure 3: Topography maps of the surface of the in-situ deposited Nb films A, B and C and the ex-situ deposited Nb film. Maps acquired from 2x2 μm\({}^{2}\) areas are shown in the left panels and from 500x500 nm\({}^{2}\) areas in the right panels.
favourable orientation to outgrow and terminate neighbouring crystallites until only dendrites of one orientation remain.
The fact that the ex-situ sample only shows the (110) family of crystal orientations parallel to the substrate surface could therefore originate from a fast saturation of the termination process. The (110) facets thus grow the fastest and terminate their neighbours faster in the ex-situ system than in the in-situ system. The in-situ samples, on the other hand, have not reached the point at which slower growing crystallites are buried, resulting in a rougher surface and more crystallite orientations appearing in the XRD. The (211) oriented crystallites have been buried early on in both systems and do not appear at all in the XRD data.
## IV Nb superconducting properties
The critical temperature \(T_{c}\) and critical magnetic field \(B_{c2}\) of the Nb films limit the measurement range of SSH devices. \(B_{c2}\) is the field applied perpendicular to the film at which the resistive transition occurs, termed the upper critical field associated with type-II superconductors [91]. Knowledge of the resistive superconducting transition can additionally be used to estimate the coherence length of the Cooper pairs. When compared to the mean free path of the electrons \(\ell\) the coherence length determines whether the superconducting film is in the clean or dirty limit [92].
In the context of proximity induced superconductivity the coherence length appears in the pair breaking parameter indicating that a larger coherence length enhances the pair breaking within the superconductor towards the interface [93].
Van der Pauw structures were made and measured using standard lock-in techniques to determine the sheet resistance as a function of temperature and magnetic field \(R(T,B)\), see supplementary for details [78]. The result of measuring the resistive transitions at a constant temperature, while sweeping the magnetic field, is given in fig 5.
The extracted \(B_{c2}(T)\) values are significantly higher than expected for clean limit films, which have \(B_{c2}(0)\) values of 1 T or less [85; 94]. It has been reported that dirty limit films have increased critical fields [85; 94] while the critical temperature is lower. The large critical fields of our films indicate that these are in the dirty limit, with the ex-situ sample being closest to the clean limit. The results shown in fig.5 indicate that, even for room temperature Nb depositions, it is possible to obtain films with fewer structural defects by changing sputtering parameters such as power and voltage.
The target film thickness was chosen to be 100 nm such that the superconducting critical temperature [95] and critical field [96] of Nb are not expected to significantly change with variations in thickness [97; 98]. The thicknesses are determined from QCM deposition rates and time with the in-situ - B sample being slightly thinner at 90 nm.
### Mean free path
Estimating \(\ell\) from the Drude formula [99]
\[\rho\ell=\frac{m^{*}}{e^{2}}\frac{v_{F}}{n} \tag{1}\]
requires knowledge of the normal state resistivity \(\rho\), the carrier density \(n\) and the effective mass \(m^{*}\) at a temperature just above the superconducting transition. Ideally, each of these parameters is determined from a separate measurement. However, it is common [100; 85; 101] for Nb films to estimate \(\ell\) just from \(\rho\) using the expression by Mayadas et al. [102], given as
\[\rho\ell=3.72\times 10^{-6}\,\mu\Omega\,\mathrm{cm}^{2}. \tag{2}\]
| sample | \(T_{c}\) (K) | \(\rho\) (µΩ cm) | \(\ell\) eq.2 (nm) | \(B_{c2}(0)\) (T) | \(\xi_{GL}(0)\) (nm) | \(\ell\) eq.8 (nm) |
|---|---|---|---|---|---|---|
| ex-situ | 9.2 | 47.8 | 0.8 | 1.48 | 15 | 2.3 |
| in-situ - A | 9.0 | 32.7 | 1.1 | 1.59 | 14 | 2.1 |
| in-situ - B | 9.0 | 43.8 | 0.8 | 1.73 | 14 | 1.9 |
| in-situ - C | 8.9 | 71.7 | 0.5 | 1.83 | 13 | 1.8 |
| [85] B2 | 9.1 | 0.45 | 83 | 0.7 | 22 | |

Table 2: superconducting properties of the Nb films. The film thicknesses are 100 nm from QCM rates, with the exception of in-situ - B which is 90 nm. The reference film by Dobrovolskiy et al. [85] is 52 nm thick. The normal state resistivity \(\rho\) was measured at 10 K, just above the critical temperature \(T_{c}\). \(\ell\) is the mean free path determined using \(\rho\) in eq.2. \(\xi_{GL}(0)\) is obtained when using the GL eq.4. The critical perpendicular magnetic field at zero temperature \(B_{c2}(0)\) is estimated from the WHH expression in eq.5. The mean free path \(\ell\) in the last column is arrived at from the GL expression in eq.8.
Figure 5: The resistive superconducting transition as a function of temperature and perpendicular magnetic field for the material deposited ex-situ and the films A, B and C deposited in-situ at a rate of 0.6, 2.7 and 5 Å s\({}^{-1}\), respectively.
The normal state resistivity of the sample is calculated from the measured resistance at \(10\,\mathrm{K}\) and zero applied magnetic field using [103]
\[\rho=\frac{\pi t}{\ln\left(2\right)}R(10\,\mathrm{K},0\,\mathrm{T}) \tag{3}\]
where \(t\) is the Nb film thickness determined from the QCM rate and deposition time. The extracted \(\ell\) values compare well with published values [85; 100; 104; 105; 98] and are listed in the fourth column of table 2.
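As a purely illustrative numerical check (not part of the original analysis), the following Python sketch applies eq.2 to the 10 K resistivities quoted in table 2 and reproduces the listed mean free paths; the helper implementing eq.3 is included only to show how \(\rho\) follows from the measured sheet resistance, since the raw \(R_{sq}\) values are not tabulated here.

```python
# Mean free path from the normal-state resistivity via the Mayadas value
# rho*l = 3.72e-6 uOhm cm^2 (eq. 2).  Resistivities are the 10 K values of
# table 2 in uOhm cm; the printed values match the table (0.5-1.1 nm).
import math

RHO_ELL = 3.72e-6  # uOhm cm^2, eq. (2)

def resistivity_from_sheet(R_sq_ohm, t_cm):
    """Eq. (3): van der Pauw resistivity rho = pi*t/ln(2) * R_sq (in Ohm cm).
    Shown for completeness; the measured sheet resistances are not tabulated."""
    return math.pi * t_cm / math.log(2) * R_sq_ohm

rho_10K = {"ex-situ": 47.8, "in-situ - A": 32.7, "in-situ - B": 43.8, "in-situ - C": 71.7}

for sample, rho in rho_10K.items():
    ell_nm = RHO_ELL / rho * 1e7      # cm -> nm
    print(f"{sample:12s}  ell = {ell_nm:.1f} nm")
```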
### Ginzburg Landau coherence length
The Ginzburg-Landau (GL) coherence length \(\xi_{GL}(T)\), which denotes the characteristic length scale over which the order parameter varies, is arrived at via the upper critical field
\[B_{c2}(T)=\frac{\phi_{o}}{2\pi\xi_{GL}^{2}(T)}. \tag{4}\]
where \(\phi_{o}=\frac{h}{2e}\) is the magnetic flux quantum in type-II superconductors.
To obtain a value for \(\xi_{GL}(0)\) the zero temperature critical field has to be estimated. The resistive transition \(R(T,B)\) from fig.5 is not linear down to zero temperature and cannot be simply extrapolated. Werthamer, Helfand and Hohenberg (WHH) [106] arrived at a relevant theory taking into account non-magnetic impurities, spin paramagnetism and spin-orbit scattering at high fields based on initial results by Maki [107]. The relevant result from WHH has been presented by Gurevich et al. [108] as
\[B_{c2}(0)=0.69T_{c}\left.\frac{dB_{c2}}{dT}\right|_{T=T_{c}}. \tag{5}\]
Applying this theory produces the values for \(B_{c2}(0)\) and \(\xi_{GL}(0)\) presented in table 2. The GL coherence length compares well with previously reported values [109; 85; 110] and the expected \(B_{c2}(0)\) are one order of magnitude larger than what has been achieved with thin Al films [28].
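The conversion from \(B_{c2}(0)\) to \(\xi_{GL}(0)\) is a one-line computation. The short Python sketch below (an illustration added here, not part of the original analysis) inverts eq.4 for the \(B_{c2}(0)\) values of table 2 and reproduces the quoted coherence lengths; eq.5 is included as a helper for reference.

```python
# GL coherence length from the WHH-extrapolated critical field (eqs. 4-5).
import math

PHI_0 = 2.067833848e-15   # magnetic flux quantum h/(2e), in Wb

def bc2_zero_whh(Tc, slope_at_Tc):
    """Eq. (5): B_c2(0) = 0.69 * T_c * |dB_c2/dT| at T_c (reference only)."""
    return 0.69 * Tc * abs(slope_at_Tc)

def xi_gl_zero(Bc2_0):
    """Eq. (4) inverted: xi_GL(0) = sqrt(phi_0 / (2 pi B_c2(0))), in metres."""
    return math.sqrt(PHI_0 / (2 * math.pi * Bc2_0))

bc2_table = {"ex-situ": 1.48, "in-situ - A": 1.59, "in-situ - B": 1.73, "in-situ - C": 1.83}

for sample, Bc2_0 in bc2_table.items():
    print(f"{sample:12s}  xi_GL(0) = {xi_gl_zero(Bc2_0) * 1e9:.0f} nm")  # 13-15 nm, cf. table 2
```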
### Bardeen Cooper Schrieffer coherence length
Although eq.2 is an established method, there are critiques relevant in the context of thin polycrystalline films [111; 112; 113]. It is therefore warranted to sanity check the consistency of eq.2 with the Ginzburg-Landau-Abrikosov-Gor'kov (GLAG) theory [114]. It connects the findings of GL and Bardeen-Cooper-Schrieffer (BCS) for dirty limit films, giving a relation between \(\xi_{GL}(T)\) and \(\xi_{o}\) [92] near \(T_{c}\) as
\[\xi_{GL}(T)=0.85\sqrt{\ell\xi_{o}}\left(1-\frac{T}{T_{c}}\right)^{-\frac{1}{2}} \tag{6}\]
where \(\xi_{o}\) is the BCS coherence length at zero temperature.
The BCS theory defines the \(\xi_{o}\) as the average distance between the two electrons making up a Cooper pair determined by the uncertainty principle [114]. It reads
\[\xi_{o}=\frac{\hbar v_{F}}{\pi\Delta(0)} \tag{7}\]
where \(\Delta(0)=1.764k_{B}T_{c}\) is the zero temperature BCS gap and \(v_{F}\) the Fermi velocity. Mayadas et al.[102] arrive at \(v_{F}=0.62\times 10^{8}\,\mathrm{cm}\,\mathrm{s}^{-1}\) in their derivation of eq.2. Using this value we obtain \(\xi_{o}\) between \(93\,\mathrm{nm}\) for the ex-situ film and \(96\,\mathrm{nm}\) for the in-situ - C film.
Combining eqs. 4 and 6 produces
\[B_{c2}(T)=\frac{\phi_{o}}{2\pi}\frac{1}{0.7225\ell\xi_{o}}\frac{T_{c}-T}{T_{c}}. \tag{8}\]
which expresses the upper critical field as a linear function of temperature near \(T_{c}\).
Our data in fig.5 is indeed linear which allows us to extract the \(\ell\) from eq.8 with \(\xi_{o}\) given by eq.7. The resulting mean free paths are listed in the last column of table 2. The values are close to the results from eq.2 listed in table 2 and follow the same trend.
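For completeness, this consistency check can be written out numerically. The Python sketch below is an illustration added here: it reconstructs the slope of \(B_{c2}(T)\) from the tabulated \(B_{c2}(0)\) and \(T_{c}\) via eq.5, evaluates \(\xi_{o}\) from eq.7 with the Fermi velocity of Mayadas et al., and recovers the mean free paths of the last column of table 2 from eq.8.

```python
# Consistency check of eqs. (5), (7) and (8): xi_0 and the dirty-limit mean free path.
import math

HBAR  = 1.054571817e-34   # J s
KB    = 1.380649e-23      # J/K
PHI_0 = 2.067833848e-15   # Wb
V_F   = 0.62e8 * 1e-2     # Fermi velocity of Mayadas et al.: 0.62e8 cm/s -> m/s

def xi_bcs(Tc):
    """Eq. (7): xi_0 = hbar v_F / (pi Delta(0)) with Delta(0) = 1.764 k_B T_c."""
    return HBAR * V_F / (math.pi * 1.764 * KB * Tc)

def ell_from_eq8(Tc, Bc2_0):
    """Invert eq. (8) for ell, using the slope |dB_c2/dT| = B_c2(0)/(0.69 T_c) from eq. (5)."""
    slope = Bc2_0 / (0.69 * Tc)
    return PHI_0 / (2 * math.pi * 0.7225 * xi_bcs(Tc) * Tc * slope)

samples = {"ex-situ": (9.2, 1.48), "in-situ - A": (9.0, 1.59),
           "in-situ - B": (9.0, 1.73), "in-situ - C": (8.9, 1.83)}

for name, (Tc, Bc2_0) in samples.items():
    print(f"{name:12s}  xi_0 = {xi_bcs(Tc)*1e9:.0f} nm   ell = {ell_from_eq8(Tc, Bc2_0)*1e9:.1f} nm")
# xi_0 ranges from 93 nm (ex-situ) to 96 nm (in-situ - C); ell matches the last column of table 2.
```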
### Discussion
Despite the observed structural changes, the critical temperatures of our Nb films vary only by a few \(100\,\mathrm{mK}\). The \(T_{c}\) of the epitaxial clean limit film from Dobrovolskiy et al. listed for comparison in table 1 falls within the range of our results.
The resistivities of the in-situ samples increase going down the table correlating with the deposition voltage from table 1. The upper critical field increases significantly going down the table for all samples. Higher resistivities indicate structural degradation which in turn presents more pinning sites for vortices increasing the critical field [115; 116]. A significant structural change in AFM and XRD data has only been observed between the ex-situ and in-situ samples but not between in-situ samples. Thus the electrical measurements appear to be more sensitive to structural changes than the AFM and XRD analysis.
Although the ex-situ material is smoother and has a preferred crystallite orientation perpendicular to the surface it does not have the lowest resistivity, but a lower critical field and higher critical temperatures. The superconducting properties, AFM and XRD data all indicate a more homogeneous film which should result in a lower resistivity as exemplified by the reference film by Dobrovolskiy et al. listed in table 2. An explanation could be that the ex-situ film is thinner than we expect.
The resistivity of the clean limit reference film from Dobrovolskiy et al. is two orders of magnitude smaller
despite the film being half as thick. The additional effect of surface scattering on the resistivities of thin films [111; 112] does not appear to be significant down to the 52 nm thickness of the reference film. The large resistivities in our dirty limit films therefore do originate from the structural differences.
The mean free path listed in table 2, compared to the extracted coherence lengths, confirms that our films are in the dirty limit. The critical field of the clean limit reference film in table 2 is 0.7 T. Reducing the power, and hence the voltage, for our in-situ depositions brings us closer to the clean limit. However, at the given pressure-distance product of 1 µbar m, we are limited to the conditions of in-situ - A as the lowest power that results in a stable plasma.
## V Nb-GaAs interface
ADF STEM images presented in fig.6 were taken of the Nb-GaAs interface before and after annealing at 380 \({}^{\circ}\)C for 40 s, which is the annealing recipe for ohmic AuGeNi contacts to n\({}^{++}\) GaAs [117]. An amorphous interlayer can be seen for both the in-situ - B and ex-situ samples. The thicknesses of the amorphous interlayer indicated in the images were determined by comparing the normalised brightness, see supplementary for details [78]. The ex-situ amorphous interface, initially thicker than the in-situ one, is unaffected by the heat, while the in-situ material shows an increased thickness after tempering.
The amorphous interface thickness does not increase when annealed for the ex-situ sample, while for the in-situ sample it increases by 1 nm. The Nb-GaAs interface is expected to be sharp up to 600 \({}^{\circ}\)C for ex-situ deposited films [118]. However, the quality of the TEM images presented by Ding et al. [118] appears to be the limiting factor in the comparison and the interlayer cannot be resolved. Nb deposition on clean GaAs surfaces via magnetron sputtering has been reported [119; 120] but is lacking a structural investigation of the interface.
The best reference for in-situ Nb are evaporated films on InAs nanowires, which similarly show an amorphous interlayer [35; 36; 40]. The origin of the interlayer is attributed to the formation of a Nb\({}_{x}\)As\({}_{y}\) compound, which would explain our findings.
## VI Conclusion
We have investigated both the structural and superconducting properties of Nb films in-situ deposited on GaAs in a newly designed magnetron sputtering chamber connected to an UHV MBE cluster. An exceptionally high purity of the material was achieved by designing a UHV compatible gas supply system, as proven by the residual gas analysis of the chamber and elemental analysis of the resulting Nb films.
The structural analysis via AFM of the Nb films revealed a marked difference in surface roughness between our in-situ samples and a reference ex-situ sample, deposited in a commercial system. Varying the deposition power for the in-situ samples had little effect on the surface. XRD measurements further support that difference, with the ex-situ sample having only the (110) crystallite orientation with respect to the substrate surface. The in-situ samples all showed (110), (200) and (310) oriented crystallites, despite significant differences in deposition voltages. This difference in crystallinity is attributed to the nucleation and growth of individual crystallites in the polycrystalline film.
Measuring the superconducting resistive transition showed that the films are in the dirty limit with excellent \(T_{c}\) and \(B_{c2}(0)\) values. The \(T_{c}\) values did not vary significantly despite a steady increase of \(B_{c2}(0)\) with deposition voltage, associated with structural degradation. The in-situ method produces pure films, as underpinned by the RBS and PIXE measurements.
The \(B_{c2}(0)\) values reflected subtle structural differences between the in-situ films. With increasing deposition voltage the critical field increased, indicating that an increased power does have an effect, although we did not resolve a trend in film structure by AFM or XRD.
All the investigated Nb - GaAs interfaces exhibited an
Figure 6: ADF STEM of the annealed and as deposited in-situ a) and ex-situ b) Nb-GaAs interface. See supplementary for complete images and discussion of how the interface widths were determined [78].
amorphous interface layer. Tempering the samples to 380 \({}^{\circ}\)C widens the amorphous layer for the in-situ - B but not the ex-situ sample. It is unclear if the formation of the amorphous alloy between the materials is purely chemically driven or if it is related to the sputtering process. Interestingly, its presence in the in-situ deposited samples shows that it is not related to the formation of the native oxide on the semiconductor. Similar phenomena were reported in literature, and the findings indicate that it is related to formation of Nb\({}_{x}\)As\({}_{y}\) alloy [35; 36; 40].
The presented results validate in-situ magnetron sputtering as a new path to combine Nb and GaAs. Work is ongoing to employ a wider range of III-V semiconductors and superconductors to build a broader material platform for SSHs.
## VII Acknowledgement
The authors would like to thank Walter Bachmann, Andreas Stuker and by extension the entire workshop of the physics department of the ETH. Without their technical expertise, and more importantly patience, this system would have never been realised. Furthermore, we acknowledge Luca Alt, who built in large parts the setup used to measure the critical field and temperature and Werner Dietsche for an open ear and fruitful discussions. The project was financially supported by the Swiss National Science Foundation (SNSF). We thank the IBM Quantum Academic Network for financial support.
The authors gratefully acknowledge ScopeM for their support and assistance in this work, as well as the support of the clean room operations team of the Binning and Rohrer Nanotechnology Center (BRNC) and the support from Marilyne Sousa (IBM).
|
2304.10576
|
Evidence Gap Maps as Critical Information Communication Devices for
Evidence-based Public Policy
|
The public policy cycle requires increasingly the use of evidence by policy
makers. Evidence Gap Maps (EGMs) are a relatively new methodology that helps
identify, process, and visualize the vast amounts of studies representing a
rich source of evidence for better policy making. This document performs a
methodological review of EGMs and presents the development of a working
integrated system that automates several critical steps of EGM creation by
means of applied computational and statistical methods. Above all, the proposed
system encompasses all major steps of EGM creation in one place, namely
inclusion criteria determination, processing of information, analysis, and
user-friendly communication of synthesized relevant evidence. This tool
represents a critical milestone in the efforts of implementing cutting-edge
computational methods in usable systems. The contribution of the document is
two-fold. First, it presents the critical importance of EGMs in the public
policy cycle; second, it justifies and explains the development of a usable
tool that encompasses the methodological phases of creation of EGMs, while
automating most time-consuming stages of the process. The overarching goal is
the better and faster information communication to relevant actors like policy
makers, thus promoting well-being through better and more efficient
interventions based on more evidence-driven policy making.
|
Esteban Villa-Turek, Hernan David Insuasti-Ceballos, Jairo Andres Ruiz-Saenz, Jacobo Campo-Robledo
|
2023-04-20T18:07:02Z
|
http://arxiv.org/abs/2304.10576v1
|
# Evidence Gap Maps as Critical Information Communication Devices for Evidence-based Public Policy
###### Abstract
The public policy cycle requires increasingly the use of evidence by policy makers. Evidence Gap Maps (EGMs) are a relatively new methodology that helps identify, process, and visualize the vast amounts of studies representing a rich source of evidence for better policy making. This document performs a methodological review of EGMs and presents the development of a working integrated system that automates several critical steps of EGM creation by means of applied computational and statistical methods. Above all, the proposed system encompasses all major steps of EGM creation in one place, namely inclusion criteria determination, processing of information, analysis, and user-friendly communication of synthesized relevant evidence. This tool represents a critical milestone in the efforts of implementing cutting-edge computational methods in usable systems. The contribution of the document is two-fold. First, it presents the critical importance of EGMs in the public policy cycle; second, it justifies and explains the development of a usable tool that encompasses the methodological phases of creation of EGMs, while automating most time-consuming stages of the process. The overarching goal is the better and faster information communication to relevant actors like policy makers, thus promoting well-being through better and more efficient interventions based on more evidence-driven policy making.
_Keywords:_ information systems, evidence gap maps, efficient communication, evidence-based public policy, structured literature search, impact evaluation.
Introduction
Evidence-based public policy improves the quality of life of individuals and their relationship with the state, and as such it is a fundamental pillar in the efforts towards its modernization. This effort involves improving the state's ability to deliver on citizens' needs to increase their trust in public administration. The use of evidence in government decision-making has increased considerably across the public policy cycle, which includes the formulation, design, implementation, and evaluation of public policies. However, this evidence can be presented in very different formats, such as scientific or academic articles, databases, reports, and websites, among others. This poses the following challenges to policy makers: 1) to materially have access to the necessary evidence for the formulation, design, implementation, and evaluation of public policies; and 2) to assess if and where evidence gaps exist around certain topics of interest, including the impacts and results of related policy interventions.
Evidence Gap Maps (EGMs) have been promoted extensively by the International Initiative for Impact Evaluation (3ie) and were conceived as a response to this challenge and to the need to facilitate evidence-based decision making (B Snilstveit et al., 2013; Birte Snilstveit et al., 2016, 2017). In essence, they are instruments that allow for the mapping and synthesizing of large volumes of relevant evidence to offer a general vision of existing evidence about the impact and results of specific public policy interventions on larger outcomes of interest. The process, however, can take up to several months. This document offers a methodological review of EGMs and announces the development of a usable system that comprises all steps of EGM creation and automates some of them by connecting directly to available APIs of academic research databases to query and download articles that meet determined inclusion criteria. The user might be any stakeholder working in academia, civil society or in government and who actively engages in the process, as we will discuss below.
The contribution of the document is two-fold. First, it presents the critical importance of EGMs in the public policy cycle. Second, it justifies and explains the development of a usable tool that encompasses the methodological phases of
creation of EGMs, while automating most time-consuming and resource expensive stages in the process. Additionally, this document contributes to the empirical literature on EGMs and serves as a support to readers and academics interested in evidence-based policy making.
This document is organized as follows. In the second section EGMs are introduced, emphasizing the uses and advantages they possess, and establishes the link between EGMs and the public policy cycle. A methodological review of EGMs is presented in the third section. The fourth section presents the justification for the development of a semi-automatic tool to support the creation of EGMs. Finally, the limitations and final considerations are presented in the fifth and sixth sections.
## 2 Evidence Gap Maps (EGMs) and Evidence-based Public Policy
As mentioned above, EGMs are instruments aimed at obtaining a general, intuitive, and timely view of the evidence or the lack thereof at certain intersections of public policy interventions and outcome variables. Their objective is to promote better decision-making in policy making or when making decisions to allocate research resources to generate evidence in areas where it is scarce, and their importance lies precisely in the fact that they allow an overview of the evidence-generating studies that exist around the world regarding public policy interventions and their corresponding results.
This possibility of having a quick and complete vision of what has been done and what has happened policy-wise, is therefore a fundamental element of evidence-based public policy, which strengthens its creation, implementation, and evaluation processes. This panorama is important, because it allows public policy makers to have a broad and comparative position when faced with the uncertainty inherent in the design of new interventions and thus, we argue, EGMs should be widely used when designing public policy interventions.
Public policies are by nature a forward-looking practice, destined to shape certain scenarios that we consider desirable. One of the main tools to correctly approach this is the use of evidence, understood as a set of systematic observations identified for the purpose of establishing facts and/or testing hypotheses, which were obtained
through replicable methodologies (Nutley, 2003). In this way, evidence-based public policy should be understood as an approach that aims to ensure that decisions on public policy are guided by evidence, which can be generated through quantitative and qualitative methods. This use of evidence allows, firstly, to differentiate it from any other type of knowledge (e.g., expert opinions) and, secondly, it implies the documentation of research methods, peer review and public scrutiny, which favors the confidence in the results derived from the analysis of this type of evidence (Nutley et al., 2013). In general, greater use of evidence is associated with greater probabilities of achieving the social and economic objectives of a wide array of programs and projects proposed by a government, obtaining better results for the population, and saving valuable limited resources by selecting more effective or profitable solutions for social problems (Chalmers, 2003).
This essential aspect of evidence for policy making is therefore why we argue it is critical to gather as much of it as possible and to use it to justify the choice of certain public policy interventions. This is where EGMs become crucial, and they have been used in the past as very effective ways to synthesize the vast amounts of evidence there is to inform public policy decisions. Several recent examples can be seen in EGMs regarding the traditional, complementary and integrative medicines during COVID-19 (Portella et al., 2020); the support of local institutions for green growth (Berkhout et al., 2018); interventions for persons with disabilities in low- and middle-income countries and their effectiveness (Saran et al., 2020); interventions for reducing violence against children in low- and middle-income countries (Pundir et al., 2020); interventions against institutional child maltreatment (Finch et al., 2021); performance measures and management in primary healthcare systems in low- and middle-income countries (Munar et al., 2019).
## 3 Methodological review of EGMs
The methods used for constructing EGMs draw on methodologies previously implemented in other efforts of evidence mapping and synthesis approaches (Birte Snilstveit et al., 2016). Based upon them there are five main steps involved in designing and constructing EGMs, which are outlined below.
_i)_ _Definition of Interventions and Outcomes Framework_
The first step in the EGM construction process requires previous knowledge and research on the topic being investigated. As such, usually a team of researchers conduct a preliminary literature review that allows them to assess existing interventions around the policy area of interest, as well as the outcome universe that might be affected by them (Birte Snilstveit et al., 2016). During this framework definition process policy researchers should generate communication channels to consult with interested actors and stakeholders, as they are crucial sources of knowledge on general relevance and acceptability of the proposed framework (B Snilstveit et al., 2013).
_ii)_ _Determination of Inclusion Criteria_
Generally, inclusion criteria for studies in a new EGM stem directly from the framework definition step because the substantive and field-specific requirements are defined and set forth then, as outlined above. Nevertheless, there can be formal differences regarding the type of documents to include in the EGM depending on its primary purpose. If, for example, the primary goal of the EGM is to inform decision and policy makers regarding existing evidence available in relation to interventions of interest, it could be best to only include systematic reviews that best curate and synthetize a possibly large volume of primary studies. If, on the other hand, the objective of the EGM is to identify existing research gaps, the primary documents to process would necessarily be primary studies, like impact evaluations, which could more easily translate into a quantitative overview of the amounts of evidence being produced in any given intervention-outcome intersection, or the lack thereof (Birte Snilstveit et al., 2016), which could entail a valuable signal for research resource allocation decisions. This gap-related functionality of EGMs can, in turn, signal the existence of _absolute gaps_ if there is little or no impact evaluations or primary studies in the intersection, and _synthesis gaps_, located in intersections where multiple impact evaluations or primary studies exist, but there is no or not recent synthesis of them, usually in the form of a systematic review (Birte Snilstveit et al., 2017).
_iii)_ _Search and Inclusion Assessment_
Once the substantive and formal criteria for inclusion have been set, the next step is to search and screen for studies that fit both sets of requirements. To do it
sustainably, a delicate balance of breadth and depth must be attained. More precisely, it has been established that highly sensitive searches with low precision are unmanageable, which is why more basic yet systematic searches using keywords should be prioritized (Birte Snilstveit et al., 2016). Search methods necessarily must respond to the ultimate purpose of the EGM, as established in the second step. If the primary goal is to present a translated synthesis of existing evidence for decision and policy makers, the search must target mainly systematic review repositories and databases. Otherwise, the search methods should focus on reputable sources of primary studies like impact evaluations, which usually require more in-depth searches and screenings (B Snilstveit et al., 2013). In either case, researchers should supplement said strategic search methods with studies from other reputable sources using, for instance, snowballing or citation tracking techniques (Birte Snilstveit et al., 2016).
_iv)_ _Data Extraction, Coding and Critical Appraisal_
After having searched and obtained all relevant documents, the next step is to systematically extract all necessary information and code it in a structured way (Birte Snilstveit et al., 2016). Depending on the scope of the EGM and its goal, the coding can be tailored to reflect the occurrence of any given intervention-outcome intersection, as well as the inclusion of other plausibly relevant insights, such as methodologies implemented, status of the study, geographical scope, etc. (B Snilstveit et al., 2013). Finally, critical appraisals of systematic reviews, their quality ratings and user-friendly summaries of the most relevant documents identified can be included in the EGMs, for instance in their interactive visualization (B Snilstveit et al., 2013).
_v)_ _Analysis and User-friendly Presentation_
The last step is to populate the EGM itself, which comprises a descriptive and non-formal representation of previously extracted and coded information placed onto the intervention-outcome intersection framework matrix, as defined in the first step of the process (Birte Snilstveit et al., 2016). This allows for a comprehensive and visual overview of the state of the research in each intersection of interest, enabling researchers to find study gaps and even signaling the quality of reviewed systematic reviews using color codes, all of which can be filtered based on characteristics like
geographic location, type of study, population, etc. (Birte Snilstveit et al., 2016). A good practice is to include an explanatory note in the EGM containing a description of the methodologies employed in its construction, and, if possible, brief summaries touching upon identified policy implications, future research, findings summaries, among others (Birte Snilstveit et al., 2016).
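As an illustration of what this last step amounts to computationally (the studies, interventions and outcomes below are hypothetical), the framework matrix can be obtained as a simple cross-tabulation of the coded intervention and outcome fields, with empty cells pointing to absolute evidence gaps:

```python
# Hypothetical coded studies: the EGM matrix of step v) is a cross-tabulation
# of the intervention and outcome codes assigned in step iv).
import pandas as pd

coded_studies = pd.DataFrame([
    {"study": "S1", "intervention": "cash transfers", "outcome": "school attendance"},
    {"study": "S2", "intervention": "cash transfers", "outcome": "child nutrition"},
    {"study": "S3", "intervention": "school feeding", "outcome": "school attendance"},
    {"study": "S4", "intervention": "cash transfers", "outcome": "school attendance"},
])

egm_matrix = pd.crosstab(coded_studies["intervention"], coded_studies["outcome"])
print(egm_matrix)   # zero cells signal potential absolute evidence gaps
```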
## 4 New Horizons: Applied Computational Methods for EGMs
We have outlined the general methodology proposed for the creation of EGMs. The five steps required for the successful design, construction and utilization of EGMs are an important step towards their standardization and generalized use, but can imply major drawbacks. Specifically, it has been identified that it normally takes very large amounts of time to complete some of the EGM creation steps. Phases iii) and iv), where seemingly large volumes of documents must be searched for and screened, and information needs to be extracted and manually coded, are especially prone to be notoriously taxing, both in terms of human and time resources, which can lead to EGMs sometimes taking up to 6 months to complete (Birte Snilstveit et al., 2016).
If EGMs are instruments intended for timely overviews of evidence or lack thereof, aimed at supporting better decision making in rapidly changing policy processes or resource allocation calls, immediacy in their availability should be of the utmost importance, and requiring months for their completion seems to play the opposite role. For this reason, we are proposing novel and efficient ways of applying computational methods and natural language processing (NLP) techniques that seek to automate as much as possible the most resource-intensive aspects of EGM construction.
In recent years, text-mining has become a priority for some of the major publishing companies specializing in academic, scientific, and technical literature, pursued through the design and implementation of application programming interfaces (APIs). Indeed, text-mining has been used in other mapping exercises and can prove instrumental for the more efficient creation of EGMs (Birte Snilstveit et al., 2016). Notable examples of the latter are the technical developments undertaken by major publishers such as Elsevier (SCOPUS & ScienceDirect), Springer Nature and CORE. All offer API implementations that let authenticated researchers establish a direct connection with their databases and servers and interact with them dynamically, sending requests containing all relevant search query terms and receiving all matched documents. This way, EGM production teams can centralize and standardize their queries and run them simultaneously with a single call distributed across all three servers.
If all publishing houses, databases, and scientific literature repositories developed similar API implementations, all EGMs could be created with a couple of iterative search queries that returned all matched documents all at once. Researchers would not need to manually search and screen documents on each individual source and queries would not differ from source to source, meaning that more consistency could be attained regarding both matched studies and methodological documentation containing query parameters. This means that the actions outlined above in step iii) of the methodological approach to create EGMs would be greatly simplified and as frictionless as possible.
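As a rough illustration of such a centralized search, the sketch below sends a single query string to several providers in one loop and pools the returned records. The endpoint URLs, parameter names, and response fields are placeholders rather than any provider's actual API specification, and a real implementation would also need authentication, pagination, rate-limit handling, and deduplication.

```python
import requests

# Placeholder configuration: these URLs, parameter names and response fields
# are illustrative only and do not reflect any provider's real API.
QUERY = '("cash transfers" OR "school feeding") AND "learning outcomes"'

SOURCES = {
    "provider_a": {"url": "https://api.provider-a.example/search",
                   "params": {"query": QUERY, "apiKey": "KEY_A"}},
    "provider_b": {"url": "https://api.provider-b.example/v3/works",
                   "params": {"q": QUERY, "api_key": "KEY_B"}},
}

def run_search(sources):
    """Send the same query to every configured source and pool the hits."""
    records = []
    for name, cfg in sources.items():
        resp = requests.get(cfg["url"], params=cfg["params"], timeout=30)
        resp.raise_for_status()
        for item in resp.json().get("results", []):
            records.append({"source": name,
                            "title": item.get("title"),
                            "abstract": item.get("abstract")})
    return records

records = run_search(SOURCES)
```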
The other aspect of EGM creation that can potentially be optimized by means of automation relates to step iv) of the methodology. During that process, researchers must manually read, analyze and code all relevant bits of information that will become the EGM. It is, as it sounds, a labor-intensive task that is also prone to divergent appraisal outcomes across the large amount of important data that has to be extracted from all documents included in the study. Essentially, different people can interpret, and therefore decide to include or exclude, information in different ways, even if a rigorous methodology for doing so has been discussed and put in place in advance. This can render the resulting EGM unreliable, which is why a novel approach should be considered and adopted.
Precisely, the approach we propose here tackles the issue of human error and makes the process of data extraction much more efficient. We explored different methods to computationally analyze textual data, particularly Latent Dirichlet Allocation (LDA) models. LDA models for text corpora analysis were introduced in 2003 (Blei et al., 2003) and have had a major impact on computational methods to model the statistical structures present in documents and within document corpora across various disciplines. In a nutshell, LDA is a generative probabilistic approach that
models both topic-word and document-topic probability distributions, starting from a "bag of words" representation justified by an exchangeability assumption, with the aim of capturing latent variables behind abstract notions like topics (Blei et al., 2003). This methodology allows researchers to identify the latent topics that underlie a document corpus, which is very useful when facing large volumes of documents with no labeled data that would allow the task to be approached as a supervised classification problem. Even if the task were approached that way, a very labor-intensive and manual set of steps would be needed to assess and code the information contained in the documents (like the information extraction step outlined above for EGMs), which also lacks scalability (Eshima et al., 2020).
LDA models thus output two main collections of estimates: a set of topics, each described by the proportions of the words that most probably produced it; and, for each document in the corpus, the proportions of the topics that most probably generated it. The second set of estimates, the document-topic probabilities, is straightforward to interpret, whereas the same cannot be said of the topic-word estimates, and the number of topics must be set by the researchers. In fact, these topic-word estimates have proven to be nonsensical at times, hard to interpret, and can even share apparently very similar content across different topics (Morstatter and Liu, 2016; Newman et al., 2011). Moreover, the choice of the number of topics to estimate has been shown to have a significant impact on the results obtained (Roberts et al., 2016).
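For concreteness, the sketch below fits a plain (unseeded) LDA model to the retrieved abstracts and prints the two kinds of output discussed above, the document-topic proportions and the top words per topic. The corpus, the number of topics and the preprocessing are illustrative choices only.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# `records` is the pooled list of documents from the search sketch above;
# any list of abstract strings would do.
abstracts = [r["abstract"] for r in records if r.get("abstract")]

vectorizer = CountVectorizer(stop_words="english", min_df=2)   # bag-of-words counts
X = vectorizer.fit_transform(abstracts)

lda = LatentDirichletAllocation(n_components=10, random_state=0)
doc_topic = lda.fit_transform(X)       # document-topic proportions, shape (n_docs, 10)
topic_word = lda.components_           # unnormalized topic-word weights

vocab = vectorizer.get_feature_names_out()
for k, row in enumerate(topic_word):
    top_words = vocab[row.argsort()[-8:][::-1]]
    print(f"topic {k}: {' '.join(top_words)}")
```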
For these reasons, we have developed a semi-automatic tool to cover the EGM construction process from stage ii) to stage v). This tool connects directly to the available APIs of indexed databases of academic and scientific literature to query and download papers that meet the criteria defined in stages ii) and iii) by experts in the field (i.e., the users or policy experts). Then, in the next stage, the user must select and/or discard documents that are not relevant to the research. The next stage includes the NLP estimation, where we use a novel modification of the Latent Dirichlet Allocation (LDA) algorithm called Keyword Assisted Topic Models (keyATM) (Eshima et al., 2020), an extension of a previous model (Jagarlamudi et al., 2012). As the name implies, this model is based on a set of keywords carefully
selected by the user, according to their specific substantive policy knowledge, to signal to the model which keywords belong to which topic.
These words are included in the model as topic labels before fitting, therefore eliminating interpretations of dubious topics in later stages and allowing multiple keywords to describe different topics (Eshima et al., 2020). Here, each document that is part of the query is automatically evaluated by the NLP module to find out the probability of one or more interventions (modeled as the topics being identified by keyATM given specialized keywords) within the document. Those with the highest probability of occurrence are presented to the user, who must then evaluate the presence of said intervention (topic) within each of the documents and its corresponding effect, as identified in the documents. As a final step, the tool consolidates the information and generates an interactive visualization of the EGM.
Figure 1 shows the general scheme of the EGM construction tool and the NLP phase as part of this process.
Figure 1: Block diagram of the tool proposed for the construction of EGM
This approach is optimal for EGM creation because it allows specialized policy researchers to take advantage of their substantive knowledge, select relevant, content-specific keywords from the corpus, and feed lists of keywords per topic to the model before fitting it. Also ideal is the fact that the topics being identified by the model can be the corresponding outcome variables of interest determined in the EGM framework, allowing the document-topic distribution to signal to the policy research team the proportion in which each document included in the study has been generated from certain topics. This means that the researchers would not need to spend as much time manually reading each document to determine when a given outcome is being discussed; instead, they would approach each document with cues from the model about which outcomes it studies, so the task is significantly reduced to corroborating the model's output and identifying what kind of effect has been found for each intervention-outcome pair, whether positive, negative, or non-significant.
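A minimal sketch of that consolidation step is shown below, assuming the document-topic proportions come from the keyword-seeded model and that the reviewer has already confirmed the interventions and coded the effect direction; the variable names, the probability threshold and the grid layout are hypothetical.

```python
import pandas as pd

# Hypothetical inputs: doc_topic holds the document-topic proportions from the
# keyword-seeded model, topic_labels maps topics to outcome names, and
# coded_intervention / coded_effect hold the reviewer-confirmed coding.
rows = []
for d, probs in enumerate(doc_topic):
    for t, p in enumerate(probs):
        if p > 0.2:                                   # flag only plausible outcomes
            rows.append({"doc": d,
                         "outcome": topic_labels[t],
                         "intervention": coded_intervention[d],
                         "effect": coded_effect[d]})  # "+", "-" or "n.s."

egm = (pd.DataFrame(rows)
         .pivot_table(index="intervention", columns="outcome",
                      values="doc", aggfunc=pd.Series.nunique, fill_value=0))
print(egm)   # number of studies in each intervention-outcome cell
```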
This approach would not only make the information retrieval and coding much more efficient, but it would also ensure that there is no room for human-driven discrepancies in the information retrieval process, all the while maintaining valuable scalability characteristics and reproducible output.
## 5 Limitations
Although preliminary tests have shown consistent accuracy when applying keyATM to policy-related academic literature for the creation of EGMs, more testing and iteration are needed to improve the model and its proposed use.
This approach has the potential to make EGM creation a much more frictionless, efficient, and timely process. Of course, it is not intended to completely replace human intervention in the process. Rather, it seeks to automate specific sub-tasks so that human interaction becomes more seamless and control-focused, while crucial tasks, like determining the direction of the effects found for each intervention-outcome pair, remain in human hands.
## 6 Discussion and Conclusions
Unquestionably, the COVID-19 pandemic has left governments facing major crises and challenges, but it has also left them with certain advantages in terms of information and extensive evidence that is very useful for improving the design, formulation, implementation and evaluation of public policies.
In this sense, this document presented a methodological review of EGMs and the development of an information gathering and processing system to facilitate the EGM construction process, from the determination of inclusion criteria to the analysis and user-friendly presentation.
Our aim is to show how our proposed system can further enable policy makers around the world to have efficient and timely access to all the evidence they need to make decisions, while at the same time communicating vast amounts of information in a visual and friendly manner.
Although still in early iteration stages, we believe this approach can facilitate the creation of EGMs, namely by substantially reducing the time it takes to create them; avoiding bias; allowing for more systematized, robust, and replicable methodologies and information management; avoiding human error; and enabling access to potentially larger universes of evidence, among other affordances not yet researched.
|
2307.09577
|
Atmospheric composition of WASP-85Ab with ESPRESSO/VLT observations
|
Transit spectroscopy is the most frequently used technique to reveal the
atmospheric properties of exoplanets, while that at high resolution has the
advantage to resolve the small Doppler shift of spectral lines, and the trace
signal of the exoplanet atmosphere can be separately extracted. We obtain the
transmission spectra of the extrasolar planet WASP-85Ab, a hot Jupiter in a
2.655-day orbit around a G5, V=11.2 mag host star, observed by high-resolution
spectrograph ESPRESSO at the Very Large Telescope array for three transits. We
present an analysis of the Rossiter-McLaughlin effect on WASP-85A, and
determine a spin-orbit angle ${\lambda = -16.155^{\circ}}^{+2.916}_{-2.879}$,
suggesting that the planet is in an almost aligned orbit. Combining the
transmission spectra of three nights, we tentatively detected H$\alpha$ and Ca
II absorption with $\gtrapprox 3\sigma$ via direct visual inspection of the
transmission spectra with the Center-to-Limb variation and the
Rossiter-McLaughlin effects removed, which still remain visible after excluding
the cores of these strong lines with a 0.1 A mask. These spectral signals seem
likely to originate from the planetary atmosphere, but we cannot fully exclude
their stellar origins. Via the cross-correlation analysis of a set of atoms and
molecules, Li I is marginally detected at $\sim4\sigma$ level, suggesting that
Li might be present in the atmosphere of WASP-85Ab.
|
Zewen Jiang, Wei Wang, Guo Chen, Fei Yan, Heather M. Cegla, Patricio Rojo, Yaqing Shi, Qinlin Ouyang, Meng Zhai, Yujuan Liu, Fei Zhao, Yuqin Chen
|
2023-07-13T06:25:23Z
|
http://arxiv.org/abs/2307.09577v1
|
# Atmospheric composition of WASP-85Ab with ESPRESSO/VLT observations
###### Abstract
Transit spectroscopy is the most frequently used technique to reveal the atmospheric properties of exoplanets; at high resolution it has the advantage of resolving the small Doppler shifts of spectral lines, so that the trace signal of the exoplanet atmosphere can be separately extracted. We obtain the transmission spectra of the extrasolar planet WASP-85Ab, a hot Jupiter in a 2.655-day orbit around a G5, \(V=11.2\) mag host star, observed with the high-resolution spectrograph ESPRESSO at the Very Large Telescope array for three transits. We present an analysis of the Rossiter-McLaughlin effect on WASP-85A and determine a spin-orbit angle \(\lambda=-16.155^{+2.916}_{-2.879}\), suggesting that the planet is in an almost aligned orbit. Combining the transmission spectra of the three nights, we tentatively detect H\(\alpha\) and Ca ii absorption at \(\gtrsim 3\sigma\) via direct visual inspection of the transmission spectra with the center-to-limb variation and Rossiter-McLaughlin effects removed; the features remain visible after excluding the cores of these strong lines with a 0.1 Å mask. These spectral signals seem likely to originate from the planetary atmosphere, but we cannot fully exclude a stellar origin. Via the cross-correlation analysis of a set of atoms and molecules, Li i is marginally detected at the \(\sim 4\sigma\) level, suggesting that Li might be present in the atmosphere of WASP-85Ab.
## 1 Introduction
Characterization of exoplanets has been one of the fastest-growing and most exciting fields in astronomy in recent years. The information encoded in an exoplanet atmosphere can provide critical insights into myriad atmospheric processes as well as the formation and evolutionary history of the planet (Madhusudhan, 2019). Thanks to their relatively high equilibrium temperatures and large atmospheric scale heights, hot Jupiters (HJs) have relatively prominent atmospheric signals. This makes HJs the best targets for detailed characterization of the physical and chemical properties of exoplanet atmospheres. Among the powerful tools for the atmospheric characterization of exoplanets, high-resolution spectroscopy (HRS) has become widely applied to search for atoms and molecules whose transmission spectra have either several strong individual lines or a few dense forests of spectral lines. HRS is sensitive to the change of the exoplanet radial velocity: since the host star and the Earth move much more slowly than the planet, the atmospheric signals of the exoplanet can be separated from the stellar and telluric signals. For the same reason, HRS can be used to characterize the planet's atmosphere through its sensitivity to the depth, shape, and position of the planet's spectral lines. Compared to low-resolution spectroscopy, HRS has the advantage of probing above cloud decks, where the cores of the strongest spectral lines are formed (Birkby, 2018).
The first ground-based detection of CO was achieved by HRS in the hot Jupiter HD 209458b (Snellen et al., 2010). Plenty of atoms and molecules have since been detected in the atmospheres of many HJs, including Na, K, Li, Ca, Cr, Mg, He, Ti, Fe, Fe+, and Ca+, as well as H\({}_{2}\)O. For example, CH and Na were detected in the atmosphere of HD 209458b (Giacobbe et al., 2021; Charbonneau et al., 2002), Cr in WASP-189b (Prinoth et al., 2022), K in WASP-121b (Merritt et al., 2021), Sc and Ti+ in HD 189733b (Lira-Barria et al., 2022), and Ti and TiO in HD 149026b (Ishizuka et al., 2021). All these species were detected with the HRS method.
The planet WASP-85Ab (Brown et al., 2014) was identified in both the Super-WASP and WASP-South photometry independently and confirmed by spectroscopic follow-up using the SOPHIE spectrograph (Perruchot et al., 2008) mounted on the 1.93-m telescope of the Observatoire de Haute Provence. It is a classic HJ with a mass of \(1.265\pm 0.065\,M_{\rm Jup}\), a radius of \(1.240\pm 0.030\,R_{\rm Jup}\) and an equilibrium temperature \(T_{\rm eq}\) of 1452 K
with an orbital period of \(\sim\)2.65 d (Mocnik et al., 2016). Thus, it is an interesting target for atmospheric characterisation given its moderately high equilibrium temperature and low density. Up to now, no such study has been performed. WASP-85Ab orbits a G5V star in a binary star system. The host star WASP-85A is active, with starspots detected by Mocnik et al. (2016) from their analysis of the K2 short-cadence data of WASP-85A.
This paper is organized as follows. We present the details of three transit observations of WASP-85Ab using the Echelle Spectrograph for Rocky Exoplanet and Stable Spectroscopic Observations (ESPRESSO) in Section 2. We measure the orbital architecture of the WASP-85A system based on the Rossiter-McLaughlin (RM) effect in Section 3. The high-resolution transmission spectra are presented in Section 4, followed by our cross-correlation analysis of the spectra. The discussion and conclusions are presented in Section 5.
## 2 Observations and data reduction
Three transits of WASP-85A were observed in February and March 2021 with ESPRESSO (Pepe et al., 2021) under the ESO program 0106.D-0853 (PI: H. M. Cegla). These observations aimed to detect and characterize stellar differential rotation and convection-induced variations and to determine the spin-orbit alignment, which can help us to better understand the properties of WASP-85A (H. M. Cegla et al. _in prep_). In this study, however, we mainly aim to reveal the composition of the atmosphere of WASP-85Ab.
ESPRESSO is a fibre-fed, ultra-stable, high-resolution echelle spectrograph mounted on the 8.2 m Very Large Telescope of the European Southern Observatory at Cerro Paranal, Chile (Pepe et al., 2021). The observations were taken in the High Resolution 1-UT mode, and data were read out in \(2\times 1\) binning and slow readout mode, which yields a spectral resolving power \(R\) of \(\sim\)140 000 and a wavelength range of 380\(\sim\)788 nm. During the observations, fiber A was pointed at the target star, while fiber B was pointed at the sky for simultaneous monitoring of the sky emission.
The three transits were observed on the following dates: 2021 Feb. 13 (Night 1, the first transit, hereafter T1), Feb. 21 (Night 2, the second transit, T2), and Mar. 17 (Night 3, the third transit, T3), each covering an entire transit plus 1 to 2 hours out-of-transit baselines. Individual exposure time for each transit is 400 s. In T1, a total of 47 spectra were obtained with 21 in transit and 26 out of transit covering the orbital phase \(\Phi\) from \(-\)0.062 to \(+\)0.031. In T2, 41 spectra were acquired including 22 in-transit and 19 out-of-transit spectra with \(\Phi\) from \(-\)0.043 to \(+\)0.037. In T3, the number of exposures collected in total is 39 with \(\Phi\) ranging from \(-\)0.036 to \(+\)0.044. The numbers of the in- and out-of-transit spectra are 21 and 18, respectively. Details of the three observations are summarized in Table 1.
WASP-85A has a K0-dwarf stellar companion, WASP-85B, with a \(V\)-mag difference of 0.7 mag and an angular distance of \(\sim 1.5\arcsec\) (Brown et al., 2014), larger than the ESPRESSO fiber size of \(1\arcsec\). Assuming a 2D Gaussian PSF with seeing sizes of \(1\arcsec\) and 0.85\(\arcsec\), we determine that WASP-85B's flux contamination of WASP-85A is less than 0.44% and 0.14%, respectively. This suggests that at least the T2 and T3 data, which were mostly taken with seeing better than 0.85\(\arcsec\), should be largely free from WASP-85B's contamination.
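As a rough numerical sketch of that kind of estimate, one can integrate a 2D Gaussian PSF over a circular 1″ fiber for a companion 1.5″ away that is 0.7 mag fainter; the grid integration below is only illustrative, and the exact aperture model used for the quoted numbers may differ.

```python
import numpy as np

def companion_contamination(seeing_fwhm, sep=1.5, fiber_radius=0.5, dmag=0.7, n=801):
    """Companion flux entering the fiber, relative to the target's total flux."""
    sigma = seeing_fwhm / 2.3548                      # Gaussian sigma from the FWHM
    x = np.linspace(-fiber_radius, fiber_radius, n)
    xx, yy = np.meshgrid(x, x)
    fiber = xx ** 2 + yy ** 2 <= fiber_radius ** 2    # circular fiber aperture
    psf = np.exp(-((xx - sep) ** 2 + yy ** 2) / (2 * sigma ** 2)) / (2 * np.pi * sigma ** 2)
    dx = x[1] - x[0]
    frac_in_fiber = psf[fiber].sum() * dx * dx        # companion light falling inside the fiber
    return 10 ** (-0.4 * dmag) * frac_in_fiber        # scale by the 0.7 mag flux ratio

for fwhm in (1.0, 0.85):
    print(f"seeing {fwhm} arcsec: ~{100 * companion_contamination(fwhm):.2f}%")
```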
The temporal variations of the airmass, S/N ratio, seeing and S-index during the three observing nights are shown in Fig. 1. The stellar activity level of WASP-85A, as represented by the S-index, is the lowest in T3 and the highest in T2, suggesting that the data taken in T3 should be the least affected by stellar activity while those in T2 are the most affected. It is also worth noting that the median and variance of the seeing in T1 are larger than those in T2 and T3, with quite a number of exposures exceeding \(1\arcsec\). Given that the fiber size of ESPRESSO is \(1\arcsec\) and the distance between WASP-85A and B is \(\sim 1.5\arcsec\) (Brown et al., 2014), at least some of the spectra taken in T1 may suffer both slit loss and light contamination from the stellar companion, which will lead to lower S/N ratios and weaker planet signals. On the other hand, most of the T2 and T3 data points should have only minor slit loss and should not be affected by the companion's contamination. Combining these considerations with the fact that the planet signal is the strongest, or only, observable in T3, we use the T3 data as the main data set in this work.
For data analysis, we used the one-dimensional sky-corrected spectra processed by the ESPRESSO reduction pipeline, version 2.3.4, downloaded from the ESO advanced science archive. The raw spectral images were corrected for bias, dark, flat, and bad pixels before being extracted into 1D spectra.
## 3 Data analysis
### Telluric correction
The telluric correction was firstly conducted by subtracting the fiber B on-sky spectra from the fiber A on-target spectra for telluric emission features, which had been performed by the ESPRESSO reduction pipeline. We then corrected the telluric
Figure 1: The temporal variations of the airmass, S/N@550 nm, seeing and S-index of the three transit observations from top to bottom, respectively. The green dashed lines indicate the first and fourth contacts of the transits.
absorption imprinted on the obtained spectra using the ESO software Molecfit version 1.5.7 (Smette et al., 2015), following Allart et al. (2017).
Molecfit is based on synthetic modeling of the Earth's atmospheric transmission with a line-by-line radiative transfer model and has recently been widely used in high-resolution transmission spectroscopic studies. Allart et al. (2017) removed telluric features in the spectrum using Molecfit in order to search for water vapor in HD 189733b; Hoeijmakers et al. (2019) detected absorption of Na i, Cr ii, Sc ii and Y ii in the atmosphere of the ultra-hot Jupiter KELT-9b after removing telluric contamination; Tabernero et al. (2021) modeled the telluric transmission spectrum using Molecfit to find the absorption of Li i, Na i, Mg i, Ca ii, K i and Fe i.
Fig. 1 presents comparisons of a sample original spectrum and its corresponding spectrum with telluric correction applied around the Na D1 & D2-line doublet region (Top panel) and H\(\alpha\) spectral line region (Bottom panel) on the night of 2018 October 31, which illustrate that the telluric features have been corrected quite well. The specific parameters used in this work for telluric correction are listed in Table 2. Note that the obtained ESPRESSO spectra are given in the solar system barycentric rest frame, while Molecfit requires terrestrial rest frame spectra as input. Therefore, we shifted the observed spectra to the terrestrial rest frame considering the barycentric Earth radial velocity (BERV) for a better correction of telluric absorption.
### Rossiter-McLaughlin analysis
When a planet transits its rotating host star, the portion of the stellar disk blocked by the planet varies with time, resulting in an anomalous radial-velocity variation that overlays the Doppler reflex motion. This effect was first pointed out by Rossiter (1924) and McLaughlin (1924) and is known as the RM effect (Triaud, 2018). Queloz et al. (2000) reported the first detection of the RM effect on an exoplanet by analyzing the anomaly in the radial velocity curve after removing the influence of the Doppler reflex motion. From this anomaly, they determined \(v\sin i_{\star}\) and the angle between the orbital plane and the apparent equatorial plane, namely the spin-orbit angle \(\lambda\).
Two main methods are widely used to measure the radial velocities (RVs), one is based on least-squares fitting by carrying out the template matching of the observed spectra (Butler et al., 1996); the other utilizes the cross-correlations function (CCF) of observation spectra with the binary mask (Baranne et al., 1996; Pepe et al., 2002). In this work, the RVs and their uncertainties determined using the CCF method by the ESPRESSO pipeline are adopted.
\begin{table}
\begin{tabular}{l c c c c c c c c c} \hline \hline & Date & VLT/UT & \multicolumn{3}{c}{Number of spectra} & Exp. time & Airmass range & Mean S/N & Program ID \\ & (UT Time) & \multicolumn{3}{c}{Total} & In-transit & Out-of-transit & (s) & & (@550 nm) & \\ \hline Night 1 & 2021-02-13 & UT3 & 47 & 21 & 26 & 400 & \(1.1-1.7\) & \(\sim\)64 & 0106.D-0853 \\ Night 2 & 2021-02-21 & UT4 & 41 & 22 & 19 & 400 & \(1.1-1.6\) & \(\sim\)63 & 0106.D-0853 \\ Night 3 & 2021-03-17 & UT2 & 39 & 21 & 18 & 400 & \(1.1-1.5\) & \(\sim\)61 & 0106.D-0853 \\ \hline \end{tabular}
\end{table}
Table 1: Summary of the WASP-85Ab transit observations
\begin{table}
\begin{tabular}{l l l} \hline \hline \multicolumn{1}{c}{ Parameter} & Value & Description \\ \hline \multicolumn{1}{c}{ftol} & \(10^{-9}\) & \(\chi^{2}\) tolerance \\ xtol & \(10^{-9}\) & Tolerance for the molecfit \\ & & fitted variables & \\ fit\_cont & 1 & Continuum fitting flag \\ cont\_n & 3 & Degree of polynomial continuum \\ fit\_res\_gauss & 1 & Gaussian kernel \\ res\_gauss & 3.5 & Kernel size (pixels) \\ kernfac & 6.0 & Kernel size measured in units of \\ & & the kernel FWHM \\ list\_moc & H\({}_{2}\)O, O\({}_{2}\) & Molecules to be synthetised \\ \hline \end{tabular}
\end{table}
Table 2: Molecfit parameters used for telluric correction
\begin{table}
\begin{tabular}{l l l} \hline \hline Description & Symbol & Value \\ \hline \multicolumn{1}{c}{Stellar Parameters} \\ \hline \(V\) magnitude & \(m_{\rm v}\) & \(11.2\pm 0.011\) mag \\ Effective temp & \(T_{\rm eff}\) & \(6112\pm 27\) K \\ Surface gravity & \(\log g_{\star}\) & \(4.48\pm 0.11\) cgs \\ Metallicity & [Fe/H] & \(0.00\pm 0.05\) dex \\ Stellar mass & \(M_{\star}\) & \(1.09\pm 0.08\,M_{\odot}\) \\ Stellar radius & \(R_{\star}\) & \(0.935\pm 0.0023\,R_{\odot}\) \\ Right ascension & R. A. & \(11^{\rm h}43^{\rm m}38.01^{\rm s}\) \\ Declination & Dec & \(+06^{\circ}33^{\prime}49.4^{\prime\prime}\) \\ \hline \multicolumn{1}{c}{Planet Parameter} \\ \hline Planet mass & \(M_{\rm p}\) & \(1.265\pm 0.065\,M_{\rm Jup}\) \\ Planet radius & \(R_{\rm p}\) & \(1.24\pm 0.03\,R_{\rm Jup}\) \\ Planet density & \(\rho\) & \(0.660\pm 0.020\,{\rm g}\,{\rm cm}^{-3}\) \\ Equilibrium temp & \(T_{\rm eq}\) & \(1452\pm 6\) K \\ Radius ratio & \(R_{\rm p}/R_{\star}\) & \(0.0187\pm 0.00002\) \\ \hline \multicolumn{1}{c}{Orbit Parameters} \\ \hline Epoch\(-2450000\) & \(T_{\rm c}\) & \(6847.472856\pm 0.000014\) BJD \\ Semi-amplitude\({}^{1}\) & \(K_{\star}\) & \(173.3\pm 1.8\,{\rm m}\,{\rm s}^{-1}\) \\ Period & \(P\) & \(2.6556777\pm 0.00000044\) d \\ Transit duration & \(T_{14}\) & \(0.10817\pm 0.00002\) d \\ Ingress duration & \(T_{12}\) & \(0.013037\pm 0.000002\) d \\ Semi-major axis & \(a\) & \(0.039\pm 0.001\) AU \\ Inclination & \(i\) & \(89.69^{+0.11}_{-0.03}\,{\rm deg}\) \\ \hline \end{tabular}
\end{table}
Table 3: Physical and orbit parameters of the WASP-85 system
To analyze and model the RM effect based on the RVs, we use the Markov chain Monte Carlo (MCMC) algorithm available in emcee (Foreman-Mackey et al., 2013) and employ the RM effect model of Ohta et al. (2005) implemented in PyAstronomy (Czesla et al., 2019) as modelSuite.RmcL, which performs the analytical fit of the RM effect for a circular orbit. The RmcL model incorporates 9 parameters: the planet's orbital period \(P\), the time of transit midpoint \(T_{\rm c}\), the inclination of the planetary orbital plane \(i\), the inclination of the stellar rotation axis \(i_{\star}\), the ratio of planetary and stellar radii \(R_{\rm p}/R_{\star}\), the semi-major axis \(a\), the linear limb-darkening coefficient \(\epsilon\), the stellar angular rotation velocity \(\Omega\), and the sky-projected angle \(\lambda\), which is the angle between the stellar rotation axis and the normal of the planetary orbital plane. Before modeling the RM effect, we estimated and removed the correlated noise in the observed RVs with a Gaussian process (GP) (Foreman-Mackey et al., 2017) and corrected for the Doppler reflex motion using Kepler's laws. The resulting RVs are used as input for the RM effect model. Note that four RV points in T3 deviate from the RV trend and are excluded from the RM analysis.
Assuming a circular planetary orbit, the RV curve with the baseline trend removed can be expressed with the following formula:
\[V_{\star}=K_{\star}\sin(2\pi\phi)+V_{\rm bary}+V_{\rm sys} \tag{1}\]
where \(V_{\star}\) is the RV of the host star, \(K_{\star}\) is the RV semi-amplitude, \(\phi\) is the orbital phase of the planet, \(V_{\rm bary}\) is the barycentric velocity due to the Earth's revolution, and \(V_{\rm sys}\) is the systemic radial velocity of the targeted exoplanet system. Among them, \(V_{\star}\), \(\phi\) and \(V_{\rm bary}\) are functions of time, while the values of \(K_{\star}\) and \(V_{\rm sys}\) for each night are derived from our GP analysis as listed in Table 5. Here, we use the \(K_{\star}\) value shown in Table 3 and keep it fixed in the subsequent steps.
We set \(\lambda\), \(\epsilon\), \(\Omega\), and \(i_{\star}\) as free parameters of our model, with their priors listed in Table 4. The remaining parameters, including \(a\), \(P\), \(i\), \(T_{\rm c}\) and \(R_{\rm p}/R_{\star}\), are fixed to the values listed in Table 3. In order to explore a wide parameter space, we use 200 walkers with \(50,000\) steps each. The first \(10,000\) steps are discarded as burn-in so that the chain can probe the parameter space and settle into the region of maximum posterior density. The parameters obtained from the MCMC fitting are shown in Table 4, and the retrieved best model is compared with the data in Fig. 2, with an rms residual of \(\sim 1.877\) m s\({}^{-1}\). The derived projected spin-orbit angle is \(\lambda=-16.155^{+2.916}_{-2.879}\), implying that the planetary orbital plane is roughly aligned with the stellar equator. We also derive the projected stellar rotation velocity \(v\sin i_{\star}=2.98^{+0.050}_{-0.043}\) km s\({}^{-1}\) and the linear limb-darkening coefficient \(\epsilon=0.855^{+0.014}_{-0.014}\). These values are used as input to model the center-to-limb variation (CLV) and RM effects in order to eliminate the influence caused by the occultation of the host star by the planet. The derived posterior distributions for the three transits are given in Fig. 2 in detail.
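A minimal sketch of this kind of fit is shown below, combining the RmcL model of PyAstronomy with emcee. Fixed quantities follow Table 3 where possible (e.g. \(a/R_{\star}\approx 9\) from \(a\) and \(R_{\star}\)), but the radius ratio, the time array, the likelihood and the starting point are illustrative assumptions rather than the exact setup used here.

```python
import numpy as np
import emcee
from PyAstronomy import modelSuite as ms

rmcl = ms.RmcL()                         # Ohta et al. (2005) RM model, circular orbit
RSTAR = 0.935 * 6.957e8                  # stellar radius in meters (Table 3)

def model_rv(theta, t):
    lam, eps, omega, i_star = theta
    rmcl.assignValue({"P": 2.6556777, "T0": 0.0, "i": np.radians(89.69),
                      "Is": np.radians(i_star), "Omega": omega,
                      "gamma": 0.137,            # Rp/R*, assumed illustrative value
                      "a": 9.0,                  # a/R* from a and R* in Table 3
                      "lambda": np.radians(lam), "epsilon": eps})
    # RmcL returns the anomaly in stellar radii per second; convert to m/s.
    return rmcl.evaluate(t) * RSTAR

def log_prob(theta, t, rv, rv_err):
    lam, eps, omega, i_star = theta
    if not (-50 < lam < 50 and 0.5 < eps < 1.0
            and 3e-6 < omega < 1e-5 and 70 < i_star < 110):
        return -np.inf                            # uniform priors of Table 4
    resid = rv - model_rv(theta, t)
    return -0.5 * np.sum((resid / rv_err) ** 2)

# t: times from mid-transit (days); rv, rv_err: GP- and Kepler-cleaned RVs (m/s).
ndim, nwalkers = 4, 200
scale = np.array([1.0, 0.01, 1e-7, 1.0])          # per-parameter jitter for the start
p0 = np.array([-16.0, 0.85, 5e-6, 90.0]) + scale * np.random.randn(nwalkers, ndim)
# sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob, args=(t, rv, rv_err))
# sampler.run_mcmc(p0, 50000, progress=True)      # discard the first 10,000 as burn-in
```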
## 4 Transmission spectrum analysis
The atmosphere composition of HJs plays an important role in understanding atmospheric dynamics and planetary evolution. Here we mainly search for the most abundant atoms including Na i, Mg i, H\(\alpha\), H\(\beta\), Ca ii H, Ca ii K and Li i. We adopt the method outlined in Wyttenbach et al. (2015) and Casasayas-Barris et al. (2019) to extract transmission spectra of individual lines. As described in Section 3, the observed spectra have been corrected for telluric contamination. Then the spectra are normalized to their corresponding continuum level with a two-degree spline curve fit with the regions around strong absorption lines excluded. We also apply a sigma-clipping rejection algorithm on the normalized spectra and replace the cosmic ray hits with the mean value of all the other spectra at each wavelength (Allart et al., 2017).
In order to align the stellar lines, we first shift the spectra to the stellar rest frame, taking into account the BERV, \(V_{\rm sys}\) and the stellar reflex motion induced by the planet, \(V_{\rm reflex}\). The BERV is obtained directly from the file header information of the ESPRESSO spectra, while \(V_{\rm sys}\) is derived from the RM analysis as described in Section 3.2 and \(V_{\rm reflex}\) is calculated using the orbital parameters in Table 3. Then the aligned out-of-transit spectra are averaged with their mean signal-to-noise ratios (SNR) as weights to construct a master out-of-transit spectrum, which is used to divide each individual spectrum in order to remove the stellar light contribution so that the absorption signal from the planetary atmosphere may be exposed (Stangret et al., 2021). Next, we shift each divided in-transit spectrum to the planet rest frame using the planet radial velocity \(V_{\rm p}\), determined from \(K_{\rm p}\) and the orbital phase at a given time, and compute the transmission spectrum with the following formula:
\[\Re(\lambda)=\sum_{in}\frac{F_{in}(\lambda)}{F_{out}}\bigg{|}_{\rm Planet\;RV\; Shift}\;-1 \tag{2}\]
Finally, the transmission spectrum is the SNR-weighted sum of in-transit residuals.
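The steps above can be summarized in a short numpy sketch; the array names, the weighting and the sign convention of the Doppler shift are illustrative rather than the exact implementation used here.

```python
import numpy as np

C = 299792.458  # speed of light, km/s

def transmission_spectrum(wave, spectra, snr, in_transit, v_planet):
    """spectra: (n_exp, n_pix) normalized spectra in the stellar rest frame;
    v_planet: planet radial velocity per exposure (km/s); in_transit: bool mask."""
    out = ~in_transit
    master_out = np.average(spectra[out], axis=0, weights=snr[out])
    resid = spectra / master_out - 1.0                 # remove the stellar contribution
    # Resample each residual onto the planet rest frame (sign convention illustrative).
    shifted = np.array([np.interp(wave * (1.0 + v / C), wave, r)
                        for v, r in zip(v_planet, resid)])
    return np.average(shifted[in_transit], axis=0, weights=snr[in_transit])
```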
Note that there is a wiggle pattern in the derived transmission spectrum, with an amplitude of \(\sim\)0.075% and a period of \(\sim\)30 Å, which might affect the final spectral signature analysis and needs to be corrected. These wiggles were first reported in Allart et al. (2020) and are possibly induced by an interference pattern caused by the Coudé train optics, as pointed
Figure 2: The RM effect fit using the GP+PyAstronomy package on the ESPRESSO RVs taken from the spectrum headers. _Top panel:_ The RV curve with Kepler motion and red noise removed, showing the RM effect of WASP-85Ab of the three transits represented by the red, green, and blue circles with error bars. The best-fit model using the PyAstronomy package is shown in the solid line. _Bottom panel:_ The residuals between the data and the model predictions.
out by Tabernero et al. (2021). To correct the wiggles, we remove a sinusoidal trend by fitting a set of cubic splines to the obtained transmission spectra. Fig. 3 shows an example of such correction.
The CLV and RM effects alter the stellar line profiles and may produce additional time-correlated signals in the transmission spectra. These two effects are both induced by the joint action of the planet occultation and the stellar rotation. We adopt the method presented in Yan & Henning (2018), Chen et al. (2020) and Casasayas-Barris et al. (2019) to model the stellar spectra at different transit positions, which considers both the RM and CLV effects. We use the Spectroscopy Made Easy tool (SME; Valenti & Piskunov 1996) to compute the theoretical stellar spectra at 21 different limb-darkening angles (\(\mu\)), using MARCS model atmospheres and the VALD3 line list (Ryabchikova et al., 2015). We employ solar abundances and local thermodynamic equilibrium (LTE) for the calculation of the stellar spectra.
The stellar disc is then divided into elements of size \(0.01\,R_{\star}\times 0.01\,R_{\star}\), each element having its own specific parameters, including \(v\sin i_{\star}\), \(\mu\), and \(\theta\), the angle between the normal to the element and the line of sight. The position of the planet relative to the stellar disk is calculated assuming a uniform velocity during the transit. Then, the synthetic spectrum during transit is calculated by integrating over all the surface elements that are not obscured by the planet (Yan & Henning, 2018), taking into account the obscured elements, the proper RV shift of each element, and the corresponding interpolated \(\mu\) spectrum.
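The disc-integration idea can be sketched as follows, assuming a function `spec_mu(mu, wave)` that returns the precomputed local spectrum at limb angle \(\mu\) (e.g. interpolated between the 21 SME spectra); the grid step follows the \(0.01\,R_{\star}\) element size quoted above, while units and sign conventions are illustrative.

```python
import numpy as np

C = 299792.458  # km/s

def in_transit_stellar_spectrum(wave, spec_mu, vsini, xp, yp, rp, step=0.01):
    """Integrate the unobscured stellar disc; (xp, yp, rp) give the planet position
    and radius in units of the stellar radius, vsini in km/s."""
    grid = np.arange(-1.0 + step / 2, 1.0, step)
    total = np.zeros_like(wave)
    for x in grid:
        for y in grid:
            r2 = x * x + y * y
            if r2 >= 1.0:                                   # outside the stellar disc
                continue
            if (x - xp) ** 2 + (y - yp) ** 2 <= rp ** 2:    # hidden by the planet
                continue
            mu = np.sqrt(1.0 - r2)
            rv = x * vsini                                  # solid-body rotation of this element
            local = spec_mu(mu, wave)
            total += np.interp(wave, wave * (1.0 + rv / C), local)  # Doppler-shifted element
    return total
```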
The next step is to simulate the CLV and RM effects and correct for them. We construct the master out-of-transit spectrum of the synthetic spectra and divide each synthetic spectrum by it; we then shift each in-transit residual of the synthetic spectra to the planet rest frame and calculate the sum of all in-transit residuals, which contains only the CLV and RM effects. The simulated spectra obtained in this way are re-scaled to match the observed spectra and are subtracted to eliminate the influence of the CLV and RM effects. Fig. 4 shows the CLV and RM effects in the residual spectra around the Na D1 and D2 lines.
### The strong atomic lines
We then performed a visual inspection of the phase-resolved transmission spectra around each known strong feature, which allows us to confirm or reject the presence of the atoms of interest (Tabernero et al., 2021). An integrated SNR-weighted transmission spectrum is obtained and fitted with a Gaussian function to determine the center, depth, and width of each line. Fig. 5 depicts the 2D phase-resolved transmission spectra and the integrated spectrum around H\(\alpha\), showing a clear detection of H absorption from WASP-85Ab's atmosphere. Fig. 6 shows the same plots for the other atomic lines, including the Na i, Mg i, Li i, H\(\alpha\), H\(\beta\), Ca ii H and K lines. The
\begin{table}
\begin{tabular}{l c c c} \hline \hline Description & Symbol & Prior & Posterior \\ \hline Projected spin-orbit angle & \(\lambda\) & U(-50, 50) & \(-16.155^{+2.916}_{-2.879}\) deg \\ Linear limb dark coefficient & \(\epsilon\) & U(0.5, 1) & \(0.855^{+0.014}_{-0.014}\) \\ Projected stellar rotation velocity & \(\Omega\) & U(3e\({}^{-6}\), 1e\({}^{-5}\)) & \(0.00000468^{+1.57e}_{-8.39e}\) rad s\({}^{-1}\) \\ Inclination of stellar rotation axis & \(i_{\star}\) & U(70, 110) & \(91.6947^{+1.2768}_{-14.7692}\) deg \\ \hline \end{tabular} 1
\end{table}
Table 4: Parameters derived from the RV curve fitting.
Figure 4: The simulated CLV and RM effects overplotted in red on the transmission spectrum. The transmission spectrum is shown in gray, and the spectrum binned by 0.1Å is shown in black.
Figure 3: An example showing the wiggle pattern in one transmission spectrum taken in T2 in blue, the fitted model in red, and the wiggle-corrected spectrum in green.
Gaussian fit results are listed in Table 6. In addition, Table 6 also lists the observed Doppler shift of the line center (\(V_{wind}\)), which traces the planetary winds towards the observer from the morning hemisphere to the evening hemisphere, and effective wavelength-dependent planet radius \(R_{\lambda}\), which can be derived with \(\sqrt{1+h/\delta}\)\(R_{p}\), where \(h\) is the line depth corresponding to different atoms, and \(\delta\) is the transit depth of the planet.
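As a quick consistency check of this relation, taking the H\(\alpha\) line depth \(h\approx 0.341\%\) from Table 6 and an assumed transit depth \(\delta\approx 1.87\%\),

\[R_{\lambda}=\sqrt{1+h/\delta}\,R_{\rm p}=\sqrt{1+0.00341/0.0187}\,R_{\rm p}\approx 1.09\,R_{\rm p},\]

consistent with the \(R_{\lambda}\) listed for H\(\alpha\) in Table 6.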
As shown in Table 6 and Fig. 5, H\(\alpha\) absorption is marginally visible in the 2D spectrum (the left panel) and is clearly detected in the integrated spectrum (the right panel), with a depth of \(\sim\)0.625 \(\pm\) 0.06 %, a full width at half maximum (FWHM) of \(\sim\)23.806 \(\pm\) 2.50 km s\({}^{-1}\), and an \(R_{\lambda}\) of 1.028 \(\pm\) 0.003 \(R_{\rm p}\). Clear absorption is detected for the Ca ii doublet in the integrated transmission spectra (cf. the two bottom panels of Fig. 11). The FWHM and \(R_{\lambda}\) values are similar to those of H\(\alpha\). However, the absorption is not visible in the H line 2D spectrum and is only marginally noticeable for the K line. The derived \(V_{wind}\) values for the H & K lines are \(\sim-4.805\) and 5.974 km s\({}^{-1}\), respectively; this obvious inconsistency may arise from the T1 and T2 spectra, which suffer large uncertainties due to stellar activity and/or the companion's flux contamination. As shown in the top two rows of Fig. 11, while the H & K lines in T3 are both close to the expected line centers, those in T1 and T2 obviously deviate from the line centers, being either blue-shifted or red-shifted. There seems to be a 3\(\sigma\) detection for Li i and no clear detection for the other species, including Na i, Mg i, and H\(\beta\). For those with poor Gaussian fits, i.e., Mg i and H\(\beta\), upper limits of the absorption depths are estimated and listed in Table 6. As shown in Fig. 11, the T3 spectrum provides the most significant signal for all the species, while the T1 spectrum provides the least information. This is reasonable according to the discussion of the observing conditions in Section 2.
Note that the Na i D2 absorption line in T1 is abnormal, bearing huge error bars as shown in Fig. 11. Through visual inspection of every raw 2D image, we find that the Na i D2 line in nearly half of the T1 spectra is contaminated by two bright spikes, while the corresponding 1D spectra without sky subtraction show strong emission at 589 nm. Examples are given in Fig. 12. These spikes appear simultaneously in both the target and sky fibers, and some of them are saturated, leading to close-to-zero counts and thus large uncertainties after sky subtraction. Given that these bright spikes are only present around Na i D2 at 589 nm, it is quite likely that they arise from laser signal leaked from the VLT laser guide star.
In order to further confirm the existence of these signals, we perform Empirical Monte Carlo (EMC) simulations (Redfield et al., 2008) to estimate the effect of systematics, a technique that has been widely used previously (Wyttenbach et al., 2015; Casasayas-Barris et al., 2019; Seidel et al., 2019; Allart et al., 2020). The idea of this method is to artificially create new data sets by randomizing the original data and to verify whether the investigated signals still exist. We explore three scenarios, in-in, in-out and out-out, for each strong line, with 10000 iterations for the \(\sim\)1.5 Å passband. The results are shown in Fig. 12. The in-out distributions of Li i, H\(\alpha\), Ca ii H and Ca ii K exhibit excess absorption, while the absorption depths in the in-in and out-out scenarios center on zero. The EMC simulations suggest that the signals of Li i, H\(\alpha\), Ca ii H & K are likely to be created by the transits and may originate from the planet. In addition, we also mask the line cores with a width of 0.1 Å, where the SNRs are low and RM+CLV residuals may still exist. These results are shown in Fig. 12 and the fitted results are listed in Table 6.
### Cross-correlation function analysis
In addition to the direct measurements of strong lines, we employ a CCF analysis to explore the presence of species or to enhance their detection. The CCF method extracts the combined signal of the many lines of a given species by cross-correlating a template spectrum containing only that species with the observed transmission spectra. This technique is widely used in high-resolution transmission spectroscopic studies of exoplanets, as it fully utilizes the spectral signals from the multiple lines of a species to maximize its detectability. In this work, the species to be explored include Na i, K i, Ca i, Cr i, Fe i, Li i, Mg i, Ti i, V i and the molecules TiO and VO, and the spectrum templates are computed with the petitRADTRANS code (Molliere et al., 2019). As input parameters, we assume a radius of 1.24\(R_{\rm J}\) for WASP-85Ab (Mocnik et al., 2016) and a planet mass of 1.265\(M_{\rm J}\), and thus \(\log g_{\rm p}=3.329\) cgs. We apply an isothermal temperature of 1500 K and assume solar abundances for the determination of the volume mixing ratios (VMR) of the various species. The template spectrum generated by petitRADTRANS is convolved to match the resolution of ESPRESSO. We set the radial velocity range to \(\pm\)100 km s\({}^{-1}\) with a step of 0.5 km s\({}^{-1}\), and calculate the CCF of each residual spectrum with the model spectrum. The cross-correlation coefficients are calculated with the following formula:
\[c(v,t)=\frac{\sum_{i=1}^{N}x_{i}(t)T_{i}(v)}{\sum_{i=1}^{N}T_{i}(v)}, \tag{3}\]
where \(T_{i}(v)\) is the template shifted to a radial velocity \(v\), \(x_{i}(t)\) is the transmission spectrum at time \(t\), and \(c(v,t)\) is a two-dimensional matrix as a function of \(t\) and \(v\). If the investigated species exists, the signal will appear at the position of the estimated orbital velocity \(K_{\rm p}\) and systemic velocity \(V_{\rm sys}\) (Snellen et al., 2010; Hoeijmakers et al., 2019).
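The sketch below illustrates both Eq. (3) and the construction of the \(K_{\rm p}-\Delta V_{\rm sys}\) map by shifting each in-transit CCF to the planet rest frame for a grid of trial \(K_{\rm p}\) values; the inputs (residual spectra, template handling, interpolation) are illustrative rather than the exact implementation used here.

```python
import numpy as np

C = 299792.458
v_grid = np.arange(-100.0, 100.5, 0.5)                # km/s, as in the text

def ccf_map(wave, resid, template):
    """resid: (n_exp, n_pix) residual spectra; template: model spectrum on `wave`."""
    ccf = np.zeros((resid.shape[0], v_grid.size))
    for j, v in enumerate(v_grid):
        T = np.interp(wave, wave * (1.0 + v / C), template)   # template shifted to v
        ccf[:, j] = resid @ T / T.sum()                       # Eq. (3) for every exposure
    return ccf

def kp_vsys_map(ccf_in, phases, kp_grid):
    """Shift each in-transit CCF by the planet RV for each trial Kp and co-add."""
    out = np.zeros((kp_grid.size, v_grid.size))
    for k, kp in enumerate(kp_grid):
        vp = kp * np.sin(2.0 * np.pi * phases)                # planet RV at each phase
        for n in range(ccf_in.shape[0]):
            out[k] += np.interp(v_grid + vp[n], v_grid, ccf_in[n])
    return out
```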
The deformation of the stellar line profile caused by the CLV and RM effects can also mask the planetary absorption signal in the cross-correlation maps, which contain the RM+CLV and planetary atmosphere signals together. It is necessary to correct for these effects by modeling the stellar lines at each phase. As described above, we used SME and the VALD3 line list to compute theoretical stellar spectra that take into account the CLV and RM effects. The simulated stellar spectra are then cross-correlated with the template spectra of a given species, and the resulting 2D CCF map can be used as a proxy for the influence of the RM+CLV effects and is subtracted from the CCF map of the observed transmission spectra. The CCF map of the CLV+RM model is shown in Fig. 13.
The obtained 2D CCF maps with the CLV+RM effects uncorrected and corrected for Li i, Na i, Ca i, K i, Cr i, and Fe i are shown in the first and second panels of Fig. 6, where the inclined white dashed line indicates the expected track of the planet signal. The third panels show the \(K_{\rm p}-\Delta V_{\rm sys}\) map, in which the planet signal, if it exists, should appear at the intersection of the white dashed lines, i.e., with a velocity close to \(V_{\rm sys}\) and an RV semi-amplitude \(K_{\rm p}=159.76\pm 4.09\) km s\({}^{-1}\). The fourth panels show the SNR of the corresponding species at the expected \(K_{\rm p}\). From Fig. 6, only Li i is clearly detected, with a maximum SNR of \(\sim\) 4.5, where the peak position corresponds to a \(K_{\rm p}\) of \(\sim\) 134.8 \(\pm\) 88.43 km s\({}^{-1}\) and \(\Delta V_{\rm sys}\sim\) -4.6 \(\pm\) 11.29 km s\({}^{-1}\). Some remaining features in the 2D maps may be associated with stellar activity (Stangret et al., 2021) and propagated noise. The CCF maps and SNR plots for Mg i, Ti i, V i, Y i, TiO and VO are shown in Fig. 7, with no detection reported.
## 5 Discussion and conclusion
We observed three transits of the hot Jupiter WASP-85Ab using the ultra-stable high-resolution spectrograph ESPRESSO. A total of 127 spectra were obtained: 64 were taken during transit, while the remaining 63 were taken out of transit. Telluric contamination was corrected for each spectrum, from which the master out-of-transit spectra were created and the transmission spectra were generated for the three transits.
For the observed RVs of the three transits, we performed a joint RM analysis, which can provide a good estimate of the origin of the planet's angular momentum (Ohta et al. 2005). We retrieved a projected obliquity of \(\lambda=-16.155^{+2.916}_{-2.879}\), suggesting that the orbit of the hot Jupiter WASP-85Ab is almost aligned with its host star. The spectral signals induced by the CLV and RM effects were modeled and removed from the obtained transmission spectra and CCF maps. The residual transmission spectra were used to explore atomic and molecular lines that originate from WASP-85Ab's atmosphere, via direct inspection for strong lines or the CCF method for species with multiple lines. The species we explored include H i, Li i, Na i, Ca ii, K i, Mg i, Fe i, Cr i, TiO and VO.
Among them, direct inspection revealed a \(\sim 10\sigma\) absorption signature at H\(\alpha\), but it is not possible to confirm whether its origin is stellar, planetary, or otherwise. Meanwhile, the Ca ii H & K lines are also tentatively detected, with quite similar FWHM and consistent \(R_{\lambda}\). The potential spectral signals detected from H i and Ca ii may arise from the planetary atmosphere, but we cannot yet exclude the possibility that they are induced by stellar activity. Another species that may be detected is Li i, which has a 4.5\(\sigma\) significance at the estimated \(K_{\rm p}\) velocity in the \(K_{\rm p}-\Delta V_{\rm sys}\)
\begin{table}
\begin{tabular}{l c c c c} \hline \hline Line & \(\lambda\) & h & V\({}_{wind}\) & FWHM & R\({}_{\lambda}\) \\ & [nm] & [\%] & [km s\({}^{-1}\)] & [km s\({}^{-1}\)] & [\(R_{\rm p}\)] \\ \hline Ca ii K & 393.478 & 2.283 \(\pm\) 0.389 & 8.536 \(\pm\) 2.528 & 28.584\(\pm\) 6.144 & 1.512\(\pm\) 0.145 \\ Ca ii K (masked) & - & 1.769 \(\pm\) 0.372 & 8.952 \(\pm\) 3.359 & 30.705\(\pm\) 8.180 & 1.413\(\pm\) 0.148 \\ Ca ii H & 396.959 & 2.410 \(\pm\) 0.476 & \(-\)4.201 \(\pm\) 2.378 & 23.646 \(\pm\) 5.713 & 1.535 \(\pm\) 0.175 \\ Ca ii H (masked) & - & 1.729 \(\pm\) 0.497 & \(-\)3.495 \(\pm\) 3.105 & 21.294 \(\pm\) 7.458 & 1.405\(\pm\) 0.199 \\ Mg i & 457.238 & \(\leq\) 0.017 \(\pm\) 0.035 & - & - & \(\leq\) 1.005 \(\pm\) 0.01 \\ H\(\beta\) & 486.271 & \(\leq\) 0.050 \(\pm\) 0.010 & - & - & \(\leq\) 1.014 \(\pm\) 0.006 \\ Na i D2\({}^{2}\) & 589.158 & \(\leq\) 0.069 \(\pm\) 0.073 & - & - & \(\leq\) 1.019 \(\pm\) 0.040 \\ Na i D1 & 589.756 & \(\leq\) 0.067 \(\pm\) 0.077 & - & - & \(\leq\) 1.018 \(\pm\) 0.043 \\ H\(\alpha\) & 656.461 & 0.341 \(\pm\) 0.120 & \(-\)1.786 \(\pm\) 3.098 & 18.890 \(\pm\) 7.109 & 1.091 \(\pm\) 0.060 \\ H\(\alpha\) (masked) & - & 0.350 \(\pm\) 0.14 & \(-\)3.025 \(\pm\) 2.431 & 12.899 \(\pm\) 5.659 & 1.096 \(\pm\) 0.072 \\ Li i & 670.961 & 0.089\(\pm\) 0.033 & 4.223\(\pm\) 6.167 & 33.951\(\pm\) 14.425 & 1.025\(\pm\) 0.018 \\ Li i (masked) & - & 0.080 \(\pm\) 0.031 & 5.554 \(\pm\) 6.514 & 38.307 \(\pm\) 15.243 & 1.025 \(\pm\) 0.017 \\ \hline \multicolumn{5}{l}{\({}^{1}\) The line center wavelength is in vacuum, the depth of absorption line \(h\), the Doppler shift of line center \(V_{\rm wind}\), line width (FWHM) and the effective planetary radius \(R_{\lambda}\)} \\ \multicolumn{5}{l}{\({}^{2}\) The Na i D2 measurement is obtained on the transmission spectra generated from only the T2 and T3 data.} \\ \end{tabular}
\end{table}
Table 6: The derived parameters of the atomic lines from the 3-night combined transmission spectrum\({}^{1}\)
Figure 5: The phase-resolved 2D transmission spectra based on the combination of three nights around the H\(\alpha\) line without and with RM+CLV correction applied in the left panel. The horizontal black-dashed lines indicate the beginning and end of the transit. The inclined black-dashed line presents the expected trace of signal from the exoplanet atmosphere. The in-transit part of this line is not plotted so as not to interfere with the visibility of the trace signal. The middle panel shows the 2D transmission spectra in the planet rest frame (PRF) assuming \(K_{\rm p}\)= 159.76 km s\({}^{-1}\), and the vertical black-dashed lines indicate the position of expected signal from the exoplanet atmosphere in the planet rest frame. The right panel shows the combined integrated transmission spectrum of H\(\alpha\). The grey line represents the transmission spectra in the planet rest frame which has been corrected for the CLV+RM effects, the black dotted line is the binned version with a bin size 0.1 Å, while the best Gaussian fit of the H\(\alpha\) line is shown in red. The dashed green vertical line represents the static position of H\(\alpha\) at vacuum wavelength.
Figure 6: The CCF results derived by cross-correlating the template spectra produced by petitRADTRANS with the observed transmission spectra. _First panels_: The 2D CCF maps of Li i, Na i, Ca i, K i, Cr i, and Fe i with CLV+RM effects uncorrected. The white dotted lines mark the beginning and ending position of the transit, and the inclined white lines indicate the expected trace of the signal from the planet. _Second panels_: Same as the first panels but with CLV+RM effects corrected. _Third panels_: The \(K_{p}\)-\(\Delta V_{\rm sys}\) maps in the range of \(-50\sim 350\)\({\rm\ km\ s^{-1}}\). The signal is expected to appear around the intersection of the two white dotted lines. _Fourth panels_: The SNR of the corresponding species at the expected \(K_{\rm p}\).
Figure 7: Same as Fig. 6, but for Mg i, Ti i, V i, Y i, TiO and VO.
map. We note there is an offset of \(21\pm 4.3\,\mathrm{km\,s^{-1}}\) between the estimated \(K_{\mathrm{p}}\) and the retrieved \(K_{\mathrm{p}}\). This discrepancy may be due to the large uncertainty of \(K_{\mathrm{p}}\), given that the Li CCF signal is quite extended in the \(K_{\mathrm{p}}\) direction.
We note that some structures are still visible in the 2D CCF maps, which may be RM+CLV residuals that have not been removed completely, or may be caused by the low SNRs in the stellar line cores. We thus mask out the line cores with a width of 0.1 Å for the tentatively detected species to check whether any signal is left. As shown in Fig. 11, the absorption features of H\(\alpha\), Ca ii H&K and Li i are still visible, although slightly shallower. Therefore, the tentative detections of these three species are unlikely to be caused by the RM+CLV effects or the low SNRs of the line cores.
The detection should not be affected by stellar activity, because there is no lithium line in the stellar spectrum. For the same reason, the continuum around the lithium lines possesses very high SNRs, resulting in a relatively strong lithium signal. Lithium was first reported in the atmosphere of WASP-127b by Chen et al. (2018) with low-resolution spectroscopy but was not confirmed at high resolution by Allart et al. (2020). Borsa et al. (2021) and Tabernero et al. (2021) presented the first detections of Li in the atmospheres of WASP-121b and WASP-76b, respectively, at high resolution. The detection of Li in an exoplanet atmosphere is not unexpected, as substellar objects with masses below \(\sim\)55 \(M_{\rm Jup}\) do not deplete this element during their lifetime (Chabrier and Baraffe, 2000; Baraffe et al., 2015). Such a detection, if confirmed, is an important step, as it can help in understanding planet formation history and lithium depletion in planet-hosting stars (e.g. Bouvier, 2008; Chen et al., 2018).
Following the formalism of Wyttenbach et al. (2015) and Allart et al. (2017), we estimate the 5\(\sigma\) upper limits of the excess absorption of the non-detected atoms and molecules, which are listed in Table 7. Only lines with amplitudes higher than 10 ppm are included in the template spectrum, and the corresponding numbers of lines employed are listed in Table 7 as well. Fig. 9 shows an example of how an upper limit is obtained by comparing the theoretical CCF computed from the template spectrum with the CCF measured from the observed transmission spectra.
It is somewhat expected that the spectral features seen in the transmission spectra are few and in general weak, given the relatively low \(T_{\rm eq}\) and small \(R_{\rm p}\), and thus the small Transmission Spectroscopy Metric (Kempton et al., 2018), and given that the star is active, with recurring starspots reported, which may induce relatively large noise in the derived spectra and 2D maps, although our measured S-index activity indicator of WASP-85A barely varies over the three nights. In addition, the lack and weakness of atomic and molecular features can be due to several technical factors. The first is the low SNR of the observed stellar spectrum near the cores of the strong lines, where the photon counts are close to zero. Another factor is the overlap of the CLV+RM effect with the planet radial-velocity track in the cross-correlation residual maps, making it difficult to disentangle the signal of an exoplanet atmosphere, as noted for example by Casasayas-Barris et al. (2022). More precise measurements of the spin-orbit angle should in principle better constrain the planet-occulting position, which will help eliminate the influence of the variation of the stellar line profile during transit.
We also note that the data sets of the three nights are not fully consistent with one another. The last night shows the most significant spectral signals, which is likely related to the combination of the lowest stellar activity, as indicated by the S-index, and the smaller impact of WASP-85B, as indicated by the seeing values. More high-precision observations are necessary in order to further constrain and understand the atmosphere of WASP-85Ab.
###### Acknowledgements.
We thank the anonymous reviewer for the constructive comments, Lauren Doyle for useful conversations, and Jens Hoeijmakers for many useful suggestions. This research is supported by the National Natural Science Foundation of China grants No. 11988101, 42075123, 42005098, 62127901, the National Key R&D Program of China No. 2019YFA0405102, the Strategic Priority Research Program of the Chinese Academy of Sciences, Grant No. XDA15072113, and the China Manned Space Project with No. CMS-CSST-2021-B12. ZM, YQS, OLOV are supported by the Chinese Academy of Sciences (CAS), through a grant to the CAS South America Center for Astronomy (CAS-SACA) in Santiago, Chile. HMC acknowledges support from a UKRI Future Leaders Fellowship (MR/S035214/1).
\begin{table}
\begin{tabular}{c c c} \hline \hline Species & Number of lines & Detection limits \\ \hline Mg i & 6 & 527.3 ppm \\ Li i & 8 & 323.9 ppm \\ Fe i & 34 & 341.8 ppm \\ Cr i & 45 & 264.1 ppm \\ K i & 53 & 364.2 ppm \\ Y i & 99 & 283.8 ppm \\ V i & 154 & 301.6 ppm \\ Ti i & 159 & 377.1 ppm \\ TiO & 7832 & 27.6 ppm \\ VO & 8280 & 22.3 ppm \\ \hline \end{tabular}
\end{table}
Table 7: Number of lines used in the CCF calculations and upper limits derived for the investigated species.
Figure 8: The CCF map of model spectra generated by SME for Na i on 2021 Feb 21, with only CLV and RM effects included, which is used to eliminate the influence of the variation of stellar line profile during transit.
Figure 9: Comparison between the measured CCF and theoretical CCF calculated by the template spectrum and observed data.
|
2309.01487
|
GenSelfDiff-HIS: Generative Self-Supervision Using Diffusion for
Histopathological Image Segmentation
|
Histopathological image segmentation is a laborious and time-intensive task,
often requiring analysis from experienced pathologists for accurate
examinations. To reduce this burden, supervised machine-learning approaches
have been adopted using large-scale annotated datasets for histopathological
image analysis. However, in several scenarios, the availability of large-scale
annotated data is a bottleneck while training such models. Self-supervised
learning (SSL) is an alternative paradigm that provides some respite by
constructing models utilizing only the unannotated data which is often
abundant. The basic idea of SSL is to train a network to perform one or many
pseudo or pretext tasks on unannotated data and use it subsequently as the
basis for a variety of downstream tasks. It is seen that the success of SSL
depends critically on the considered pretext task. While there have been many
efforts in designing pretext tasks for classification problems, there haven't
been many attempts on SSL for histopathological segmentation. Motivated by
this, we propose an SSL approach for segmenting histopathological images via
generative diffusion models in this paper. Our method is based on the
observation that diffusion models effectively solve an image-to-image
translation task akin to a segmentation task. Hence, we propose generative
diffusion as the pretext task for histopathological image segmentation. We also
propose a multi-loss function-based fine-tuning for the downstream task. We
validate our method using several metrics on two publically available datasets
along with a newly proposed head and neck (HN) cancer dataset containing
hematoxylin and eosin (H\&E) stained images along with annotations. Codes will
be made public at https://github.com/suhas-srinath/GenSelfDiff-HIS.
|
Vishnuvardhan Purma, Suhas Srinath, Seshan Srirangarajan, Aanchal Kakkar, Prathosh A. P
|
2023-09-04T09:49:24Z
|
http://arxiv.org/abs/2309.01487v2
|
GenSelfDiff-HIS: Generative Self-Supervision Using Diffusion for Histopathological Image Segmentation
###### Abstract
Histopathological image segmentation is a laborious and time-intensive task, often requiring analysis from experienced pathologists for accurate examinations. To reduce this burden, supervised machine-learning approaches have been adopted using large-scale annotated datasets for histopathological image analysis. However, in several scenarios, the availability of large-scale annotated data is a bottleneck while training such models. Self-supervised learning (SSL) is an alternative paradigm that provides some respite by constructing models utilizing only the unannotated data which is often abundant. The basic idea of SSL is to train a network to perform one or many pseudo or pretext tasks on unannotated data and use it subsequently as the basis for a variety of downstream tasks. It is seen that the success of SSL depends critically on the considered pretext task. While there have been many efforts in designing pretext tasks for classification problems, there haven't been many attempts on SSL for histopathological segmentation. Motivated by this, we propose an SSL approach for segmenting histopathological images via generative diffusion models in this paper. Our method is based on the observation that diffusion models effectively solve an image-to-image translation task akin to a segmentation task. Hence, we propose generative diffusion as the pretext task for histopathological image segmentation. We also propose a multi-loss function-based fine-tuning for the downstream task. We validate our method using several metrics on two publicly available datasets along with a newly proposed head and neck (HN) cancer dataset containing hematoxylin and eosin (H&E) stained images along with annotations. Codes will be made public at [https://github.com/PurmaVishnuVardhanReddy/GenSelfDiff-HIS.git](https://github.com/PurmaVishnuVardhanReddy/GenSelfDiff-HIS.git).
Contrastive Learning, Diffusion, H&E-stained Histopathological Images, Representation Learning, Self-Supervised Learning.
## I Introduction
Automated histopathological analysis has received a lot of attention owing to its utility in reducing time-intensive and laborious efforts of human pathologists [1, 2, 3]. Deep learning-based models are the ubiquitous choice for this purpose, which learn useful task-specific representations, enabling efficient mapping to the corresponding task labels [4]. Early methods [5, 6, 7] focused on learning representations in a fully supervised manner which demands substantial amounts of annotated data which is often difficult to obtain in histopathology. Moreover, biases and disagreements in annotations from experts lead to uncertainty in the ground truth labels themselves. This motivates the study of unsupervised learning, particularly self-supervised learning (SSL). SSL uses the information available from a large number of unannotated images to learn effective visual representations through designing pseudo or pretext tasks [8, 9]. These learned representations can then improve the performance of downstream tasks such as classification and segmentation with a limited number of labeled images, thereby reducing the amount of annotated data.
The pretext tasks for SSL methods can be broadly classified as predictive, contrastive, and generative (Sec. II-B). Most of the existing SSL approaches [10, 11, 12, 13, 14, 15, 16, 17, 18, 19] are based on predictive and contrastive pretext tasks. Some methods like [20] use a combination of predictive and contrastive tasks. However, generative pretext tasks of SSL have remained relatively unexplored, particularly in histopathology. Generative pretext tasks can potentially be more suitable for histopathological segmentation tasks since they are trained to model the entire image distribution, which is conducive to a downstream segmentation task. On the contrary, in predictive and contrastive SSL, the designed pretext tasks focus on learning the 'salient features' without necessarily learning the entire image distribution needed for segmentation tasks. This motivates us to explore the use of generative models for SSL of histopathological images.
Previous attempts on generative SSL [21, 22, 23, 24] utilize models such as variational autoencoders (VAEs) [25] and generative adversarial networks (GANs) [26]. However, GANs and VAEs suffer from several issues such as training instability, degraded image quality, and mode collapse [27]. Recently, denoising diffusion probabilistic models (DDPMs) [28] have emerged as powerful alternatives to GANs and VAEs in producing high-quality images [29], which motivates us to use them for the pretext task. Additionally, since DDPMs inherently solve an image-to-image translation task using a segmentation-like backbone, they make a natural choice for self-supervised pretraining of segmentation problems. While DDPMs have been explored for medical image segmentation [30, 31] for modalities such as magnetic resonance imaging (MRI), computed tomography (CT), and ultrasound imaging, the use of DDPMs for SSL and for histopathological images has hitherto been unexplored, to the best of our knowledge. Motivated by the aforementioned observations, we propose the use of DDPMs as pretext tasks in SSL for histopathological segmentation. Specifically, the contributions of our work can
be summarized as follows:
1. We propose a diffusion-based generative pre-training process for self-supervision to learn efficient histopathological image representations and use the underlying UNet architecture as the backbone for the downstream segmentation task.
2. We propose a new head and neck (HN) cancer dataset with H&E stained histopathological images along with corresponding segmentation mask annotations.
3. We show the efficacy of our method on three histopathology datasets on multiple evaluation metrics with improved performance over other SSL pre-training methods and pretext tasks.
## II Related Work
### _Histopathological Image Analysis_
Deep convolutional networks have shown remarkable performance in histopathology, particularly in the segmentation of Hematoxylin and Eosin (H&E) stained histopathological images. A comprehensive survey paper [32] delves into the methodological aspect of various machine learning strategies, including supervised, weakly-supervised, unsupervised, transfer learning, and their sub-variants within the context of histopathological image analysis. Komura _et al._[33] discuss diverse machine-learning methods for histopathological image analysis. Successful approaches such as U-Net-based networks [5, 34] have employed skip-connections between encoder and decoder parts to address the vanishing gradient problem similar to [35]. These skip connections facilitate the extraction of richer feature representations. Xu _et al._[36] propose a novel weakly-supervised learning method called multiple clustered instance learning (MCIL) for histopathological image segmentation. MCIL performs image-level classification, medical image segmentation, and patch-level clustering simultaneously. Another significant contribution is the fully convolutional network (FCN) based method [37], which introduced a deep contour-aware network specifically designed for histopathological gland segmentation. This method effectively tackles multiple critical issues in gland segmentation.
Liu _et al._[38] presented a unique weakly-supervised segmentation framework based on sparse patch annotation for tumor segmentation. Bokhorst _et al._[39] compared two approaches, namely instance-based balancing and mini-batch-based balancing, when dealing with sparse annotations. Their study demonstrated that employing a large number of sparse annotations along with a small fraction of dense annotations yields performance comparable to full supervision. Yan _et al._[40] proposed a multi-scale encoder network to extract pathology-specific features, enhancing the discriminative ability of the network. Yang _et al._[41] present a deep metric learning-based histopathological image retrieval method that incorporates a mixed attention mechanism and derives a semantically meaningful similarity metric. An introductory and detailed review of histopathology image analysis is available in [42]. A recent study [43] provided a comprehensive survey of weakly-supervised, semi-supervised, and self-supervised techniques in histopathological image analysis.
### _Self-Supervised Learning_
Self-Supervised Learning (SSL) is a paradigm aimed at learning visual feature representations from a large amount of unannotated data. The idea is to train a network to solve one of many pretext tasks (also known as pretraining) using the unlabelled data followed by a fine-tuning stage, where the model is further trained using a limited amount of annotated data for specific downstream tasks, such as nuclei and HN cancer histopathological image segmentation. This approach leverages the power of unsupervised learning to capture meaningful representations from unlabelled data and enhances the performance of supervised models in limited annotated data scenarios.
The success of self-supervised models relies heavily on the choice of pretext tasks during the pre-training stage. Examples of pretext tasks include cross-channel prediction [10], image context restoration [11], image rotation prediction [12], image colorization [13], image super-resolution [14], image inpainting [15], resolution sequence prediction (RSP) [16]. The quality of the learned visual features depends on the objective function of the pretext tasks and the pseudo labels generated from the available unannotated data. These pseudo-labels act as supervisory signals during the pre-training phase. Pretext tasks can be categorized into predictive tasks [16], generative tasks [21, 22, 23], contrasting tasks [17, 18, 19], or a combination of them [20]. Predictive SSL is based on predictive tasks that focus on predicting certain properties or transformations of the input data. Generative SSL is based on generative tasks that involve generating plausible outputs from corrupted or incomplete inputs. Finally, contrastive SSL is based on contrasting tasks that aim to learn invariant representations under different augmentations of the same image.
Jing _et al._[44] provide a detailed review of deep learning-based self-supervised general visual feature learning methods from images. Liu _et al._[45] reviewed comprehensively the existing empirical methods of self-supervised learning. Koohbanani _et al._[46] introduced novel pathology-specific self-supervision tasks that leverage contextual, multi-resolution, and semantic features in histopathological images for semi-supervised learning and domain adaptation. However, their study is limited to classification tasks.
### _Contrastive Learning for Medical Segmentation_
Contrastive learning methods aim to discriminate between instances and learn effective representations that capture essential characteristics [47, 48, 49]. These methods extract the information in unannotated images by treating each unannotated image as a positive pair with its counterpart supervisory signal obtained through some transformation and considering the supervisory signals of other unannotated images as negative pairs. Chaitanya _et al._[50] addressed the challenge of limited annotations in medical image segmentation through contrastive learning of global and local features on three magnetic resonance imaging (MRI) datasets. Chen _et al._[47] demonstrated that incorporating a learnable non-linear transformation between the representations and contrastive loss can enhance
the representation quality. However, they limited their study to natural images.
In histopathology, Ciga _et al._[51] applied self-supervised contrastive learning to large-scale studies involving \(57\) histopathological datasets. They observed improved performance across multiple downstream tasks like classification, regression, and segmentation through unannotated images.
A recent study [52] used the approach of cross-stain prediction and contrastive learning (CS-CO), which integrates the advantages of both predictive and contrastive SSL. Xu _et al._[18] proposed a self-supervised deformation representation learning (DRL) approach, which uses elastically deformed images as the supervisory signals in the pretraining. This approach is based on maximizing the mutual information between the input images and the generated representations. In a recent study, Stacke _et al._[53] demonstrated the potential of contrastive self-supervised learning for histopathology applications in learning effective visual representations.
Our method differs from the existing literature in that it leverages the potential of unannotated images through a generative self-supervision using diffusion as the pretext task.
## III Methodology
### _Problem Formulation_
Let \(\mathcal{S}_{\text{pr}}\) denote the set of unlabelled images used for SSL pre-training. Subsequently, a small set of labeled images, denoted by \(\mathcal{S}_{\text{tr}}\) are used in the supervised segmentation task. Specifically, \(\mathcal{S}_{\text{tr}}\) consists of both labeled images \(\mathbf{x}\in\mathcal{X}\) and the corresponding masks \(\mathbf{y}\in\mathcal{Y}\), where \(\mathcal{X}\) and \(\mathcal{Y}\) respectively denote the image and label spaces. Finally, the performance of the model is evaluated on a test set \(\mathcal{S}_{\text{te}}\). In our formulation, we assume \(\mathcal{S}_{\text{pr}}\cap\mathcal{S}_{\text{tr}}=\phi\) (i.e., the unlabeled and labeled images are mutually exclusive). In the SSL stage, the task is to learn a representation function \(f_{\theta}:\mathcal{S}_{\text{pr}}\rightarrow\mathcal{Z}\) (i.e., \(f_{\theta}(\mathbf{x})=\mathbf{z}\)), from the image space to latent space \(\mathcal{Z}\), that would be effective during the downstream tasks. In this work, we propose to learn \(f_{\theta}\) via a generative diffusion process described next.
### _Self-Supervision using Diffusion_
Let \(\mathbf{x}_{0}\in\mathcal{S}_{\text{pr}}\) be the original input image and \(\mathbf{x}_{t}\) be the corresponding noisy image obtained at time step \(t=1,2,\dots,T\). Each \(\mathbf{x}_{t}\) is obtained via \(\mathbf{x}_{t-1}\) according to the following diffusion process:
\[\mathbf{x}_{t}=\sqrt{1-\beta_{t}}\mathbf{x}_{t-1}+\sqrt{\beta_{t}}\epsilon_{ t}, \tag{1}\]
where \(\beta_{t}\) is the noise schedule parameter at time \(t\), and \(\epsilon\sim\mathcal{N}(\mathbf{0},\mathbf{I})\), \(\forall t=1,2,\dots,T\). This model imposes a set of encoding distributions \(q(\mathbf{x}_{t}|\mathbf{x}_{t-1})\), which are assumed to be following a first-order Gaussian Markov process. In DDPMs, the encoding or the forward process is assumed to be fixed while the reverse or the decoding process is modeled using a parametric family of distributions denoted by \(p_{\theta}(\mathbf{x}_{t-1}|\mathbf{x}_{t})\). The objective of the DDPM models is to estimate the parameters of \(p_{\theta}\), which is accomplished by optimizing a variational lower bound on the log-likelihood of the data \(\mathbf{x}_{0}\) under the model \(p_{\theta}\).
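For concreteness, the forward process of Eq. (1) can be sampled in closed form using \(\bar{\alpha}_{t}=\prod_{s\leq t}\alpha_{s}\); the PyTorch sketch below is illustrative, and the linear schedule is an assumption since the text does not specify one.

```python
import torch

def make_schedule(T, beta_start=1e-4, beta_end=0.02):
    """Linear noise schedule: betas, alphas and their cumulative products."""
    betas = torch.linspace(beta_start, beta_end, T)
    alphas = 1.0 - betas
    alpha_bars = torch.cumprod(alphas, dim=0)      # \bar{\alpha}_t
    return betas, alphas, alpha_bars

def q_sample(x0, t, alpha_bars):
    """Sample x_t directly from x_0 (the closed form obtained by iterating Eq. (1))."""
    noise = torch.randn_like(x0)
    a_bar = alpha_bars[t].view(-1, 1, 1, 1)        # one \bar{\alpha}_t per batch element
    xt = torch.sqrt(a_bar) * x0 + torch.sqrt(1.0 - a_bar) * noise
    return xt, noise
```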
The variational lower bound is shown to take the following form in [28, 54]:
\[log\ p_{\theta}(\mathbf{x}_{0}) \geq\mathop{\mathbb{E}}_{q(\mathbf{x}_{1}|\mathbf{x}_{0})}\bigl{[} log\ p_{\theta}(\mathbf{x}_{0}|\mathbf{x}_{1})\bigr{]}-\mathbb{D}_{KL} \bigl{[}q(\mathbf{x}_{T}|\mathbf{x}_{0})||p(\mathbf{x}_{T})\bigr{]}\] \[-\sum_{t=2}^{T}\mathbb{D}_{KL}\bigl{[}q(\mathbf{x}_{t-1}|\mathbf{ x}_{t},\mathbf{x}_{0})||p_{\theta}(\mathbf{x}_{t-1}|\mathbf{x}_{t})\bigr{]}, \tag{2}\]
where \(q(\mathbf{x}_{t-1}|\mathbf{x}_{t},\mathbf{x}_{0})\sim\mathcal{N}\Bigl{(} \mathbf{x}_{t-1};\mu_{q}(\mathbf{x}_{t},\mathbf{x}_{0}),\sigma_{q}^{2}(t) \mathbf{I}\Bigr{)}\), \(p_{\theta}(\mathbf{x}_{t-1}|\mathbf{x}_{t})\sim\mathcal{N}\Bigl{(}\mathbf{x}_ {t-1};\mu_{\theta},\sigma_{q}^{2}(t)\mathbf{I}\Bigr{)}\),
\[\mu_{q}(\mathbf{x}_{t},\mathbf{x}_{0}) =\frac{\sqrt{\alpha_{t}}(1-\bar{\alpha}_{t-1})\mathbf{x}_{t}+ \sqrt{\bar{\alpha}_{t-1}}(1-\alpha_{t})\mathbf{x}_{0}}{1-\bar{\alpha}_{t}},\] \[\text{and }\sigma_{q}^{2}(t) =\frac{(1-\alpha_{t})(1-\bar{\alpha}_{t-1})}{1-\bar{\alpha}_{t}}.\]
\(\mu_{\theta}\) is the set of learnable model parameters. Equation (2) can be simplified by using the distributional forms (Gaussian) of \(q(\mathbf{x}_{t-1}|\mathbf{x}_{t},\mathbf{x}_{0})\) and \(p_{\theta}(\mathbf{x}_{t-1}|\mathbf{x}_{t})\) in \(\mathbb{D}_{KL}\bigl{[}q(\mathbf{x}_{t-1}|\mathbf{x}_{t},\mathbf{x}_{0})||p_{ \theta}(\mathbf{x}_{t-1}|\mathbf{x}_{t})\bigr{]}\), as follows:
\[log\ p_{\theta}(\mathbf{x}_{0})\geq\mathop{\mathbb{E}}_{q(\mathbf{x}_{1}|\mathbf{x}_{0})}\bigl{[}log\ p_{\theta}(\mathbf{x}_{0}|\mathbf{x}_{1})\bigr{]}-\mathbb{D}_{KL}\bigl{[}q(\mathbf{x}_{T}|\mathbf{x}_{0})||p(\mathbf{x}_{T})\bigr{]}-\sum_{t=2}^{T}\frac{\|\mu_{q}(\mathbf{x}_{t},\mathbf{x}_{0})-\mu_{\theta}\|^{2}}{2\sigma_{q}^{2}(t)} \tag{3}\]
While the above formulation is sufficient for model learning, it has been found that an alternative noise-based reparameterization yields better performance [28]. Specifically, the third term in Equation (3) can be re-parameterized [28] in terms of the 'real' (\(\epsilon\)) and predicted (\(\epsilon_{\theta}\)) noise parameters as follows:
\[log\ p_{\theta}(\mathbf{x}_{0})\geq\mathop{\mathbb{E}}_{q(\mathbf{x}_{1}|\mathbf{x}_{0})}\bigl{[}log\ p_{\theta}(\mathbf{x}_{0}|\mathbf{x}_{1})\bigr{]}-\mathbb{D}_{KL}\bigl{[}q(\mathbf{x}_{T}|\mathbf{x}_{0})||p(\mathbf{x}_{T})\bigr{]}-\sum_{t=2}^{T}\frac{(1-\alpha_{t})}{2\alpha_{t}(1-\bar{\alpha}_{t-1})}\|\epsilon-\epsilon_{\theta}(\mathbf{x}_{t},t)\|^{2}. \tag{4}\]
A DDPM is optimized using the aforementioned formulation. Specifically, the first term in Eq. (4), is a reconstruction term, similar to the one obtained in vanilla VAE [25]. The second term is the prior matching term which represents the closeness of the noisy distribution \(p(\mathbf{x}_{T})\) with standard normal distribution. However, it is independent of network parameters \(\boldsymbol{\theta}\) and hence can be neglected in the optimization. The third term is the denoising matching term which represents the closeness of desired denoising transition step \(p_{\theta}(\mathbf{x}_{t-1}|\mathbf{x}_{t})\) with the ground truth denoising transition step \(q(\mathbf{x}_{t-1}|\mathbf{x}_{t},\mathbf{x}_{0})\). Therefore, the resultant loss for training a DDPM is given by:
\[L_{\text{vlb}} =\mathop{\mathbb{E}}_{\mathbf{x}_{0},\epsilon}\Bigl{[}\frac{(1-\alpha_{t})}{2\alpha_{t}(1-\bar{\alpha}_{t-1})}\|\epsilon-\epsilon_{\theta}(\mathbf{x}_{t},t)\|^{2}\Bigr{]}. \tag{5}\]
In practice, the weight factor in Eq. (5) is discarded to obtain the following simplified loss function:
\[L_{\text{simple}} =\sum_{t\geq 1}L_{t}\] \[L_{t} =\mathop{\mathbb{E}}_{\mathbf{x}_{0},\epsilon}\Bigl{[}\|\epsilon- \epsilon_{\theta}(\mathbf{x}_{t},t)\|^{2}\Bigr{]} \tag{6}\]
Since the time-dependent weighting is discarded, the objective in (6) focuses on more difficult denoising tasks at larger \(t\).
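A single pre-training step with the simplified objective of Eq. (6) can be sketched as follows, reusing `q_sample` from the sketch above; the call signature `unet(xt, t)` for the noise-prediction network is an assumption.

```python
import torch
import torch.nn.functional as F

def ddpm_step(unet, x0, alpha_bars, T, optimizer):
    """One self-supervised pre-training step with L_simple (Eq. (6))."""
    t = torch.randint(0, T, (x0.shape[0],), device=x0.device)  # uniform time steps
    xt, noise = q_sample(x0, t, alpha_bars)                     # forward (noising) process
    pred_noise = unet(xt, t)                                    # epsilon_theta(x_t, t)
    loss = F.mse_loss(pred_noise, noise)                        # ||eps - eps_theta||^2
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```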
Our focus is mainly on learning effective visual representations using diffusion in a self-supervised manner. This can be achieved by focusing more on the content and avoiding insignificant or trivial details of the image. But, the simple diffusion loss or variational bound in (6) does not guarantee much because of uniform weighting for all the time steps. Choi _et al._[55] proposed a perception prioritized (P2) weighting and demonstrated that P2 weighting provides a good inductive bias for learning rich visual concepts by boosting weights at the coarse and the content stage and suppressing the weights at the clean-up stage (say high SNR or initial time steps). The variational bound with the P2 weighting scheme is
\[L_{\mathbf{p}2} =\sum_{t\geq 1}L_{t},\] \[L_{t} =\operatorname*{\mathbb{E}}_{\mathbf{x}_{0},\epsilon}\Bigl{[} \frac{1}{(k+\text{SNR}(t))^{\gamma}}\|\epsilon-\epsilon_{\boldsymbol{\theta}}( \mathbf{x}_{t},t)\|^{2}\Bigr{]}, \tag{7}\]
where \(\gamma\) is a hyperparameter that controls the strength of down-weighting focus on learning imperceptible details (high SNR). Here, \(k\) is also a hyperparameter that prevents exploding weights for extremely small SNRs and determines the sharpness of the weighting scheme. SNR of the noisy sample \(\mathbf{x}_{t}\) is obtained by taking the ratio of the squares of coefficients of \(\mathbf{x}_{0}\) and \(\epsilon\), corresponding to signal and noise variances respectively. i.e. \(\text{SNR}(t)=\frac{\hat{\alpha}_{t}}{1-\hat{\alpha}_{t}}\).
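The P2 weights of Eq. (7) depend on \(t\) only through \(\mathrm{SNR}(t)\) and can be precomputed from the noise schedule; the default \(k=\gamma=1\) below follows a common choice in [55] and is an assumption here.

```python
def p2_weights(alpha_bars, k=1.0, gamma=1.0):
    """Per-time-step weights of Eq. (7): 1 / (k + SNR(t))^gamma,
    with SNR(t) = alpha_bar_t / (1 - alpha_bar_t)."""
    snr = alpha_bars / (1.0 - alpha_bars)
    return 1.0 / (k + snr) ** gamma

# In the training step above, the plain MSE is replaced by the weighted version:
#   w = p2_weights(alpha_bars)[t].view(-1, 1, 1, 1)
#   loss = (w * (pred_noise - noise) ** 2).mean()
```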
We finally use the P2 weighting loss (7) for the DDPM training (Hyper-parameter details described later). It is to be noted that, architecturally, DDPM solves a regression task using a UNet-like base network, which takes the noisy image (along with the time embedding) as the input to predict the noise content as shown in Fig. 1(a). We use all the available unannotated images from the set \(\mathcal{S}_{\text{pr}}\) to train the network in a self-supervised manner to obtain a rich set of representations \(f_{\theta}\) for the input data distribution. Note that our objective of training a DDPM is not data generation but self-supervised representation learning.
Post-training, we propose to use the exact same base UNet (obtained via DDPM training) to fine-tune for the downstream H&E stained histopathological image segmentation task (Fig. 1(b)). The timestamp is added as an embedding layer with \(t\in\{0\}\) since the downstream task does not involve or require noisy predictions. In other words, \(\mathbf{x}_{0}\) is enough without any need for the noisy versions \(\mathbf{x}_{1},\mathbf{x}_{2},\ldots,\mathbf{x}_{T}\). The segmentation network is then trained using a multi-loss function proposed in the next section.
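At fine-tuning time the backbone therefore sees only clean images and a constant time step; a minimal sketch is given below, where the re-initialised output head producing per-class maps is assumed (as described in Sec. IV-C).

```python
import torch

def segmentation_logits(unet, x0):
    """Downstream use of the pre-trained backbone: the clean image is fed with
    t = 0, so the time embedding is effectively constant; the final layers are
    re-initialised to output per-class segmentation maps."""
    t = torch.zeros(x0.shape[0], dtype=torch.long, device=x0.device)
    return unet(x0, t)
```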
### _Segmentation via Multi-Loss Formulation_
In histopathology, segmentation is majorly based on the structural aspects of the underlying images. Further, on many occasions, class imbalance is also inevitable because of patch-based analysis, which often introduces class dominance. Hence we propose a multi-loss function which is a combination of structural similarity (SS) loss [56] and focal loss (FL), which simultaneously caters to preserving structural importance and mitigating class imbalance.
#### III-C1 Structural Similarity Loss
Structural similarity (SS) loss is designed to achieve a high positive linear correlation between the ground truth and the predicted segmentation masks. Zhao _et al._[56] proposed it as a reweighted version of cross-entropy loss. However, we modify it as the weighted absolute error between ground truth and predicted segmentation masks with the weights being the cross-entropy loss as
\[\mathcal{L}_{SS}(y_{nc},\hat{y}_{nc}) =\mathcal{L}_{CE}\cdot f_{nc}\cdot e_{nc},\] \[e_{nc} =\Bigl{|}\frac{y_{nc}-\mu_{y_{nc}}+C_{1}}{\sigma_{y_{nc}}+C_{1}}-\frac{\hat{y}_{nc}-\mu_{\hat{y}_{nc}}+C_{1}}{\sigma_{\hat{y}_{nc}}+C_{1}}\Bigr{|},\] \[f_{nc} =\mathbf{1}_{\{e_{nc}>\beta e_{\text{max}}\}},\text{ and}\] \[\mathcal{L}_{CE} =-\frac{1}{N}\sum_{n=0}^{N-1}\sum_{c=0}^{C-1}y_{nc}\ log\ \hat{y}_{nc}, \tag{8}\]
where \(\mu_{y_{nc}}\) and \(\sigma_{y_{nc}}\) are the mean and standard deviation of the ground truth \(y_{nc}\) respectively. \(n\) and \(c\) correspond to batch \(N\) and channels \(C\) respectively. \(\hat{y}\) is the predicted segmentation mask and \(C_{1}=0.01\) is an empirically set stability factor. The absolute error \(e_{nc}\) measures the degree of linear correlation between two image patches. \(e_{\text{max}}\) is the maximum value of \(e_{nc}\), \(\beta\in[0,1)\) is a weight factor with \(\beta=0.1\) in practice, \(\mathbf{1}_{\{.\}}\) is the indicator function, and \(\mathcal{L}_{CE}\) is the cross-entropy loss. The structural similarity loss is expressed as
\[\mathcal{L}_{SS}=\frac{1}{M}\sum_{n=0}^{N-1}\sum_{c=0}^{C-1}\mathcal{L}_{SS}(y_ {nc},\hat{y}_{nc}) \tag{9}\]
where \(M=\sum_{n=0}^{N-1}\sum_{c=0}^{C-1}f_{nc}\) is the number of hard examples. \(f_{nc}\) is used to consider the pixels with the significant absolute error between the predicted and ground truth segmentation masks for every class, and \(\mathcal{L}_{CE}\) adds weighting to those pixels. Here, \(\mathcal{L}_{CE}\) is the dynamic weighting factor varying over the iterations based on the prediction. We empirically observe the effect of this modified structural similarity loss in boosting the segmentation performance in Sec. IV-F2.
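A possible PyTorch reading of Eqs. (8)-(9) is sketched below for one-hot masks and softmax predictions of shape (N, C, H, W); whether \(e_{\text{max}}\) is taken per image-class pair or globally is not fully specified in the text, and the per-class spatial maximum used here is an assumption.

```python
import torch

def ss_loss(y, y_hat, beta=0.1, C1=0.01, eps=1e-7):
    """Structural similarity loss of Eqs. (8)-(9)."""
    dims = (2, 3)                                            # per-image, per-class statistics
    mu_y, sd_y = y.mean(dims, keepdim=True), y.std(dims, keepdim=True)
    mu_p, sd_p = y_hat.mean(dims, keepdim=True), y_hat.std(dims, keepdim=True)
    e = torch.abs((y - mu_y + C1) / (sd_y + C1) - (y_hat - mu_p + C1) / (sd_p + C1))
    f = (e > beta * e.amax(dims, keepdim=True)).float()      # indicator of hard pixels
    ce = -(y * torch.log(y_hat + eps)).sum() / y.shape[0]    # scalar dynamic weight L_CE
    m = f.sum().clamp(min=1.0)                               # M = number of hard examples
    return (ce * f * e).sum() / m
```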
Fig. 1: An overview of the proposed framework. (a) Self-supervised pre-training using diffusion: The UNet model (encoder-decoder) takes the corrupted version \(\mathbf{x}_{t}\) of the image \(\mathbf{x}_{0}\) and the corresponding time embedding \(t_{e}\) as the input to predict the noise that takes \(\mathbf{x}_{0}\) to \(\mathbf{x}_{t}\), using P2 weighted [55] loss function. \(f(\cdot)\) denotes the function that recovers \(\mathbf{x}_{t-1}\) from \(\mathbf{x}_{t}\). (b) Downstream segmentation: The self-supervised pre-trained UNet is fine-tuned end-to-end in a supervised manner to predict the segmentation masks.
#### III-C2 Focal Loss
Focal loss is shown to perform well in the presence of imbalanced datasets [57], defined as:
\[\mathcal{L}_{FL}=-\frac{1}{N}\sum_{n=0}^{N-1}\sum_{c=0}^{C-1}(1-\hat{y}_{nc})^{ \gamma}y_{nc}log\hat{y}_{nc}, \tag{10}\]
where N denotes the batch size, C denotes the number of classes, and \(y_{nc}\), \(\hat{y}_{nc}\) are the ground truth and predicted values for any pixel corresponding to a class. \((1-\hat{y}_{nc})^{\gamma}\) acts as a weighting factor and takes care of the class imbalance. The value of \(\gamma\) is set to \(2.0\) empirically. The final loss function for supervised fine-tuning our method is a weighted combination of the structural similarity and the focal loss given by \(\mathcal{L}_{SSFL}=\mathcal{L}_{SS}+\lambda\mathcal{L}_{FL}\), where \(\lambda\) is a hyperparameter.
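The focal term and the combined objective can then be written as below, reusing `ss_loss` from the previous sketch (with \(\lambda=1\) as in the experiments).

```python
import torch

def focal_loss(y, y_hat, gamma=2.0, eps=1e-7):
    """Focal loss of Eq. (10) for one-hot masks y and softmax predictions y_hat."""
    return -((1.0 - y_hat) ** gamma * y * torch.log(y_hat + eps)).sum() / y.shape[0]

def ssfl_loss(y, y_hat, lam=1.0):
    """Combined fine-tuning objective L_SSFL = L_SS + lambda * L_FL."""
    return ss_loss(y, y_hat) + lam * focal_loss(y, y_hat)
```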
## IV Experiments and Results
### _Datasets_
For our experiments, we use three datasets, namely the Head and Neck Cancer Dataset, gland segmentation in colon histology images (GlaS), and multi-organ nucleus segmentation (MoNuSeg). The details of the datasets used are shown in Table I. While the latter two are publicly available, the former is curated, annotated, and proposed by us for research community usage. The dataset will be made available for research use upon request.
#### IV-A1 Head and Neck Cancer Dataset
This dataset was collected with the approval of the institutional ethical committee of All India Institute of Medical Sciences (AIIMS), New Delhi, with the approval number bearing IEC-58/04.01.2019.
A total of \(163\) cases of head and neck squamous cell carcinoma (SCC) [58] were retrieved from the Department of Pathology of AIIMS archives. Tumor tissue had been fixed in 10% neutral buffered formalin, routinely processed, and embedded into paraffin blocks. Representative H&E stained sections lacking cutting artifacts were selected. Images were captured at 10x magnification using a digital camera attached to a microscope. A minimum of four images were obtained from each case. A team of trained pathologists performed manual annotation of the captured images. Images were annotated using an online image annotation tool into three classes: malignant, non-malignant stroma, and non-malignant epithelium. All SCC tumor cell islands were marked as malignant. Non-malignant stroma included fibro-collagenous and adipose tissue and skeletal muscle. Non-malignant epithelium included all benign squamous epithelium adjacent to the tumor or from resection margins. All the tissue present in an image was annotated. Diagnosis of cancer requires distinction of cancer areas from the non-malignant epithelium and demonstration of invasion by cancer into the non-malignant stroma. Thus, delineating the three classes would aid a pathologist in identifying invasion by cancer cells. We use images from confirmed cases of cancers that have been surgically removed completely, and there is no ambiguity in the diagnosis, as we have ample tissue to study under the microscope. There are \(1584\) images in the collected dataset, of which \(507\) are annotated.
#### IV-A2 Public Datasets
We analyze two publicly available datasets: GlaS [59] and MoNuSeg [60]. GlaS contains \(165\) images from \(16\) H&E stained sections of stage T3/T4 colorectal adenocarcinoma, showing notable inter-subject variability in stain distribution and tissue architecture. Images are mostly \(775\times 522\) in resolution. \(85\) images are for training and \(80\) for testing. We use the training set for self-supervised pre-training and split the \(80\) test images into train-test sets for segmentation. \(52\) visual fields from malignant and benign regions were selected for diverse tissue architectures. Pathologists annotated glandular boundaries, categorizing regions into malignant and benign classes.
The MoNuSeg dataset consists of H&E stained images at 40x magnification. For nuclear appearance diversity, one image per patient was chosen and cropped into 1000 \(\times\) 1000 sub-images dense in nuclei. Annotated classes cover epithelial and stromal nuclei, resolving overlaps in classes by assigning pixels to the largest nucleus. This dataset contains \(37\) training and \(14\) testing images. Initially, three classes are annotated: nucleus boundary, within-nucleus (foreground), and outside-nuclei (background). However, our study combines the nucleus boundary and nucleus into one class. Our pre-training stage involves all \(37\) training images, while the remaining 14 are used for segmentation. Sample images from all the evaluation datasets are shown in Fig. 2.
The GlaS and MoNuSeg datasets use fixed training and testing sets. The training set images serve as unlabeled data for self-supervised pre-training, while the testing set images provide labeled data for segmentation. This guarantees that pre-training and fine-tuning stages use mutually exclusive image sets. Labeled images are further divided into train and test subsets, with the latter used only for performance assessment. The HN cancer dataset also comprises separate labeled and unlabeled images. Patches of 256 \(\times\) 256 (stride of 64 for GlaS and MoNuSeg, 256 for HN cancer) are extracted for training and evaluation. Dataset details are in Table VI.
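The patch extraction amounts to a sliding window over each image; a minimal sketch follows (boundary handling such as padding the last row and column is omitted).

```python
def extract_patches(image, size=256, stride=64):
    """Cut an (H, W, 3) array into overlapping size x size patches with the given
    stride (64 for GlaS and MoNuSeg, 256 for the HN cancer dataset)."""
    H, W = image.shape[:2]
    patches = []
    for top in range(0, H - size + 1, stride):
        for left in range(0, W - size + 1, stride):
            patches.append(image[top:top + size, left:left + size])
    return patches
```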
### _Baselines for Comparison_
We compare our framework with two fully supervised benchmarks: (a) UNet [5] - a standard model for medical image segmentation, and (b) Attention UNet [61] with Random initialization - a UNet incorporating an attention mechanism designed for CT abdominal image segmentation.
Fig. 2: Sample real and generated patches using diffusion on three datasets: GlaS [59], MoNuSeg [60], and HN cancer (ours). The first four images in each row represent real image patches, and the last four images represent the generated image patches.
The remaining baselines adopt diverse pretext tasks for self-supervision: (1) VAE [25]: A UNet-based variational autoencoder pre-trained and then fine-tuned for segmentation. (2) Context Restoration [11]: Self-supervised learning through context restoration, targeted at medical image analysis. (3) Contrastive Learning [51]: Leveraging self-supervised contrastive learning for acquiring image representations. (4) CS-CO [52]: A histopathological image-specific contrastive learning technique utilizing novel image augmentations. (5) Deep InfoMax (DIM) [62]: Unsupervised representation learning by maximizing mutual information between input and output. (6) Inpainting [15]: UNet model trained using image inpainting as a pretext task. All methods are trained until loss convergence, followed by fine-tuning for segmentation.
### _Implementation Details_
We use the attention-based UNet as the encoder-decoder network for pre-training and fine-tuning. The time stamps are drawn uniformly randomly between \(0\) and \(T\) and then input to the corresponding embedding layer for the network during pre-training. We train the network for \(100\) epochs with a learning rate of \(0.0001\) using Adam optimizer and a batch size of \(8\). For the downstream segmentation, we initialize the UNet, except for the last few layers, with the pre-trained weights of the pretext task and then fine-tune the entire network end-to-end with the Adam optimizer for \(150\) epochs. The batch size is \(8\), and the learning rate is \(0.0001\). The multi-loss function's regularization scaling parameter \(\lambda\) is set to \(1.0\). We use random horizontal and vertical flips, color jittering, and Gaussian blur as augmentations. All the comparisons are with UNet-based networks except for CS-CO, a Resnet-based network. The hyperparameters are kept the same across all the methods for fair comparison. We use accuracy, precision, sensitivity (recall), and F1-score as the evaluation metrics for our segmentation task.
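For reference, the augmentation pipeline and optimiser can be set up as follows; the jitter strengths and blur kernel size are not stated in the text and are placeholders.

```python
import torch
import torchvision.transforms as T

augment = T.Compose([
    T.RandomHorizontalFlip(),
    T.RandomVerticalFlip(),
    T.ColorJitter(brightness=0.2, contrast=0.2, saturation=0.2),  # strengths assumed
    T.GaussianBlur(kernel_size=3),                                # kernel size assumed
])

def make_optimizer(model):
    # Adam with lr = 1e-4 and batch size 8, for 100 pre-training / 150 fine-tuning epochs
    return torch.optim.Adam(model.parameters(), lr=1e-4)
```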
### _Quantitative and Qualitative Results_
We demonstrate the effect of self-supervision tasks by transferring the learned representations to the histopathological image segmentation task. Here, we compare our proposed diffusion-based self-supervision against other existing self-supervision tasks like context restoration [11], contrastive learning [51], and CS-CO [52]. The self-supervised approaches [51, 52] are pathology specific, whereas [11] is pathology agnostic but still related to medical image analysis. We also train a network with random initialization, which we use as our baseline method. Moreover, we compare our approach with various methods as described in Section IV-B. Our method shares certain similarities with [63, 28, 64], but these methods are mainly focused on generating high-quality images and learning good representations, and are not tuned specifically for segmentation.
Fig. 2 shows examples of generated images from learning the self-supervised pretext task using diffusion. The generation performance of the diffusion model is captured using a popular metric, Fréchet Inception Distance (FID) [65], which is measured between images from a dataset and a set of generated images. We use \(1000\) images from each dataset and generated images to compute FID scores, which are shown in Table IV. One can observe that the FID scores are lower when the datasets contain more samples, indicating that the generation performance increases with the number of training patches. This can also be qualitatively understood in Fig. 2, where the generated images for GlaS, which contains the fewest unlabeled images (see Table I), are noisy. Moreover, we note that good generation performance also translates to better segmentation performance on all datasets.
Our approach achieves superior segmentation performance over other self-supervised methods on all the metrics for the GlaS and MoNuSeg datasets, and on all metrics except precision on the HN cancer dataset. Table II shows the evaluation performance of all methods on all three segmentation datasets. We observe an improvement in F1-score of at least \(2\) % for GlaS, \(1-1.5\) % for MoNuSeg, and \(2-2.5\) % for the HN cancer dataset. Encouragingly, the gain in recall (sensitivity) contributes more than the gain in precision to the improved F1-score, particularly for the HN cancer dataset. Moreover, we also include the performance of our approach on all three datasets over multiple runs (weight initializations) in Table III to demonstrate the stability of training the segmentation models. Finally, Fig. 3 clearly demonstrates the superior qualitative performance (in comparison to the ground truth) of the proposed method over other self-supervised methods on all three datasets.
### _Cross-Dataset Segmentation_
Table I shows that the HN cancer dataset contains the most unannotated images, followed by MoNuSeg and GlaS. From Fig. 2, it can be seen that the GlaS and HN cancer datasets have comparable mask sizes for any class in the annotated segmentation maps, whereas in MoNuSeg the mask size (corresponding to the nucleus class) is small. This indicates some amount of correlation between the GlaS and HN cancer datasets.
Hence, from Table VII, we observe that the segmentation performance on the GlaS dataset using a model pre-trained on the HN dataset is good. However, self-supervision using the GlaS dataset and segmentation on the HN dataset does not follow the same pattern, due to the low number of unannotated images in the GlaS dataset. The model achieves good performance on the MoNuSeg dataset when pre-trained and fine-tuned on MoNuSeg itself. This indicates that a certain degree of similarity between datasets aids in learning task-agnostic visual representations using diffusion-based self-supervision.
We also pre-train the diffusion model using a combination of unannotated images from all three datasets and then learn
separate segmentation models for each dataset. From Table VI, we notice a performance boost on all the datasets, validating that the performance of downstream tasks can be enhanced by learning more generalizable representations from diverse data distributions.
### _Ablations_
#### IV-F1 Effect of Diffusion Time Steps
We observe the effect of varying the number of time steps \(T\) in the diffusion process during the pre-training stage on the performance of segmentation. The time steps are varied from \(50\) to \(5000\). Table V shows the F1 scores on MoNuSeg and GlaS datasets. We notice that the performance degrades when \(T\) is either
Fig. 3: Qualitative Results of the proposed method (diffusion) along with other methods: UNet [5], Context Restoration [11], Contrastive learning [51], CS-CO [52], VAE [25], DIM [62], Inpainting [15], and Diffusion (ours).
too high or too low for both datasets. When \(T\) is high, the noise addition in the later steps is a very slow process and less significant. As a result, most noising steps are high SNR processes, making the network learn many imperceptible details and content from these images. Hence, a performance drop is expected. On the other hand, when \(T\) is very low, the noise addition happens very fast, adding significant noise in each step. This causes the noising process to operate in a low SNR regime, making the model learn predominantly coarse details over content information, resulting in a performance drop [55]. Hence, a trade-off exists based on the choice of \(T\), impacting the segmentation performance. We observe that the F1 scores roughly peak at \(T=1000\) for both datasets, which we fix for pre-training our model.
#### IV-F2 Effect of Loss Functions
In our work, we use a combination of structural similarity and focal losses. In Table VIII, we explore the contribution of the individual losses: cross-entropy (CE), SS, and FL. We observe that the CE loss performs well only on MoNuSeg, where there is less training data, and poorly on the other two datasets. Qualitative results are presented in Fig. 4 to show the effect of the different loss functions on segmentation performance. When we have access to more training data, both SS and FL perform better individually. Finally, the combination of SS and FL gives the best performance on all the segmentation datasets.
#### IV-F3 Effect of Loss Scaling Factor
In order to evaluate the contribution of each loss term in our multi-loss function, we vary the hyperparameter \(\lambda\in\{0.1,0.5,1,10\}\). From Table IX, we observe that an equal weighting of both the SS loss and focal loss (\(\lambda=1\)) offers the best performance in terms of F1 scores, indicating that balancing the reconstruction error and the class imbalance is necessary for efficient learning.
#### IV-F4 Effect of Varying Architecture Size
We experiment with varying UNet block sizes and record the effect of network size on the segmentation performance. The number of downsampling (or upsampling) levels is varied between \(3\) to \(5\), along with the number of output channels in each of the levels in the UNet architecture. Table X illustrates the number of levels accompanied by the number of output channels in each level. It is observed that a UNet consisting of \(4\) levels with \((64,128,256,512)\) output channels performs the best. Moreover, within \(4\) level UNet architectures, we note that the standard UNet architecture performs the best, indicating that this network efficiently trades off bias and overfitting.
## V Conclusion and Discussion
In this work, we proposed a novel two-stage learning approach for segmenting histopathological images. The first stage consists of learning good visual representations of histopathological images through self-supervised learning. This is achieved through a diffusion model trained without annotated data. Since segmentation is an image-to-image task, it benefits from learning representations through image-to-image self-supervision. It is noteworthy that the quality of representations has a direct impact on the performance of downstream tasks such as classification and segmentation. Hence, the self-supervised pre-training stage learns good representations that can later be transferred to any downstream task. Moreover, since datasets generally contain only small amounts of annotated images, SSL methods can perform better through the use of an abundant number of unannotated images. Once the pre-training is complete, we fine-tune the UNet model for segmenting histopathological images using a novel multi-loss function composed of structural similarity loss and focal loss.
We also introduce a new head and neck cancer dataset consisting of annotated and unannotated histopathological images. The research community can use this dataset to develop unsupervised, self-supervised, and even fully supervised machine learning algorithms to analyze and understand histopathological images for segmentation, detection, classification, and many other tasks.
Fig. 4: Qualitative Results of the proposed multi-loss function on all three datasets. The losses are abbreviated as CE - cross-entropy, FL - focal loss, and SS - structural similarity.
The results presented in Section IV give us some interesting insights into the problem. Firstly, it can be seen that the generation performance of the DDPM impacts the learning of effective representations. Secondly, task-specific loss functions need to be carefully designed so that models are able to leverage task-relevant information during optimization. In summary, our approach serves as a stepping stone for exploring generative self-supervised training as a pretext task for learning-based medical applications.
|
2303.05862
|
Monitoring Gender Gaps via LinkedIn Advertising Estimates: the case
study of Italy
|
Women remain underrepresented in the labour market. Although significant
advancements are being made to increase female participation in the workforce,
the gender gap is still far from being bridged. We contribute to the growing
literature on gender inequalities in the labour market, evaluating the
potential of the LinkedIn estimates to monitor the evolution of the gender gaps
sustainably, complementing the official data sources. In particular, assessing
the labour market patterns at a subnational level in Italy. Our findings show
that the LinkedIn estimates accurately capture the gender disparities in Italy
regarding sociodemographic attributes such as gender, age, geographic location,
seniority, and industry category. At the same time, we assess data biases such
as the digitalisation gap, which impacts the representativity of the workforce
in an imbalanced manner, confirming that women are under-represented in
Southern Italy. Additionally to confirming the gender disparities to the
official census, LinkedIn estimates are a valuable tool to provide dynamic
insights; we showed an immigration flow of highly skilled women, predominantly
from the South. Digital surveillance of gender inequalities with detailed and
timely data is particularly significant to enable policymakers to tailor
impactful campaigns.
|
Margherita Bertè, Kyriaki Kalimeri, Daniela Paolotti
|
2023-03-10T11:32:45Z
|
http://arxiv.org/abs/2303.05862v1
|
# Monitoring Gender Gaps via LinkedIn Advertising Estimates:
###### Abstract.
Women remain underrepresented in the labour market. Although significant advancements are being made to increase female participation in the workforce, the gender gap is still far from being bridged. We contribute to the growing literature on gender inequalities in the labour market, evaluating the potential of the LinkedIn estimates to monitor the evolution of the gender gaps sustainably, complementing the official data sources. In particular, assessing the labour market patterns at a subnational level in Italy. Our findings show that the LinkedIn estimates accurately capture the gender disparities in Italy regarding sociodemographic attributes such as gender, age, geographic location, seniority, and industry category. At the same time, we assess data biases such as the digitalisation gap, which impacts the representativity of the workforce in an imbalanced manner, confirming that women are under-represented in Southern Italy. Additionally to confirming the gender disparities to the official census, LinkedIn estimates are a valuable tool to provide dynamic insights; we showed an immigration flow of highly skilled women, predominantly from the South. Digital surveillance of gender inequalities with detailed and timely data is particularly significant to enable policymakers to tailor impactful campaigns.
digital demography, LinkedIn Advertising Platform, social networks, gender gap
are reflected in the data obtained through the LinkedIn social media platform. Italy is the third largest economy in the European Union (LaskedIn, 2013), and although in the last years, significant progress has been made in reducing gender inequalities in the labour market, they persist in being tightly woven into the social fabric. Here, we examine those from a sub-national point of view, relating LinkedIn's estimates to measures provided by traditional data obtained through the Italian national institute of statistics (ISTAT) (Krishnan, 2015) and the European Statistical Office (EUROSTAT) (Krishnan, 2015). Local perspective is particularly significant in Italy since, by Constitution, regional administrations can act directly to mitigate the problem. Finally, the need for a wide Italian presence on LinkedIn is met: it is the third European country by the number of members, with about 17 million users (Krishnan, 2015).
We aim to address the following core research questions:
**RQ1** How reliable are LinkedIn advertising audience estimates, especially concerning a country's labour force and the official demographic figures?
**RQ2** How can we enrich the current view of the gender gap in Italy as seen through LinkedIn?
**RQ3** Can we predict the employment gender gap leveraging LinkedIn estimates and sociodemographic data?
To answer the aforementioned questions, we estimated the Gender Gap Index (GGI) (Krishnan, 2015) on the numeric estimates of the potential audiences obtained via the platform's API. We show that LinkedIn's population estimates correspond to the official Italian labour force census. The gender distribution in each economic sector on LinkedIn positively correlates with the official data. Nonetheless, we observe that most sectors are under-represented, except those related to Technology, which is consistent with the highly skilled workforce the platform targets (Krishnan, 2015).
LinkedIn has its inherent population biases, with gender and age ranges not uniformly represented (Krishnan, 2015). However, the data obtained from the platform are representative of the labour force in general, with an increased gender gap observed in the Southern part of Italy (Krishnan, 2015) and with the more senior roles showing higher male estimates in leadership positions (Krishnan, 2015). This aligns with broader known sociodemographic inequalities in Italy (Krishnan, 2015; Delfosse and Goudar, 2015). The digitalisation rate reported by ISTAT impacts the representativeness of the workforce in an imbalanced manner, with women being under-represented in Southern Italy. This is a crucial point to consider when leveraging this data source to assess the labour force gender gap in countries with a low digitalisation rate. Finally, we noticed that highly skilled employees (graduate or doctorate level) are more likely to move to other countries, undermining the development of the labour force, particularly in the most vulnerable areas (e.g. the South). The LinkedIn audience reflects this: the platform is more gender-balanced in the regions with more high-skilled female immigrants from abroad (e.g. North-Center). We contribute to the current literature by showing the importance of the LinkedIn advertising platform in accurately monitoring labour patterns within countries, focusing on Italy as a case study.
## 2. Related Work
Traditionally, the gender divide is monitored with data provided by Census. The Global Gender Gap Index (GGGI) was theorised in 2006 (Serban et al., 2006) and since then is employed to compare the different aspects of the gender gap (economy, health, education, politics) worldwide. In 2013, the Gender Equality Index was computed for the European Union, commissioned by the European Institute for Gender Equality (EIGE) (Krishnan, 2015). Those indices allow us to keep track of the gendered disparity and compare data worldwide and even across years; however, the sub-national disparities were not assessed.
_Digital Gender Gaps._ Over the last few years, scientists have relied increasingly on digital data estimates and SM advertising platform estimates to address complex demographic and social research questions. Early on, Garcia et al. (Garcia et al., 2016) suggested a global measure for gender disparity leveraging Facebook ad estimates for 217 countries, while more recently, Fatehka et al. (Fatehka et al., 2016) investigated the digital gender gap for 193 countries, confirming known trends obtained via traditional sources. Combining Facebook with Google advertising data, Kashyap et al. (Kashyap et al., 2017) assessed the worldwide digital divide. The high resolution and richness of SM advertising estimates allow for sub-national focus (Krishnan, 2015), but also for evaluating gender inequalities concerning specific interests such as the interest in the STEM disciplines (Krishnan, 2015; Krishnan, 2015).
_Labour Market Gender Gaps._ The LinkedIn advertising platform offers critical demographic estimates facilitating the study of various topics relating to labour dynamics (Krishnan, 2015; Krishnan, 2015). LinkedIn estimates have been informative for understanding the variations of gender gaps in IT industries both globally (Krishnan, 2015; Krishnan, 2015) and sub-nationally (Krishnan, 2015). The LinkedIn Gender Gap Index (GGI) was initially proposed in (Krishnan, 2015; Krishnan, 2015) as the ratio between the estimated number of women with specific attributes over the estimated number of men with the same attributes, constituting the equivalent of the GGGI for LinkedIn ad estimates. Since then, varied metrics have been proposed, but the GGGI was shown to be the most accurate for measuring the gender gap (Krishnan, 2015).
Although this study is close to the approach and methods presented in Verkroost et al. (Verkroost et al., 2015), and Kashyap et al. (Kashyap et al., 2017), the metric we employ to assess the labour gender gap slightly differs from the ones proposed in the present literature (Krishnan, 2015; Krishnan, 2015; Krishnan, 2015). In particular, to capture the interplay of the local dynamics in Italy, we dive into the Italian regions' labour market data, normalise the estimates according to the official population census data, and we do not a priori assume the female gender to be under-represented. The GGGI benchmarks the current state and evolution of women's situation in four key dimensions (Economic Participation and Opportunity, Educational Attainment, Health and Survival, and Political Empowerment); here we explore disparities for both genders.
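As a minimal illustration, the GGI used throughout this paper reduces to a ratio of audience estimates, optionally rescaled by the official population counts; the exact normalisation applied for the regional maps (Fig. 1) is an assumption in this sketch.

```python
def gender_gap_index(women_est, men_est, pop_women=None, pop_men=None):
    """LinkedIn Gender Gap Index: estimated female audience over estimated male
    audience; if census populations are given, each audience is first expressed
    as a share of its population (one possible 'normalised' GGI)."""
    if pop_women is not None and pop_men is not None:
        return (women_est / pop_women) / (men_est / pop_men)
    return women_est / men_est
```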
## 3. Data Collection
For this study, we obtained data from the European and Italian official data statistics offices, i.e. EUROSTAT and ISTAT, respectively, and from the LinkedIn ads platform (Krishnan, 2015).
_Official Census and Survey Data._ Italy can be divided into five main geographical zones. ISTAT computes the rate of regular Internet users through a survey to measure how many people have daily access to the Internet, regardless of the device. Based on this yearly report, there is a uniform digital gender gap throughout Italy (on average, a six-percentage-point difference between the rates of regular male and female Internet users); in the North,
the average rate of regular Internet users is 78% for men and 72% for women; in the South, the average is 71% for men and 65% for women.
* North-East: Emilia-Romagna, Friuli-Venezia Giulia, Trentino Alto Adige, and Veneto
* North-West: Liguria, Lombardia, Piemonte, and Valle d'Aosta
* Center: Lazio, Marche, Toscana, and Umbria
* South: Abruzzo, Campania, Molise, Puglia, Basilicata, and Calabria
* Islands: Sardegna and Sicilia
These five zones are merged into two main groups: North-Center (North-East, North-West, Center) and South (South and Islands), composed of regions quite similar in socioeconomic status. The locations for the data collection are chosen taking the European official territorial units for statistics (NUTS from the French acronym) of level 2 for 2021 provided by EUROSTAT [13].
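For reference, the grouping of regions into zones and macro-groups can be encoded as a simple mapping (region names follow the list above):

```python
ZONES = {
    "North-East": ["Emilia-Romagna", "Friuli-Venezia Giulia", "Trentino Alto Adige", "Veneto"],
    "North-West": ["Liguria", "Lombardia", "Piemonte", "Valle d'Aosta"],
    "Center": ["Lazio", "Marche", "Toscana", "Umbria"],
    "South": ["Abruzzo", "Campania", "Molise", "Puglia", "Basilicata", "Calabria"],
    "Islands": ["Sardegna", "Sicilia"],
}

# The two macro-groups used in the analysis.
MACRO_GROUPS = {
    "North-Center": ZONES["North-East"] + ZONES["North-West"] + ZONES["Center"],
    "South": ZONES["South"] + ZONES["Islands"],
}
```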
_LinkedIn ad Audience Estimates._ Focusing on the Italian context, we collected aggregated counts of LinkedIn users, querying the ad campaign manager via the official application programming interface (API)1. Audiences are targeted based on geographic location, demographic criteria such as gender or age group, and job criteria such as company industry and job seniority. Here, we target the locations at a sub-national level to capture the local nuances and trends that are otherwise difficult to obtain. Overall, we gathered data for 20 regions in Italy from July to November 2022. In detail, we queried for the following characteristics (attributes).
Footnote 1: We built on the open source code by Lucio Melitto [https://worldbank.github.io/connectivity_mapping/intro.html](https://worldbank.github.io/connectivity_mapping/intro.html)
* **Location.** According to the LinkedIn official documentation [29], this attribute can be based on the location a member has included in their profile or on their IP address. We collected the Italian data at the NUTS2 level, i.e. the basic regions for applying regional policies. For Italy, these are the 20 regions.
* **Gender.** On the LinkedIn ads platform, gender is binary: Male, Female. Hence, in our study we follow this binary approach. Among the known limitations of our work, we acknowledge the binary choice of gender and the overarching assumption that all LinkedIn users are active in the labour market.
* **Age range.** Age is provided in the following ranges: 18-24, 25-34, 35-54, 55+. The age range a member belongs to is inferred from their first graduation date; according to the official documentation [29], the _Years of Experience_ can also be used as a proxy.
* **Job seniority.** It "describes the rank and influence of a member's current role in their organization" (as stated in [29]). We target all seniority levels: Unpaid, Training, Entry, Senior, Manager, Director, VP, CxO, Partner, and Owner.
* **Company industry.** The economic sector to which the employing company belongs2. Footnote 2: The list of the 20 main company industries chosen can be found in Table A4 of Appendix A.
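To make the collection procedure concrete, the sketch below enumerates the (region, gender, age) targeting combinations described above. The function `get_audience_estimate` is a hypothetical placeholder for a wrapper around the LinkedIn Marketing API (or the open-source collection code cited in the footnote); it is not part of any official client.

```python
from itertools import product

# NUTS-2 regions grouped into the five geographical zones listed earlier.
ZONES = {
    "North-East": ["Emilia-Romagna", "Friuli-Venezia Giulia", "Trentino Alto Adige", "Veneto"],
    "North-West": ["Liguria", "Lombardia", "Piemonte", "Valle d'Aosta"],
    "Center": ["Lazio", "Marche", "Toscana", "Umbria"],
    "South": ["Abruzzo", "Campania", "Molise", "Puglia", "Basilicata", "Calabria"],
    "Islands": ["Sardegna", "Sicilia"],
}
GENDERS = ["Female", "Male"]
AGE_RANGES = ["18-24", "25-34", "35-54", "55+"]

def get_audience_estimate(location, gender, age_range):
    """Hypothetical wrapper around the ad platform's audience-count endpoint.

    In practice this would authenticate against the Marketing API and return
    the estimated audience size for the given targeting facets.
    """
    raise NotImplementedError

def collect_regional_census():
    """Enumerate every (region, gender, age) combination and collect estimates."""
    rows = []
    regions = [r for zone in ZONES.values() for r in zone]
    for region, gender, age in product(regions, GENDERS, AGE_RANGES):
        estimate = get_audience_estimate(region, gender, age)
        rows.append({"region": region, "gender": gender, "age": age, "estimate": estimate})
    return rows
```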
Table 1 summarises the data collection performed per age group and gender compared to the official population census (ISTAT). Further, we gathered the estimates by gender and age range per region. Figure 1 depicts the estimates obtained per Italian region.
### Methods
Data Preparation. Overall, for all the locations (20 regions in Italy), we queried by gender and age to obtain a "LinkedIn census". At the national level, keeping the location fixed (Italy), we targeted the audiences by gender, age range, and job seniority to obtain seniority data.
\begin{table}
\begin{tabular}{l l l l l l l} \hline \hline Age & F LinkedIn & M LinkedIn & F Census & M Census & F LinkedIn / F Census & M LinkedIn / M Census \\ \hline
18-24 & 1.300.000 & 1.200.000 & 1.963.785 & 2.136.690 & 66,20\% & 56,16\% \\
25-34 & 4.600.000 & 4.700.000 & 3.048.664 & 3.195.963 & 150,89\% & 147,06\% \\
35-54 & 1.700.000 & 2.100.000 & 8.371.114 & 8.279.348 & 20,31\% & 25,36\% \\
55+ & 180.000 & 380.000 & 12.371.753 & 10.396.360 & 1,45\% & 3,66\% \\ \hline \hline \end{tabular}
\end{table}
Table 1. Number of people by age range estimated by LinkedIn ads in September 2022 and in ISTAT as reported by 2021 data. We also report the percentages of LinkedIn’s estimated audience over the census population numbers reported by ISTAT for each age range.
Figure 1. Representation of LinkedIn GGI normalised in the Italian regions with exclusion query.
To collect estimates per economic sector, we further queried by gender, age range, and company industry. We obtained estimates for all queries but one (i.e., for 19 of the 20 regions): the number of women on LinkedIn older than 55 years located in Valle d'Aosta (the smallest Italian region). In that case, the estimated number of members was less than 300 users and hence the API returns 0. To overcome this issue, we employed the "query exclusion" method proposed by Rama et al. [36] and obtained an estimate of over 200 members, still sufficient to ensure privacy. To apply the "query exclusion" approach, we chose several "reference cities", opting for large cities with a sufficient number of users for the majority of the categories of interest. We decided to refrain from collecting data at the provincial level to avoid privacy concerns. For the same reason, for queries with more specific targeting attributes (besides location, gender, and age range), we opted to obtain the data at the national level to avoid possible identification of users.
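To make the workaround concrete, the following is a minimal sketch of a difference-based estimate; the exact query construction (which audiences are combined or excluded) follows Rama et al. [36], and the two callables are hypothetical placeholders, not API functions.

```python
def estimate_small_audience(query_with_reference, query_reference_only):
    """Difference-based estimate for audiences below the platform's reporting floor.

    query_with_reference  : callable returning the count for the target group
                            combined with a large 'reference city' audience.
    query_reference_only  : callable returning the count for the reference
                            audience alone (or with the target group excluded).
    """
    combined = query_with_reference()
    reference = query_reference_only()
    # The target audience is recovered as the difference of the two counts.
    return max(combined - reference, 0)
```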
Gender Gap Metrics. We measured the gender divide in the LinkedIn community by adapting the Global Gender Gap Index (GGGI) [47] as proposed by Verkroost et al. [45]. First, we computed the gender gap index of the LinkedIn population, normalising the ads estimates by the ISTAT population data:
\[\text{LinkedIn~{}GGI~{}normalised}=\frac{\text{F~{}LinkedIn}}{\text{M~{} LinkedIn}}\cdot\frac{\text{M~{}Census}}{\text{F~{}Census}} \tag{1}\]
where F LinkedIn and M LinkedIn are the estimated numbers of women and men on LinkedIn, respectively, while F Census and M Census are the numbers of women and men reported by ISTAT.
For each attribute, we adapt the above equation normalising by the LinkedIn population,
\[\text{LinkedIn~{}GGI~{}(Attribute)}=\frac{\text{F~{}LinkedIn~{}(Attr)}}{ \text{M~{}LinkedIn~{}(Attr)}}\cdot\frac{\text{M~{}LinkedIn}}{\text{F~{} LinkedIn}} \tag{2}\]
where F LinkedIn (Attr) refers to the estimated number of women on LinkedIn that have the specific attribute, and M LinkedIn (Attr) to the corresponding number of men. Contrary to the mainstream approach [47], where an over-representation of women is not considered (GGI index greater than one), we opted for the whole range of values, also evidencing when men are the minority group. To measure the gender gap in employment status and economic sector we defined the following ratios:
\[\text{Employment~{}GGI}=\frac{\text{\%~{}F~{}Working}}{\text{\%~{}M~{}Working}} \tag{3}\]
where % F Working (resp. % M Working) is the percentage of working women (resp. men). Similarly, for each economic sector, we estimated the index as the gender ratio between the EUROSTAT percentage of employed women and men in each Nomenclature of Economic Activities (NACE):
\[\text{NACE~{} employment~{}GGI}=\frac{\text{\%~{}F~{}NACE~{}Working}}{\text{\%~{}M~{}NACE~{}Working}} \tag{4}\]
where % F NACE Working (resp. % M NACE Working) indicates the percentage of women (resp. men) working in a particular economic sector. All the indices introduced in this section were computed by age range and location to unveil discrepancies that may occur based on those factors.
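For clarity, the four indices above translate directly into the small helper functions below (a straightforward transcription of Eqs. (1)-(4); the variable names and the illustrative check against Table 1 are ours).

```python
def linkedin_ggi_normalised(f_linkedin, m_linkedin, f_census, m_census):
    """Eq. (1): LinkedIn gender ratio normalised by the census gender ratio."""
    return (f_linkedin / m_linkedin) * (m_census / f_census)

def linkedin_ggi_attribute(f_attr, m_attr, f_linkedin, m_linkedin):
    """Eq. (2): attribute-specific gender ratio normalised by the overall LinkedIn ratio."""
    return (f_attr / m_attr) * (m_linkedin / f_linkedin)

def employment_ggi(pct_f_working, pct_m_working):
    """Eq. (3): ratio of the percentages of working women and men."""
    return pct_f_working / pct_m_working

def nace_employment_ggi(pct_f_nace, pct_m_nace):
    """Eq. (4): gender ratio of employment percentages within one NACE sector."""
    return pct_f_nace / pct_m_nace

# Example with the 25-34 age range from Table 1: roughly 1.03, i.e. near parity.
example = linkedin_ggi_normalised(4_600_000, 4_700_000, 3_048_664, 3_195_963)
```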
Prediction of Gender Gap. Finally, we assessed the prediction capabilities of the digital estimates for the gender gap. We built a multilinear regression model that leverages LinkedIn estimates and sociodemographic data to predict the gender gap in the workforce (Eq. 3). We evaluated the model performance employing the Mean Absolute Error (_MAE_) and the \(R^{2}\)-adjusted (\(R^{2}_{adj}\)), which accounts for the number of predictors. To reduce the dimensionality, we implemented a step-wise feature selection. The set of features considered included the age range and several socio-economic regional indicators, such as the Gross Domestic Product (GDP) per capita in Purchasing Power Standards (PPS) expressed relative to the European Union average (equal to 100), the gender ratio of the digitisation level, the gender ratio of the percentage of young NEETs (Not in Education, Employment or Training), and a welfare policy marker: the percentage of children under two years old attending kindergarten. We also applied a 5-fold cross-validation to check for overfitting and the Isolation Forest algorithm (with contamination 0.05) to detect outliers before the model training.
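A sketch of this pipeline is given below; the feature names are placeholders for the regional indicators listed above, and every hyperparameter other than the stated 5 folds and contamination 0.05 is an assumption.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import IsolationForest
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold, cross_val_score

def forward_stepwise(X: pd.DataFrame, y: pd.Series, max_features: int = 5):
    """Simple forward step-wise feature selection based on cross-validated R^2."""
    selected, remaining, best_score = [], list(X.columns), -np.inf
    while remaining and len(selected) < max_features:
        scores = {
            f: cross_val_score(LinearRegression(), X[selected + [f]], y,
                               cv=KFold(n_splits=5, shuffle=True, random_state=0),
                               scoring="r2").mean()
            for f in remaining
        }
        f_best, s_best = max(scores.items(), key=lambda kv: kv[1])
        if s_best <= best_score:
            break
        selected.append(f_best)
        remaining.remove(f_best)
        best_score = s_best
    return selected

def fit_gender_gap_model(df: pd.DataFrame):
    """Regress the Employment GGI (Eq. 3) on LinkedIn-derived and regional features."""
    features = [c for c in df.columns if c != "employment_ggi"]
    X, y = df[features], df["employment_ggi"]
    # Outlier screening as described in the text (Isolation Forest, contamination 0.05).
    mask = IsolationForest(contamination=0.05, random_state=0).fit_predict(X) == 1
    X, y = X[mask], y[mask]
    selected = forward_stepwise(X, y)
    model = LinearRegression().fit(X[selected], y)
    return model, selected
```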
Figure 2. LinkedIn GGI normalised regional estimates (y-axis) versus Employment GGI (x-axis) by age range. Data were collected for the Italian regions in September 2022. The dark red line shows a regression line with 95% confidence; the light blue line is the equality line. Values above the dashed line (y-axis) indicate an over-representation of women.
## 4. Results & Discussion
### Socioeconomic & Demographic Representativity.
This work explores an increasingly popular data source in digital demography, the LinkedIn advertising estimates. These data provide cost-effective insights into gender inequalities in the labour market in a detailed and timely manner, which is essential for crafting impactful policies, especially when official data are sparse or hard to obtain. Our findings confirm the reliability and potential of the LinkedIn advertising audience estimates for capturing the labour force dynamics recorded by the official sources.
Demographic Representativity. For a digital data source to be reliably employed in policy and decision-making, assessing its demographic representativity is fundamental. Comparing the LinkedIn GGI normalised index (Eq. 1) with the Employment GGI index (Eq. 3) per gender and age range (the alignment of the age groups is reported in Table A1), we noticed a strong positive correlation for all the age groups, indicating that the obtained estimates are demographically representative of the Italian workforce (Table A5, row 2).
Figure 2 depicts the relationship between the gender gap observed in LinkedIn and the official ISTAT employment data. Women are vastly over-represented in the younger age group (LinkedIn GGI normalised above the equity line), while they are under-represented in the older age group (LinkedIn GGI normalised under the equity line). For the ISTAT data, none of the age ranges reaches the equity line (value one on the x-axis): the percentages of women working are lower than those of men. In Figure 2, we also notice that the Italian regional divide is captured: the Southern regions lack workforce with respect to the Northern ones (Southern regions are grouped in the lower part of the regression line). This aligns with the general employment trends in Italy (Borda et al., 2017; Borda et al., 2017).
Focusing on the intrinsic biases of the medium, we observed more male than female users on LinkedIn (Figure 3) for all the age ranges from 25 years old and above, the opposite of what happens on other social media such as Facebook (META) or Instagram (Kashyap et al., 2018). From a geographical point of view, the distribution of the LinkedIn GGI normalised varies across the country. On average, in the South we observe less gender parity, especially as the audience gets older. Consistently with the findings of Kashyap et al. (Kashyap et al., 2019), the most accurate age-range representation is observed for the group of 25 to 34 years old, whereas the category of users older than 55 years is strongly limited, for women in particular. In Table 1, the age range 25-34 is over-represented in LinkedIn with respect to the ISTAT data for Italy. The same behaviour was observed for the Facebook ad estimates (Kashyap et al., 2018). This can be partly attributed to the fact that people may be temporarily located in Italy and hence not recorded by the official census. Fake accounts may also be part of the equation (Borda et al., 2017). Overall, the LinkedIn estimates are shown to capture the employment frame in Italy reliably.
Company Industry Representativity. To assess possible biases within the economic sectors, we aggregated the LinkedIn users by gender, age, and economic sector3. For each group, we estimated the percentages of employed individuals over the total number of employees for LinkedIn and EUROSTAT. Correlating these percentages, we found the positive relationships reported in Table A5.
Footnote 3: Since age ranges and economic sectors do not correspond perfectly between EUROSTAT and LinkedIn, we grouped them as reported in Table A2 and A4.
Figure 4 depicts the relationship between the two data sources, LinkedIn and EUROSTAT. We notice that on LinkedIn, several economic sectors are under-represented (e.g. Farming-A, Construction-F, Administration-N, Transportation-H), while the Technology sector is over-represented (e.g. Technology Information and Media - J, Professional Services - M).
A critical gender gap emerges in several sectors (e.g. Construction-F for men, Education-P for women, Health-Q for women), reflecting the general employment trends from official data (Borda et al., 2017) (see Figure 5). To assess these gender gaps, we compared NACE employment GGI (Eq. 4) index with LinkedIn GGI (Company industry) (Eq. 2). We found a strong and statistically significant correlation (Table A5 row 6) confirming that the gender gap in the LinkedIn data mirrors the participatory gender gap in the labour market. Overall, diving into the distinct economic sectors, we observe that LinkedIn estimates are a reliable sensor for near real-time monitoring of the labour market dynamics in Italy.
### Gender Gap Insights.
LinkedIn Insights: Seniority. In Table A3, we notice that the leadership positions are not gender balanced (e.g. Manager, Owner, etc.), a finding also confirmed by Kashyap et al. (Kashyap et al., 2019) on LinkedIn estimates at a worldwide level, and by the social science literature on the topic (Kashyap et al., 2018; Kashyap et al., 2018). This may be due to the lack of economic participation and opportunity for women in apical positions (Borda et al., 2017).
Digital Divide. To assess the effect of the digital divide on the obtained LinkedIn ads estimates, we employed data about the regional level of digital penetration reported by ISTAT 4. After computing the respective ratios per gender, age, and region, we compared them to the LinkedIn GGI normalised (Eq. 1). We did not observe a statistically significant correlation between the gender gap in the digitalisation data and the LinkedIn GGI normalised for Italy's Northern and Central regions.
Figure 3. Distribution of the LinkedIn GGI normalised measure across different age ranges (y-axis in log scale) aggregated by zones of Italy.
On the contrary, for the Southern regions, a strong correlation between the gender gap and the digitalisation level did emerge (see Table A5, row 4, and Figure 6). Interestingly, Fatehka et al. [(16)], employing data from the Facebook advertising platform, also found that the digitalisation level is a good proxy for the gender gap worldwide.
_Mobility._ We obtained the mobility data for the younger age groups (25 to 39 years old) with tertiary education (e.g. graduates, PhDs) from ISTAT, aggregated by gender and region. The data are provided as a net migration rate of highly educated individuals, indicating the difference between incoming and outgoing migration flows per one thousand individuals staying in Italy. A negative value indicates more people moving abroad, and a positive value the opposite.
Table A5 (row 5) shows a significant positive correlation between the youth mobility measures of women and the LinkedIn GGI normalised (Eq. 1). Figure 7 shows the relationship for the most affected age range (25-34) (see Appendix A for the full report on age groups). In relation to the official data, the LinkedIn GGI normalised is higher where more young graduated women arrived from abroad (Centre and North regions), and lower otherwise (South and Islands). This finding reflects a phenomenon typical of the Italian labour panorama, known as "brain drain", whereby the highly educated youth seeks employment abroad [(39; 7)].
### Predicting Gender Gap.
_Predictive Modelling of the Labour Gender Gap._ Where traditionally reported data are sparse or hard to obtain, inferring the gender gap in the labour market through the lens of LinkedIn estimates could provide insights at a scale and granularity suitable for timely and accurate policies. Here, we were interested in predicting the gender gap from the LinkedIn estimates not simply as population aggregates but by age group and geographic location. We posed the task as a multilinear regression problem in which, leveraging the LinkedIn estimates, we predict the actual gender gap. Before training, we performed outlier detection using the Isolation Forest algorithm. Removing the outliers, the obtained result (\(R^{2}=0.77\), \(MAE=0.05\)) showed no improvement and let us deduce that the gender gap is uniform, without off-scale points. The model was trained on a dataset of four age ranges in 20 regions. The results yielded \(R^{2}_{adj}=0.75\) (\(R^{2}=0.78\), \(MAE=0.05\)). The 5-fold cross-validation yielded an average \(R^{2}=0.72\) (with standard deviation 0.03) and an average \(MAE=0.05\). Out of all data points, three residuals (differences between actual and predicted values) were more than two times greater than the residual standard error (0.06) of the distribution (Liguria 18-24; Umbria 18-24; Puglia 55+). These were all associated with the sparser age ranges in LinkedIn (18-24 and 55+), for which the accuracy of the LinkedIn GGI normalised index falters.
Figure 4. LinkedIn percentage estimates (y-axis) versus employment percentage (x-axis) by age range, gender, and NACE. The over-represented economic sectors are highlighted. Data were collected for Italy in October 2022. The dark red line shows a regression line with 95% confidence and the labels refer to the EUROSTAT NACE codes as reported in Table A4, such as: A: Agriculture, C: Manufacturing, F: Construction, G: Sales, H: Transportation, I: Accommodation, J: IT and Media, K: Financial Services, M: Professional Service, P: Education, Q: Health, R: Entertainment
Figure 5. Comparison of LinkedIn percentage estimates by age range, gender, NACE.
## 5. Conclusions
Gender inequalities and discrimination in the labour market persist despite significant advances worldwide. Given the slow pace and high cost of official surveying approaches, we assessed whether estimates from the LinkedIn advertising platform might be employed to obtain insights into the gender gaps observed in the work environment.
Here, we focused on Italy as a case study, a country for which we have detailed census data, which is at the forefront of the European economic scene but at the same time presents substantial sociodemographic inequalities. Touching upon a series of characteristics, such as seniority level and the various industrial domains, we shed light on the strengths and limitations of such tools for understanding the sub-national labour markets.
More precisely, we showed that LinkedIn advertising audience estimates can be employed as a proxy for the Italian labour market, as they reflect the official statistics concerning the age, gender, and demographic distribution of the labour force. At the same time, the intrinsic biases of the platform (it is mainly adopted by highly skilled professionals and is less popular among older age groups) present several shortcomings for the industry category and seniority level, with industries such as Technology being over-represented while others such as Farming are under-represented.
We also highlighted essential aspects related to the gender gap in Italy: the digital divide, which varies significantly within the country, can be an explanatory factor for the gender gap observed in LinkedIn, as women in the Southern regions of Italy have the lowest digitalisation rate. Last but not least, our findings underline the phenomenon of "brain drain", of significant concern for Italy, as the younger and skilled generation is migrating abroad. Our data show that this trend is particularly intense for the female population of the Southern areas of Italy.
###### Acknowledgements.
The authors gratefully acknowledge the support from the Lagrange Project of the Institute for Scientific Interchange Foundation (ISI Foundation) funded by Fondazione Cassa di Risparmio di Torino (Fondazione CRT).
|
2305.08010
|
ProKnow: Process Knowledge for Safety Constrained and Explainable
Question Generation for Mental Health Diagnostic Assistance
|
Current Virtual Mental Health Assistants (VMHAs) provide counseling and
suggestive care. They refrain from patient diagnostic assistance because they
lack training in safety-constrained and specialized clinical process knowledge.
In this work, we define Proknow as an ordered set of information that maps to
evidence-based guidelines or categories of conceptual understanding to experts
in a domain. We also introduce a new dataset of diagnostic conversations guided
by safety constraints and Proknow that healthcare professionals use. We develop
a method for natural language question generation (NLG) that collects
diagnostic information from the patient interactively. We demonstrate the
limitations of using state-of-the-art large-scale language models (LMs) on this
dataset. Our algorithm models the process knowledge through explicitly modeling
safety, knowledge capture, and explainability. LMs augmented with ProKnow
guided method generated 89% safer questions in the depression and anxiety
domain. The Explainability of the generated question is assessed by computing
similarity with concepts in depression and anxiety knowledge bases. Overall,
irrespective of the type of LMs augmented with our ProKnow, we achieved an
average 82% improvement over simple pre-trained LMs on safety, explainability,
and process-guided question generation. We qualitatively and quantitatively
evaluate the efficacy of the proposed ProKnow-guided methods by introducing
three new evaluation metrics for safety, explainability, and process knowledge
adherence.
|
Kaushik Roy, Manas Gaur, Misagh Soltani, Vipula Rawte, Ashwin Kalyan, Amit Sheth
|
2023-05-13T21:31:02Z
|
http://arxiv.org/abs/2305.08010v2
|
ProKnow: Process Knowledge for Safety Constrained and Explainable Question Generation for Mental Health Diagnostic Assistance
###### Abstract
Current Virtual Mental Health Assistants (VMHAs) provide counseling and suggestive care. They refrain from patient diagnostic assistance because they lack training on safety-constrained and specialized clinical process knowledge (ProKnow). In this work, we define ProKnow as an ordered set of information that maps to evidence-based guidelines or categories of conceptual understanding to experts in a domain. We also introduce a new dataset of diagnostic conversations guided by safety constraints and the ProKnow that healthcare professionals use (ProKnow-data). We develop a method for natural language question generation (NLG) that collects diagnostic information from the patient interactively (ProKnow-algo). We demonstrate the limitations of using state-of-the-art large-scale language models (LMs) on this dataset. ProKnow-algo models the process knowledge through explicitly modeling safety, knowledge capture, and explainability. LMs with ProKnow-algo generated 89% safer questions in the depression and anxiety domain. Further, without ProKnow-algo, the generated questions did not adhere to the clinical process knowledge in ProKnow-data; in comparison, ProKnow-algo-based generations yield a 96% reduction in average squared rank error. The explainability of the generated questions is assessed by computing similarity with concepts in depression and anxiety knowledge bases. Overall, irrespective of the type of LM, ProKnow-algo achieved an average 82% improvement over simple pre-trained LMs on safety, explainability, and process-guided question generation. We qualitatively and quantitatively evaluate the efficacy of ProKnow-algo by introducing three new evaluation metrics for safety, explainability, and process knowledge adherence. For reproducibility, we will make ProKnow-data and the code repository of ProKnow-algo publicly available upon acceptance.
## 1 Introduction
Mental health disorders such as Major Depressive Disorder (MDD)1 and Anxiety Disorder (AD)2 are widespread, with prevalences of 20.6% and 4.3% in the USA before the pandemic3. The current pandemic has further aggravated this issue. To address the key challenge of the overburdened healthcare system, there has been an increasing interest in AI-powered VMHA solutions as one alternative. For example, bots that administer Cognitive Behavioral Therapy (CBT) are programmed based on established medical guidelines, thus making them safe.
Footnote 1: [https://tinyurl.com/yckkp386](https://tinyurl.com/yckkp386)
Footnote 2: [https://tinyurl.com/5c646cf8](https://tinyurl.com/5c646cf8)
Footnote 3: [https://adaa.org/understanding-anxiety/facts-statistics](https://adaa.org/understanding-anxiety/facts-statistics)
As CBT is a template-based therapy, clinicians scrutinize patients by checking their behavior against rules. If a conversational AI (convAI)4 agent is put in place, there is no necessity for it to ask follow-up questions. However, to provide diagnostic support for MDD and AD, an AI system would require validating the patient's response against medical knowledge and the clinician's expertise. This is required to ensure safe and explainable conversations between the patient and a convAI agent. Without explicit supervision from an external knowledge source, the convAI is susceptible to ignoring medical knowledge, being unsafe, and failing to capture cues from the patient's response that explain its decision, leading to poor explainability. Most often, clinicians leverage clinical guidelines or questionnaires to gather first-hand information on patients' mental health. For instance, for MDD the Patient Health Questionnaire (PHQ-9), and for AD the Generalized Anxiety Disorder questionnaire (GAD-7), are often used to measure the severity of mental health conditions. These questionnaires are what we consider process knowledge (ProKnow) [1, 2, 3, 4]. Incorporating ProKnow as an additional component in convAI can steer the natural language generation (NLG) to capture information relevant to diagnosis and constrain the topic of conversation; we refer to this as _medical knowledge capture_. Further, it would enforce safe and explainable mental health diagnostic assistance with minimal clinical involvement. In this research, we focus on _follow-up question generation_, a task within conversational AI targeted toward improving engagement between agent and user [3].
Current research in question generation by large language models is at the mercy of datasets, which must represent safe and valid responses for adequate quality control. Nabla, a Paris-based healthcare technology firm, leveraged GPT-3 for preventive care. To their surprise, GPT-3's response, _"I think you should"_, to the user's query "_Should I kill myself?_" raised concerns about the immediate adoption of GPT-3-like language models in mental healthcare5. Additionally, the black-box nature of GPT-3 and GPT-3-like neural NLG models causes significant difficulty in evaluating and explaining factually incorrect or erroneous generations. More generally, it is not easy to evaluate a computational method's adherence to acceptable safety standards even if the data points in the dataset have been proven safe [5]. We define safety as the concept-by-concept match between a lexicon and the generated sentence. We define a _Safety Lexicon_ as a dictionary of concepts that a clinician would be able to relate to a mental health condition. For instance, concepts like 'anxiety', 'anxiousness', 'anxious', 'agita', 'agitation', 'prozac', 'sweating', and 'panic attacks' in a question are safe as they would indicate AD. Concepts like 'depression', 'depressed', 'antidepressant', 'depressant', and others would describe MDD. ProKnow-driven NLG enhances **medical knowledge capture** and leads to a considerable reduction in harmful conversation (**safety**). Since ProKnow-driven NLG leverages questionnaires or clinical guidelines, every generation can be matched for explainability.
Footnote 5: [https://tinyurl.com/bdryre38](https://tinyurl.com/bdryre38)
Figure 1: An illustration of a safe and medically appropriate natural language question generated by an agent trained with ProKnow-algo.
Figure 1 illustrates a scenario where a convAI tasked to assess the severity of a user's anxiety generates questions that are risky and would potentially not be asked by a clinician. Whereas, if the same convAI is augmented with safety checks, e.g., matching generated questions against questionnaires or clinician-approved safety lexicons, it would support safe and explainable generation ([6]). Incorporating these checks into existing language models would facilitate better follow-up question generation.
In this research, we demonstrate a process for creating ProKnow-data and a feasible ProKnow-algo for a safety-constrained and explainable mental health diagnostic assistant. Incorporating process knowledge and the corresponding algorithmic development addresses the following research questions:
**RQ1: Adherence to Process Knowledge:**
Does ProKnow-data impose constraints on conceptual flow on questions generated by ProKnow-algo-based LMs and pre-trained LMs?
**RQ2: Patient safety in conversation:** Does ProKnow-algo constrain the safety of the generated questions? Additionally, does augmentation of a _Safety Lexicon_ enhance the safety of ProKnow-algo's question generation?
**RQ3: User and clinician-focused explanations:**
We define a generated follow-up question to be explainable if it is understandable to the clinician and gathers informative responses from the patient. Do the tags in ProKnow-data help explain ProKnow-algo's question generation? Further, does semantic annotation of ProKnow-algo's question generation using the **KB** enhance explanation quality as judged qualitatively by domain experts?
In the process of addressing these RQs, we introduce three application-specific metrics to assess whether the algorithm follows a process (Average Square Rank Error), is safe (Average Unsafe Matches), and is explainable (Average Knowledge Context Matches). Through the constructed ProKnow-data and an adapted ProKnow-algo, we were able to enforce 96% better conceptual flow in language models. Further, the generations were 89% safer and statistically significant in capturing clinically explainable questions while outperforming state-of-the-art large language models without ProKnow. It is important to note that our task is to generate information-seeking follow-up questions; we use the terms "question generation" and "follow-up question generation" interchangeably. This work is based on research conducted in [7, 4, 8, 1, 9, 2, 10, 3, 11, 12, 13].
**Data:** The existing mental health datasets are summarized in Table 1. To the best of our knowledge, no dataset exists that incorporates ProKnow into the dataset. [17] developed a rich annotation scheme that labeled strategies corresponding to 44 counseling conversations from among "domain, strategy, social exchange, and task-focused exchange" and trained a classifier to predict the counseling strategy. While the datasets contain reasonably rich annotation, they do not capture ProKnow.
**Algorithms:** If the dataset contains ProKnow or created using an external ProKnow, an algorithm can embed such annotations in a vector space for use by the NLG pipeline. However, such a strategy still leads to a black-box approach as it is difficult to comprehend how the algorithm is adapting to the ProKnow. As a result, the algorithm won't be explainable to the clinicians. Prior studies on transformer or sequence-to-sequence based question generation models have described their question generation function as conditional probability depending on (a) contextual passage, and (b) a ground truth answer. This scenario is very similar to SQUADv1, Natural Questions, WebQuestions, etc ([21, 22]). However, models trained on either of these datasets or similar _won't_ be able to generate a sequential list of questions that are required in clinical triage. Every set of questions in a clinical questionnaire is designed to judge the severity of the mental condition of an individual. In suicide-risk severity conditions, there is a flowchart representing a set sequence of questions, whereas, in anxiety or depression triage, the next question depends on the preceding question ([23]). Hence, along with the contextual passage and answer, we condition the current question generation
on the previously generated question.
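Concretely, this amounts to factorising the probability of the next follow-up question over its tokens while conditioning on the passage, the answer, and the previous question (the notation below is ours; the paper does not fix a specific symbolisation):
\[P(Q_{k+1}\mid C,A,Q_{k})=\prod_{t=1}^{|Q_{k+1}|}p\left(w_{t}\mid w_{<t},C,A,Q_{k}\right),\]
where \(C\) is the contextual passage, \(A\) the ground-truth answer, and \(w_{1},\dots,w_{|Q_{k+1}|}\) the tokens of the generated question \(Q_{k+1}\).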
Reinforcement Learning (RL) approaches have tried to model process knowledge in the generation process by rewarding the model for adherence to the ground truth using natural language generation evaluation metrics such as BLEU-n and ROUGE-L. However, they do not _explicitly_ model clinically practiced ProKnow, which enables explainable NLG that end-users and domain experts can trust ([24, 25, 26]). Hence, a method that effectively utilizes ProKnow will contribute to algorithmic explainability in the NLG process ([27, 28]). We demonstrate that the use of explicit clinical knowledge in both datasets and methods yields a convAI agent capable of safe and explainable generation.
**Human Biases through ProKnow:** Pre-trained attention-based language models are biased toward the lexical and syntactic co-occurrences between words in the training corpora. The loss function of language models learns human biases, which are not well documented. In such a scenario, when such models are fine-tuned on sensitive domains like mental health, they tend to generate sentences following the nature of the fine-tuning corpus. Hence, clinically verifiable learnable heuristics are desired to improve fine-tuning; we refer the reader to ProKnow-algo (Section 3). **Heuristic 1** (point 2 in the algorithm) enforces that the generated question should be of a particular tag (e.g., symptoms, cause, medication, etc.) and rank, which regulates the order in which the generated question should appear. Without these heuristics, generated questions can lose semantics and order. **Heuristic 2** (refer to point 3) ensures the generated question has entities in the mental health knowledge base (Mayo Clinic, in our proposed method). This enforces the preservation of context in the generated question, given the user's content. **Heuristic 3** (refer to point 4) includes semantic lexicons built from the PHQ-9 and the GAD-7, with support from the involved clinicians. The purpose of the lexicons is to ensure that terms that refer to question 1 in the questionnaire are present in the generated question. Without this heuristic, it would not be easy to rank the generated question. Prior studies like Retrofitting ([29]), CounterFitting ([30]), and BERT-refinement ([31]) use semantic lexicons.
In our proposed ProKnow-algo, we incorporate Human Biases that are well documented in clinical literature. These biases help language models focus on those clinically-relevant sentences in the posts that can contribute toward safe and diagnostically relevant questions ([32]).
## 2 ProKnow-data Construction
We followed a well-defined and expert-regulated method to create ProKnow-data for MDD and AD. It is a 2-step process with four rounds of annotation involving two senior psychiatrists (SPs) and two resident psychiatrists (RPs).
\begin{table}
\begin{tabular}{l l l l l} \hline \hline
**Datasets** & **Process-Guided** & **Safety-Constrained** & **Medical Knowledge** & **Explainable** \\ \hline Counsel Chat [14] & ✗ & ✗ & ✗ & ✗ \\ CBT [15] & ✓ & ✗ & ✗ & ✗ \\ CC [16] & ✗ & ✗ & ✓ & ✗ \\ CC-44 [17] & ✗ & ✗ & ✗ & ✗ \\ Role Play [18] & ✗ & ✓ & ✗ & ✗ \\ SNAP [19] & ✓ & ✓ & ✗ & ✗ \\ Reddit C-SSRS [20] & ✗ & ✗ & ✓ & ✓ \\
**Proposed Dataset** (ProKnow-data) & ✓ & ✓ & ✓ & ✓ \\ \hline \hline \end{tabular}
\end{table}
Table 1: ✓ indicates a dataset has the feature, and ✗ that it does not. ProKnow components: PG: Process-Guided; SC: Safety-Constrained; MK: Medical Knowledge; E: Explainability.
SPs are responsible for defining the guidelines for creating the questions a clinician would ask when examining patients with depression or anxiety. They referred to SCID-defined guidelines (an example of ProKnow) to create questions that elaborate on the queries in PHQ-96 and GAD-77. The elaborated list of questions follows a causal pattern. Together with the MDD- and AD-defined questions, the information from SCID would create a dataset of considerable size; however, it would not be sufficient for training a convAI agent. Hence, we are challenged with two hurdles: (a) how to create a richer dataset that would enable a convAI to generate information-gathering questions whose responses from patients would be assistive to the psychiatrist, and (b) how to scale it to a larger number of samples?
Footnote 6: [https://tinyurl.com/5y7rp5sw4](https://tinyurl.com/5y7rp5sw4)
Footnote 7: [https://tinyurl.com/ycxww2u](https://tinyurl.com/ycxww2u)
**Formal description of ProKnow-data**: We define each data point in our dataset \(\mathbf{D}\) to be a triplet \(\langle x,\mathbf{Y},\mathbf{P}\rangle\), where \(x\) is a question from a medical questionnaire (PHQ-9 or GAD-7), \(\mathbf{Y}\) is a set of questions that elaborate on \(x\) (by RPs), and \(\mathbf{P}\), the process knowledge, is a set of (_Tag,Rank_) tuples corresponding to the elaboration questions in \(\mathbf{Y}\) (by an SP). An example triplet \(\langle x,\mathbf{Y},\mathbf{P}\rangle\) is seen in Table 2.
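As a concrete illustration, the first GAD-7 triplet from Table 2 below can be represented as a simple record; the field names are ours and not a prescribed schema.

```python
from dataclasses import dataclass

@dataclass
class ProKnowTriplet:
    x: str                       # question from PHQ-9 or GAD-7
    Y: list[str]                 # elaboration questions written by the RPs
    P: list[tuple[str, int]]     # (Tag, Rank) tuples assigned by an SP

example = ProKnowTriplet(
    x="Feeling nervous, anxious, or on edge",
    Y=[
        "Do you feel nervous anxious",
        "How likely are you to feel this way",
        "Any ideas on what may be causing this",
        "Have you tried any remedies to feel less nervous",
        "Are you also feeling any other symptoms such as jitters or dread",
    ],
    P=[("Yes/No", 1), ("Degree/frequency", 2), ("Causes", 3), ("Remedies", 4), ("OSI", 5)],
)
```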
As writing down questions from scratch would be tedious, to address **(a)** we supported the RPs with questions from Google's SERP API and Microsoft's People Also Ask API. Our extraction process involves a set of seed questions from the RPs and then iteratively gathering a set of 40 questions that the RPs approve or disapprove. Further, from the approved set of questions for each query in either PHQ-9 or GAD-7, they ordered the questions, giving them a causal _Tag_. The causal tag explains the process, and the ranking and relevance help the neural NLG model capture relevant and meaningful sequences. In the first round of annotation, Cohen's Kappa score was 0.72 on the relevancy of the questions, and the Krippendorff alpha score was 0.68 on ranking the questions based on causal tags. In subsequent rounds of annotation, the SPs were asked to approve or disapprove the RPs' annotations and, in case of major conflict, to seek re-annotation.
\begin{table}
\begin{tabular}{l|l|l} \hline \hline GAD-7 Question (x) & Paraphrases (\(\mathbf{Y}\)) & Process Knowledge (\(\mathbf{P}\)) \\ \hline Feeling nervous, anxious, or on edge & Do you feel nervous anxious & (Yes/No, 1) \\ & How likely are you to feel this way & (Degree/frequency, 2) \\ & Any ideas on what may be causing this & (Causes, 3) \\ & Have you tried any remedies to feel less nervous & (Remedies, 4) \\ & Are you also feeling any other symptoms such as jitters or dread & (OSI, 5) \\ \hline Not being able to stop or control worrying & Do you feel not able to stop or control worrying & (Yes/No, 1) \\ & & (Degree/frequency, 2) \\ & Any thoughts on what may be causing this & (Causes, 3) \\ & Have you tried any remedies to stop worrying & (Remedies, 4) \\ & Are you also feeling any other symptoms & (OSI, 5) \\ \hline \hline \end{tabular}
\end{table}
Table 2: Examples of ProKnow-data for GAD-7. OSI: Other Symptoms or Information
The final dataset recorded 0.805 and 0.811 Cohen agreement among the SPs and RPs, respectively, on the relevancy criteria. In the causal tag annotation, 0.733 and 0.748 Krippendorff agreement was achieved among the SPs and RPs, respectively.
To address **(b)** we expand this dataset using a T5 paraphrasing model to obtain 800,000 data points that contain conversations similar to the annotated dataset8. Such paraphrasing is required to train the branching models to generate natural language text that captures the essence but isn't repetitive during communication with the patient. Table 2 shows an example row in ProKnow-data.
Footnote 8: [https://huggingface.co/prithivida/parrot_paraphraser_on_T5](https://huggingface.co/prithivida/parrot_paraphraser_on_T5)
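A minimal sketch of this expansion step with the paraphrasing checkpoint named in the footnote is shown below; the "paraphrase: ..." prompt format and the decoding parameters are assumptions rather than settings reported by the authors.

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

MODEL_NAME = "prithivida/parrot_paraphraser_on_T5"
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_NAME)

def paraphrase(question: str, n: int = 5) -> list[str]:
    """Generate n paraphrases of an elaboration question to expand ProKnow-data."""
    inputs = tokenizer(f"paraphrase: {question}", return_tensors="pt")
    outputs = model.generate(
        **inputs,
        num_beams=max(n, 5),          # beam search; beams must be >= returned sequences
        num_return_sequences=n,
        max_length=64,
        early_stopping=True,
    )
    return [tokenizer.decode(o, skip_special_tokens=True) for o in outputs]
```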
## 3 Proposed Approach (ProKnow-algo)
The parametric knowledge within pre-trained language models (LMs) has often been exploited in downstream tasks through distillation ([33, 34]) or fine-tuning ([35]). However, enforcing conceptual flow in question generation, adherence to prior knowledge, and safety have not been explored, because these properties require a specialized dataset and training process. So, to make LMs functional over ProKnow-data, we propose a search algorithm mounted over pre-trained LMs that explicitly compares the generated question against the ProKnow-data ground-truth questions, the _Safety Lexicon_, and a knowledge base (**KB**). This introduces an additional loss function, along with the cross-entropy loss, that promotes **medical knowledge capture** and **safety**. Further, ProKnow-algo enforces conceptual flow in question generation, thus capturing precise, relevant information through the use of the rank in ProKnow-data.
At the center of ProKnow-algo is a branch-and-bound method with a conditional probability-based scoring function that takes as input the previous question (\(Q_{k}\)), the tag and rank of \(Q_{k}\), the **KB**, and the safety lexicon (\(L\)) to compute a score that reflects the safety, medical knowledge capture, and explainability of the generated question. The **KB** comprises comprehensive mental health lexicons that have been built using PHQ-9, GAD-7, and other questionnaires ([6])9. If the score is above a threshold, the question is generated; otherwise the model is penalized for such generations. We break down ProKnow-algo into four components and formalize them in Algorithm 1.
Footnote 9: Some of the lexicons are built as a part of this study and would be made public.
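Since Algorithm 1 itself is not reproduced here, the following is only a schematic sketch of how such a composite score could combine the LM likelihood with the KB, safety-lexicon, and tag/rank checks; the weights, the token-level matching, and the function signature are our assumptions, not the authors' exact formulation.

```python
def proknow_score(candidate_tokens, lm_log_prob, expected_tag, expected_rank,
                  predicted_tag, predicted_rank, kb_concepts, safety_lexicon,
                  w_kb=1.0, w_safe=1.0, w_rank=1.0):
    """Schematic composite score for one candidate follow-up question.

    lm_log_prob    : log-probability of the candidate under the language model
    kb_concepts    : set of concepts from the mental-health knowledge base
    safety_lexicon : set of clinician-approved safe concepts
    expected_*     : tag and rank required by the ProKnow annotation
    """
    tokens = set(candidate_tokens)
    kb_overlap = len(tokens & kb_concepts)                # medical knowledge capture
    unsafe_terms = len(tokens - safety_lexicon)           # crude proxy for safety violations
    rank_penalty = (predicted_rank - expected_rank) ** 2  # conceptual-flow adherence
    tag_penalty = 0 if predicted_tag == expected_tag else 1
    return (lm_log_prob
            + w_kb * kb_overlap
            - w_safe * unsafe_terms
            - w_rank * (rank_penalty + tag_penalty))
```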
Using ProKnow-algo, we propose two novel architectures:
**QG-LSTM:**: \(Q^{k}\) is passed as input to the LSTM Cell Type 1, which generates the first token for \(\hat{Q}_{k+1}\). LSTM Cell Type 2 then generates the remaining tokens of \(\hat{Q}_{k+1}\) until \(\langle EOS\rangle\) token is seen. LSTM Cell Type 1 stops generating questions when the _end of list_ sentence is seen (the _end of list_ sentence is appended to the set \(\mathbf{Y}\) in \(\langle x,\mathbf{Y},\mathbf{P}\rangle\) for all triples) to signify the end of the questions set for a query \(x\) similar to a \(\langle EOS\rangle\) token. Figure 2 illustrates the working architecture of QG-LSTM.
**QG-Transformer (QG-T):**: This model has the identical architecture to QG-LSTM, except that the LSTMs are replaced with Transformers. Our experiments find that the QG-T and T5-FT perform best. \(Q^{k}\) is passed as input to the Transformer Type 1, which generates the first token for \(\hat{Q}_{k+1}\). Transformer Type 2 then generates
the remaining tokens of \(\hat{Q}_{k+1}\) until \(\langle EOS\rangle\) token is seen. Transformer Type 1 stops generating questions when the _end of list_ sentence is seen (the _end of list_ sentence is appended to the set \(\mathbf{Y}\) in \(\langle x,\mathbf{Y},\mathbf{P}\rangle\) for all triples) to signify the end of the questions set for a query \(x\) similar to a \(\langle EOS\rangle\) token.
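A schematic of the two-stage decoding loop shared by QG-LSTM and QG-T, as we read the description above, is given below; the decoder callables are placeholders for the trained Type 1/Type 2 modules, and conditioning each new question on the previously generated one is our assumption.

```python
def generate_question_set(q_k, decoder_type1, decoder_type2,
                          eos_token="<EOS>", end_of_list="end of list"):
    """Generate the set of elaboration questions for a query q_k.

    decoder_type1(prompt) is assumed to return either the first token of the next
    question or the end-of-list sentence; decoder_type2(prefix) returns the next
    token of the current question, mirroring the Type 1 / Type 2 modules above.
    """
    questions = []
    while True:
        first_token = decoder_type1(q_k if not questions else questions[-1])
        if first_token == end_of_list:
            break
        tokens = [first_token]
        while tokens[-1] != eos_token:
            tokens.append(decoder_type2(" ".join(tokens)))
        questions.append(" ".join(tokens[:-1]))  # drop the <EOS> marker
    return questions
```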
**On the Utility of Algorithm 1:** (a) Through intersection with the knowledge base (KB), as shown in **point 3** of ProKnow-algo, we seek _specificity_ in the generated questions, as shown in the following examples. The generated question "Do you feel anxious or nervous?" _is better than_ one from the vanilla transformer/sequence-to-sequence model, "Do you feel afraid of something?". Another example from the depression context is "Is depression medication helping with the things bothering you?", which _is better than_ "how many antidepressants are you taking for the things that are bothering?". (b) Through intersection with the Lexicon, as shown in **point 4** of ProKnow-algo, we made sure the generated questions are as diagnostic as the medical questionnaire. For instance, "How long have you struggled with sleep difficulties" is _clinically more relevant_ than "Would you like to know about some major sleep disorders?". Another example of a question generated by including point 4 in ProKnow-algo is "how often did you miss the medication?"; it is information-seeking and more relevant compared to "do you know about prozac?". (c) Through the Tag and Rank heuristic, as shown in **point 2** of ProKnow-algo, we made sure the questions have a conceptual flow that follows the medical questionnaires.
Figure 2: An illustration of an LSTM cell in QG-LSTM. The architecture of QG-T is similar.
\begin{table}
\begin{tabular}{l l} \hline \hline
**Lexicon Category** & **Concepts** \\ \hline
**Anxiety Disorder (AD)** & Cognitive distortions, Panic attacks, Hopelessness, Physical sensations, Petrified, Shaken, Terrified, Fear, Scared, Panicky, On edge, With my stomach in knots, Fretful, Tense, Edgy, Antsy, Troubled \\ \hline
**Major Depressive Disorder (MDD)** & Depressed mood, Dejection, Feel no pressure, Melancholy, Feeling blah, Nothing to live for, Feeling blue, Low spirit \\ \hline \hline \end{tabular}
\end{table}
Table 3: A snapshot of the safety lexicon used to constrain question generation in the depression and anxiety context.
We reviewed prior studies that utilize principles of natural language inference to achieve conceptual flow. For instance, RoBERTa trained on the SNLI and MNLI datasets is used in downstream applications requiring flow in question generation or response generation ([36]). However, the performance of RoBERTa on entailment is unsatisfactory and unstable. After experimenting on ProKnow-data, which yielded sub-optimal results, we asked annotators to annotate the questions by providing us with a rank. Hence, in this manuscript we report Cohen's Kappa and Krippendorff alpha agreement scores. **Point 1** in ProKnow-algo is the standard scoring function used to generate questions in vanilla transformer or sequence-to-sequence models.
To validate the question generation of the two novel ProKnow-algo architectures, QG-LSTM and QG-T, we compute the cosine similarity between the context vector (QG-LSTM) or attention matrix (QG-T) and the numerical representations of the concepts in the KB.
## 4 Novel Evaluation Metrics
There are three evaluation metrics that we introduce in this research to assess the model's performance in capturing knowledge context, being safe, and explainable in question generation.
**Average Number of Unsafe Matches (AUM):** This is defined as the number of named entities, n-grams, and longest common subsequences in the generated questions that do not have an exact or partial match with the concepts in the safety lexicon. It is computed as an average over all the model-generated questions against the concepts in the safety lexicon. Such a measure provides a means to gauge the harmfulness of a generated question, or its potential for severe consequences; this subjective inference would require expert validation. The range of AUM lies between 0.0 and the maximum number of tokens present in the question. The lower the AUM, the better the model.
**Average Number of Knowledge Context Matches (AKCM):** Complementing AUM, AKCM focuses specifically on triples comprising a subject, predicate, and object extracted from the generated question. It then computes the word mover's distance between the embeddings of the triples (BERT(s;p;o)) and the concepts in the lexicon (BERT(concepts)). The range of AKCM is between 1.0 and 3.0, and the higher the AKCM, the better the model. However, we found that a higher AKCM does not always signify a better model, as a small addition of a meaningful concept can increase AKCM. Thus, we perform a statistical Student's t-test over multiple rounds of training and cross-validation results. We do the same for AUM.
**Average Square Rank Error (ASRE):** This metric measures the model's tendency to generate questions following the causal tag and rank. For example, if Q1, Q2, Q3, Q4 are generated in the correct order for a patient, then the total rank is 4. For another patient, if Q2, Q1, Q3, and Q4 are generated, then only Q3 and Q4 are in the correct order, giving a rank of 2. The range of ASRE is 0.0 to 1.0, where lower is better. Further, we used the Wilcoxon signed-rank test to measure the statistical significance of the model's generated sequence of questions over multiple cross-validation runs.
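As a rough illustration, the two simpler metrics could be implemented as follows; the token-level simplification for AUM and the normalisation constant for ASRE are assumptions, since the exact formulations (and the embedding-based AKCM) are not fully specified above.

```python
def average_unsafe_matches(questions, safety_lexicon):
    """AUM: average number of generated terms with no (partial) match in the safety lexicon.

    Simplified to unigram terms; the paper also considers named entities and n-grams.
    """
    def unsafe_count(question):
        terms = question.lower().split()
        return sum(1 for t in terms
                   if not any(t in c or c in t for c in safety_lexicon))
    return sum(unsafe_count(q) for q in questions) / len(questions)

def average_square_rank_error(generated_ranks, expected_ranks):
    """ASRE: mean squared error between generated and expected ranks, scaled to [0, 1]."""
    n = len(expected_ranks)
    worst = max((max(expected_ranks) - 1) ** 2, 1)   # normalising constant (assumption)
    errors = [(g - e) ** 2 for g, e in zip(generated_ranks, expected_ranks)]
    return sum(errors) / (n * worst)
```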
## 5 Results and Discussion
Tables 4 and 5 record the experiments with a vanilla transformer model [37], a transformer T5 fine-tuned for question generation, and our proposed models, QG-LSTM and QG-T. We conducted the experiments by augmenting ProKnow-algo onto every variant of _seq2seq_ and transformer model to show generalizability.
**(RQ1) Evaluating Explainability**: If the generated questions have concepts with clinical relevance and significance, this is recorded by AKCM. Through AKCM we found that \(T^{*}\dagger\) and T5-FT\(\dagger\) showed statistically significant generations compared to QG-LSTM\(\dagger\) and QG-T\(\dagger\). This metric contributes to explainability, as the recorded patient responses to these generated questions would help clinicians in informed decision-making. Hence, questions with clinically relevant concepts would seek informative responses. For instance, a response to "Do you feel afraid of
something?" would be less explainable compared to "Do you feel anxious or nervous?". The latter is more specific and matched with a query in GAD-7. Likewise, "Do you feel nervous often?" would yield a less informative response than "Do you feel anxious about something?".
**(RQ2) Evaluating Safety**: The questions generated using ProKnow-algo-based LMs are 89% safer than those from LMs that compute only the standard cross-entropy loss. The addition of the extra loss component, as described in Algorithm 1, allows the model to generate safer questions.
\begin{table}
\begin{tabular}{c|c|c|c|c|c|c} \hline \hline
**Model** & **ProKnow-algo Points** & **Rouge-L** & **BLEU-1** & **AUM** & **AKCM** & **ASRE** \\ \hline T5-FT & - & 0.71 & 0.59 & 2.5 & 1.0 & 0.0001 \\ T5-FT & Point 2 & 0.77 & 0.63 & 2.5 & 1.0 & 0.0001 \\ T5-FT & Point 2 and 3 & 0.77 & 0.63 & 2.5 & 1.3 & 0.0001 \\ T5-FT† & Point 2, 3, and 4 & 0.77 & 0.63 & 0.2 & 1.3 & 0.0001 \\ \hline QG-LSTM & - & 0.85 & 0.82 & 1.6 & 1.0 & 0.01 \\ QG-LSTM & Point 2 & 0.85 & 0.82 & 1.6 & 1.0 & 0.0004 \\ QG-LSTM & Point 2 and 3 & 0.85 & 0.82 & 1.6 & 1.12 & 0.0004 \\ QG-LSTM† & Point 2, 3, and 4 & 0.85 & 0.82 & 0.1 & 1.12 & 0.0004 \\ \hline QG-T & - & 0.87 & 0.82 & 1.32 & 1.0 & 0.1 \\ QG-T & Point 2 & 0.87 & 0.82 & 1.32 & 1.0 & 0.0007 \\ QG-T & Point 2 and 3 & 0.87 & 0.82 & 1.32 & 1.27 & 0.0007 \\ QG-T† & Point 2, 3, and 4 & 0.87 & 0.82 & 0.133 & 1.27 & 0.0007 \\ \hline \hline \end{tabular}
\end{table}
Table 6: Ablation study on the QG-T, QG-LSTM, and T5 models. Points 2, 3, and 4 refer to the corresponding points of ProKnow-algo. FT: Fine-Tuned for Question Generation.
\begin{table}
\begin{tabular}{l l l l|l l l} \hline \hline
**Methods** & **AUM** & **AKCM** & **ASRE** & **Methods** & **AUM** & **AKCM** & **ASRE** \\ & \(\downarrow\)**Safety** & \(\uparrow\)**MKC** & \(\downarrow\)**ProKnow** & \(\downarrow\)**Safety** & \(\uparrow\)**MKC** & \(\downarrow\)**ProKnow** \\ \hline
**T\({}^{*}\)** & 2.2 & 1.0 & 0.0134 & **T\({}^{*}\)**† & 0.306 & 1.522 & 0.0001088 \\ & & & & & (✓) & (✓) & (✓) \\
**T5-FT** & 2.0 & 1.0 & 0.008 & **T5-FT†** & 0.171 & 1.412 & 0.000124 \\ & & & & & (✓) & (✓) \\
**QG-** & 1.167 & 1.0 & 0.007 & **QG-** & 0.106 & 1.123 & 0.000453 \\ & & & & & (✓) & (✓) \\
**QG-T** & 1.32 & 1.0 & 0.006 & **QG-T†** & 0.133 & 1.273 & 0.000712 \\ & & & & & (✓) & (✓) \\ \hline \hline \end{tabular}
\end{table}
Table 4: Comparison between models with the heuristic (†) and without the heuristic. ✓/✗ indicates statistically significant/insignificant improvement over the baselines at \(p<0.05\). \(\uparrow\) denotes that a higher score is better and \(\downarrow\) denotes that a lower score is better. MKC: Medical Knowledge Capture. T\({}^{*}\): [37]
\begin{table}
\begin{tabular}{l l l l l l} \hline \hline
**Methods** & **Rouge-L** & **BLEU-1** & **Methods** & **Rouge-L** & **BLEU-1** \\ \hline
**T\({}^{*}\)** & 0.63 & 0.49 & **T\({}^{*}\)**† & 0.67 & 0.55 \\
**T5-FT** & 0.71 & 0.59 & **T5-FT†** & 0.77 & 0.63 \\
**QG-LSTM** & 0.85 & 0.73 & **QG-LSTM†** & 0.90 & 0.78 \\
**QG-T** & 0.87 & 0.82 & **QG-T†** & 0.90 & 0.85 \\ \hline \hline \end{tabular}
\end{table}
Table 5: The models without heuristics are evaluated by generation metrics.
For example, when a patient says "I feel bothered by little interest and have the least pleasure in doing anything", a QG-T without ProKnow-algo selects from the following top-3 generated questions: (a) "Did you check your dopamine?", (b) "Do you feel your brain is affected?", and (c) "Did you intend to indulge in risky behaviors?". Whereas QG-T\(\dagger\) selects from the following top-3 generated questions: (a) "What does lack of pleasure mean to you?", (b) "Do you feel little pleasure doing things you used to enjoy?", and (c) "How long have you struggled with lack of interest in things you used to enjoy?". AUM measured the generations from QG-T\(\dagger\) to be safer than those from QG-T because terms like _dopamine, brain, risky behaviors_ do not show up in the safety lexicon. Likewise, among the generated questions _"Do you feel irritable?"_ and _"Do you feel easily annoyed or destructive?"_, the former scored a higher probability of being safe. This is because _destructive_ is associated with more unsafe phrases and is not present in the _Safety Lexicon_. Thus, ProKnow-algo steered the generation to the former sentence.
**(RQ3) Evaluating the Process in Generation**: ASRE recorded that questions generated using models with \(\dagger\) had an almost 96% reduction in ordinal error. This implies that ProKnow-algo enforced checks on the conceptual flow in pre-trained LMs in the last hidden state before question generation. In the following example, a user mentions that "He is bothered by trouble concentrating while reading the newspaper or watching television"; T5-FT then generated questions in the following order: (1) "Do you have a hard time falling asleep and staying asleep?", (2) "Do you feel like you sleep a lot but are still tired?", (3) "Would you like to know about some major sleep disorders?", and (4) "Would you like to know about the 5 major sleep disorder types?". Observed carefully, these questions follow the _tag_ order _Symptoms_\(\rightarrow\)_Symptoms_\(\rightarrow\)_Yes/No_ (plus an irrelevant generated question). Whereas the questions generated by T5-FT\(\dagger\) are in the following order: (1) "How many hours of sleep do you get on average each night?", (2) "Do you feel like you sleep a lot but are still tired?", (3) "How long have you struggled with sleep difficulties", and (4) "Have you been diagnosed with any sleep disorder?". The process followed by these questions is _Cause_\(\rightarrow\)_Symptoms_\(\rightarrow\)_Cause and Symptoms_\(\rightarrow\)_Diagnosis_, which is process-guided question generation. Further, among the generated text "Do you feel nervous often?" and "Do you feel anxious about something?", the former scored a higher probability of being the next sentence. However, as the former is associated with the _tag_ _Degree/frequency_ and the latter with the _tag_ _Yes/No_, ProKnow-algo leads the algorithm to choose the latter sentence. Overall, 82% of the time the ProKnow-algo-based question generations were safe, explainable, and followed the clinical guidelines.
**Negative outcomes:** Among the generated text, "Do you feel nervous?" and "Do you feel nervous often?" both sentences scored a _rank_ 2. This is erroneous as the former is of _rank_ 1. Thus, we see that due to the lack of variety in the phrasing of certain sentences generated, the rank in the heuristic is wrongly computed. Further, among the generated \(\hat{Q_{k}}\), "Do you feel fearful?" and "Do you feel nervous a lot?", the former scored a _rank_ 2 and the latter scored a _rank_ 1. This is erroneous as the former is of _rank_ 1. Once again, we see that the rank in the heuristic is wrongly computed. In our experiments, we see a negative outcome 18% of the time, which implied we need to conduct more studies with more diverse datasets. We find that these errors occur when sentence generation requires relatively high semantic variations.
## 6 ProKnow Prototype for Mental Health Diagnostic Assistance
We prototype the text generation system trained using ProKnow-algo and ProKnow-data and compare the text generation quality against the T5 model fine-tuned on ProKnow-data. We see that the prototype's generations are safer in terms of the evaluation metrics defined in Section 4. ProKnow-algo is incorporated in the question generation component of the mental health chatbot demonstrated here: **ProKnow Demo** [7]. We see that high-stakes use-cases such as mental health assessment from text data can benefit immensely from constrained generation through the use of ProKnow, both in model learning and in dataset construction.
## 7 Conclusion
Developing models with process knowledge (e.g. clinical knowledge) is critical in making AI safe and explainable. Existing pre-trained language models have yielded out-of-context or factually incorrect results10. We believe that enforcing order and relevance, in addition to the standard cross-entropy loss, supports language models in following a sequence that humans often follow. Further, safety and explainability can also be enforced by introducing additional scores in the loss, such as medical knowledge capture. However, demonstrating such functionality requires a specialized dataset that exhibits process knowledge. In this research, we presented the intertwined contributions of ProKnow-data and a generic ProKnow-algo that capture specialized medical process knowledge for safe and explainable diagnostic NLG for MDD and AD. First, we constructed an expert-annotated dataset, ProKnow-data, that explicitly captures ProKnow. Further, an algorithmic approach, ProKnow-algo, was developed to effectively utilize ProKnow-data using a search strategy, neural language models, and heuristics that account for safety, medical knowledge capture, and explainability in diagnostic NLG outcomes. To the best of our knowledge, we are the first to produce mental health data for improving NLG in the mental health sphere. Additionally, we create safety lexicons and a KB to support safety and explainability in statistical AI when used to create convAI agents in mental health. Our experiments, with statistical significance, demonstrate that ProKnow is a concrete first step towards promoting trustworthy AI systems for mental health using such a framework. Additional examples of ProKnow-data are provided in the supplementary material.
Footnote 10: [https://blog.google/technology/ai/lamda/](https://blog.google/technology/ai/lamda/)
**Implementation Details:** We implemented our method using PyTorch on top of the HuggingFace Transformer Library [38] for T5-Fine Tuned and QG-T. For LSTM and QG-LSTM, we implemented our own method. The hyperparameter tuning was performed using python library "ray", setting the learning rate to 1.21e-5. QG-LSTM took 4 hours of training with cross-validation intervals in each epoch, whereas QG-T took 6 hours of training. All the models have been trained-tested on NVIDIA Tesla V100 GPUs, each with 16 GB RAM.
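As an illustration of this setup, the following minimal sketch shows how a fine-tuned T5 checkpoint can be used to propose candidate follow-up questions with the HuggingFace library; the checkpoint name, prompt format, and patient utterance are placeholders, and the ProKnow-algo re-ranking itself is only indicated in the comments.

```python
# Minimal sketch (not the released ProKnow code): generating candidate follow-up
# questions with a fine-tuned T5 checkpoint. Checkpoint and prompt are placeholders.
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("t5-base")
model = T5ForConditionalGeneration.from_pretrained("t5-base")  # swap in the fine-tuned weights

utterance = "I am bothered by trouble concentrating while reading the newspaper."
inputs = tokenizer("generate question: " + utterance, return_tensors="pt")

# Beam search returns several candidates; ProKnow-algo would then re-rank them
# using the safety, medical knowledge capture, and process (tag-order) scores.
outputs = model.generate(**inputs, num_beams=5, num_return_sequences=5, max_length=48)
for out in outputs:
    print(tokenizer.decode(out, skip_special_tokens=True))
```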
**Limitations:** Although our proposed approach offers several advantages over the existing models for question generation in the mental health domain, there are several limitations as well. Since the main idea behind our approach is the usage of "process knowledge", it can be computationally expensive and time-consuming to generate the follow-up questions. Further, while we demonstrated the efficacy of our approach on a closed-domain task, its utility in an open domain has not been explored. The ProKnow-data construction took a considerable amount of effort and covered depression and anxiety. Creating a similar dataset for other mental health conditions, such as schizophrenia and suicidality, can be more challenging. This also implies that there is a huge scope for improvement and extension in ProKnow-driven mental health assistance.
**Ethical Considerations:** This paper provides a novel mental health dataset constructed using our proposed ProKnow-algorithm. The medical guidelines for the construction of this dataset were given by the Senior Psychiatrist adhering to the PHQ-9 and GAD-7 questionnaires. Further, two Resident Psychiatrists from different hospitals created detailed questions. The dataset is annotated using expert annotators. Possible biases in our model predictions could be due to the annotation techniques and are not deliberate. The content concerning AD and MDD can result in unfavorable real-life interaction scenarios. However, the current research aims to establish the claim that clinical process knowledge can be infused into deep language models to make them explainable and safe. In our algorithm, we mitigate the unfavorable cases, as unfavorable sentences are not diagnostically acceptable to clinicians using AI-based assistance. The ProKnow-data will be made publicly available by following best practices of ethical research ([39, 40]). Finally, we do not make any kind of medical recommendation or diagnosis, and this dataset should be purely used for research purposes.
## 8 Acknowledgement
We want to thank Dr. Meera Narasimhan for helpful insights on constructing ProKnow guidelines for ProKnow-data. Also, we would like to thank her team for helping us with multiple annotation efforts. The prototype to be released will be deployed in Prisma Health, the largest healthcare provider in the state of South Carolina. We acknowledge partial support from National Science Foundation (NSF) awards #1761931 and #2133842 [27, 41].
|
2304.00765
|
Managing power grids through topology actions: A comparative study
between advanced rule-based and reinforcement learning agents
|
The operation of electricity grids has become increasingly complex due to the
current upheaval and the increase in renewable energy production. As a
consequence, active grid management is reaching its limits with conventional
approaches. In the context of the Learning to Run a Power Network challenge, it
has been shown that Reinforcement Learning (RL) is an efficient and reliable
approach with considerable potential for automatic grid operation. In this
article, we analyse the submitted agent from Binbinchen and provide novel
strategies to improve the agent, both for the RL and the rule-based approach.
The main improvement is a N-1 strategy, where we consider topology actions that
keep the grid stable, even if one line is disconnected. More, we also propose a
topology reversion to the original grid, which proved to be beneficial. The
improvements are tested against reference approaches on the challenge test sets
and are able to increase the performance of the rule-based agent by 27%. In
direct comparison between rule-based and RL agent we find similar performance.
However, the RL agent has a clear computational advantage. We also analyse the
behaviour in an exemplary case in more detail to provide additional insights.
Here, we observe that through the N-1 strategy, the actions of the agents
become more diversified.
|
Malte Lehna, Jan Viebahn, Christoph Scholz, Antoine Marot, Sven Tomforde
|
2023-04-03T07:34:43Z
|
http://arxiv.org/abs/2304.00765v2
|
Managing power grids through topology actions: A comparative study between advanced rule-based and reinforcement learning agents
###### Abstract
The operation of electricity grids has become increasingly complex due to the current upheaval and the increase in renewable energy production. As a consequence, active grid management is reaching its limits with conventional approaches. In the context of the Learning to Run a Power Network (L2RPN) challenge, it has been shown that Reinforcement Learning (RL) is an efficient and reliable approach with considerable potential for automatic grid operation. In this article, we analyse the submitted agent from Binbinchen and provide novel strategies to improve the agent, both for the RL and the rule-based approach. The main improvement is an N-1 strategy, where we consider topology actions that keep the grid stable even if one line is disconnected. Moreover, we also propose a topology reversion to the original grid, which proved to be beneficial. The improvements are tested against reference approaches on the challenge test sets and are able to increase the performance of the rule-based agent by 27%. In a direct comparison between the rule-based and RL agents we find similar performance. However, the RL agent has a clear computational advantage. We also analyse the behaviour in an exemplary case in more detail to provide additional insights. Here, we observe that through the N-1 strategy, the actions of the agents become more diversified.
Deep Reinforcement Learning Electricity Grids Learning to Run a Power Network Topology Control Proximal Policy Optimisation
## 1 Introduction
### Acronyms
* \(Do\_Nothing\): Do-nothing Agent
* \(Expert\): Expert Agent
* \(Senior_{N-1,Topo}\): Senior Agent with N-1 action and topology reversion
* \(Tutor_{N-1,Topo}\): Tutor Agent with N-1 strategy and the topology reversion
* \(Tutor_{N-1}\): Tutor Agent with N-1 strategy
* \(Tutor_{original}\): Original Tutor Agent
* BOHB: Bayesian Optimisation Hyperband
* DDQN: Dueling Deep Q Network
* DRL: Deep Reinforcement Learning
* GNN: Graph Neural Network
* L2RPN: Learning to Run a Power Network
* MDP: Markov Decision Process
* ML: Machine Learning
* OPF: Optimal Power Flow
* PBT: Population Based Training
* PPO: Proximal Policy Optimisation
* RL: Reinforcement Learning
* SMAAC: Semi Markov Afterstate Actor Critic
* TSO: Transmission System Operators
### Overview
With the growth of renewable energies in the electricity mix and their volatile behaviour, the complexity of operating an electricity grid is constantly increasing for Transmission System Operators (TSO) [1]. As a result, it is not only necessary to plan production capacities and predict the demand correctly, but also to consider new options to manage grids in times of instability. Topological changes at substation level are an option that is gaining increasing attention, as they are an existing, cheap, but underutilised flexibility. Their usage in controlling the stability of electricity grids has been discussed by several researchers over the last decades [2]. However, changes in topology also have a major drawback, namely that their optimisation requires a large amount of computational resources [3]. In the past, Optimal Power Flow (OPF) optimisation has often been used, but researchers have noted potential difficulties with the increase in renewable energy and smart grids [4] as well as criticised its inability to cope with large, non-linear combinatorial action spaces [5]. To solve this problem, the French TSO RTE proposed the Learning to Run a Power Network (L2RPN) Reinforcement Learning (RL) challenge, which aims to join forces between grid experts and the Machine Learning (ML) community.4 In the challenge, [6] provided an RL environment with the Grid2Op package [7], which enabled research on real data and the training of different algorithms on electricity grids.5 Grid2Op has become one of the leading frameworks for RL grid control in scenario-based simulations. We therefore conduct our research in this environment in order to provide comparable results.
Footnote 4: L2RPN challenges: [https://l2rpn.chalearn.org/](https://l2rpn.chalearn.org/) (last access 20/03/2023).
Footnote 5: Grid2Op: [https://github.com/rte-france/Grid2Op](https://github.com/rte-france/Grid2Op) (last access 20/03/2023).
### Research Contribution
While the first contributions to the automation of grid control have been made [8], it is not yet clear how large the impact of the RL solution will be. Is the RL approach solely responsible, or can heuristic and rule-based approaches achieve a similar solution if their behaviour is sophisticated enough?
In our work, we contribute to this discussion by providing a systematic analysis comparing a rule-based agent and an RL agent. For this, we have chosen the framework of Binbinchen [9] from the 2020 L2RPN robustness challenge, as it includes both a rule-based and an RL agent. As part of our work, we extend the rule-based approach with two significant improvements. First, we propose an N-1 strategy to ensure more robust topology actions of the rule-based agent. This N-1 strategy favours topology actions that ensure a stable grid, even if one line of the grid is disconnected. Second, we encourage the agent to revert to the original topology when the state of the grid is relatively stable. We analysed the effect of these improvements in an experiment where we ran the scenarios with 30 different seeds to ensure a statistically meaningful evaluation with less randomness. Our experiments show that with our improvements we were able to increase the performance of the rule-based agent by \(27\%\) in comparison to the original Binbinchen agent. Furthermore, we show that our advanced rule-based approach is able to achieve similar performance to the RL agent. However, when analysing the computational cost, the RL agent is still advantageous. In order to use the agents on all Grid2Op environments, we reworked the original code and published it as an open-source package on Github.6 Overall, our contributions can be summarised as follows:
Footnote 6: CurriculumAgent: [https://github.com/FraunhoferIEE/CurriculumAgent](https://github.com/FraunhoferIEE/CurriculumAgent) (last access 20/03/2023).
1. We analysed the solution of [9] and transferred its contents from a rapid prototyping state to a Python package. With various algorithmic changes, we ensure the usability for other grids without hard-coded implications based solely on the robustness challenge. This allows the community to include methods from the CurriculumAgent and compare their agents against our baseline.
2. We introduce two important improvements (N-1 strategy and topology reversion) to the rule-based agent that dramatically increase its performance by up to \(27\%\) in comparison to the original agent.
3. We apply the above modifications to the RL agent and benchmark its performance with the rule-based method.
4. We propose a test framework with multiple seeds to ensure significant comparable results.
The remainder of this article is structured as follows. In Section 2, we provide the related research and afterwards outline the structure of the Grid2Op environment in Section 3. In Section 4, we then introduce the _Teacher-Tutor-Junior-Senior_ framework, followed by our novel improvements in Section 5. In Section 6, we present the experimental setup of this article and afterwards, in Section 7, we discuss the results. Lastly, we provide a conclusion in Section 8.
## 2 Related Work
In recent years, there has been an increasing interest in the usage of Reinforcement Learning (RL) for the control of power systems. One early paper was published by [10] in 2004, where the authors proposed to use the RL approach for both offline and online applications in the context of power systems. However, the use of RL was still constrained due to the computational limitations of that time. Everything changed with the breakthrough of Deep Reinforcement Learning (DRL), first introduced by [11; 12]. Their work set off a wave of research, with groundbreaking results in various fields [13; 14; 15]. The idea behind DRL is that the policies and/or value functions of the algorithms are not computed directly, but instead are approximated through a deep neural network. This allows the learning of advanced strategies, even for complex problems.7
Footnote 7: Note that for the sake of simplicity, we henceforth refer to DRL as RL in this paper.
Following these breakthroughs, researchers began addressing grid control problems with RL, especially driven by the introduction of the L2RPN challenge [5; 16; 17; 8]. The first successful RL solution was presented by participants in the L2RPN challenge 2019 [18], who used the Dueling Deep Q Network (DDQN) agent as their RL approach and were able to win the competition. The authors in [18] also introduced, for the first time, an imitation learning strategy and combined it with a guided exploration approach. In the following L2RPN WCCI 2020, other participants implemented a Semi Markov Afterstate Actor Critic (SMAAC) algorithm [19], which focused on the final topology rather than on switching individual actions at the substation level to learn the different topology actions. By applying a Graph Neural Network (GNN) to the observation space, [19] further transformed the original grid into an afterstate representation. With their approach, [19] outperformed all other candidates and became the first winner of the challenge. Following the first challenge, [5; 8] increased the difficulty for RL agents in the L2RPN NeurIPS 2020 challenge by introducing a robustness track with an adversarial agent and an adaptability track with an increasing share of renewable energy injections [5]. Interestingly, neither track could be solved by the previous agents; instead, new solutions were required. One submission was published in [20], where the authors used an evolutionary RL approach in order to win both challenge tracks. They introduced a planning algorithm to actively search through the available actions provided by the policy. Afterwards, the planning algorithm was optimised through an evolution strategy. Within the black-box optimisation method, Gaussian white noise was added to ensure an adequate exploration of the policies. Apart from topology actions, the authors also included line reconnection actions and some hand-picked redispatch actions [20]. The second best performance of the robustness track was the agent by Binbinchen [9], already mentioned in Section 1.2. They proposed a framework where an RL agent learned through imitating a rule-based agent, ensuring a better handling of the large action space. From a methodological perspective, their agent was based on the Proximal Policy Optimisation (PPO) algorithm by [21]. Due to their performance and their straightforward framework, the approach of [9] is the foundation of our work, and we describe their framework in more detail in Section 4.1.
One notable approach that was not part of the challenge, but was published recently, is the paper [22]. The researchers combined an RL algorithm with a heuristic approach, which shares several similarities with the agent of [9] and our work. With their combination, they were able to achieve better results than any previous agents on the legacy data set of the 2020 L2RPN robustness challenge, while at the same time reducing the overall run-time of their agent. Recently, another agent was proposed in [23], based on AlphaZero [24]. Similar to AlphaZero, the researchers of [23] used Monte Carlo Tree Search to find appropriate actions in the grid. However, some specifications were adapted to the use-case of grid management. These changes were an early stopping criterion to tackle the expensive computational cost and a heuristic model. The latter was used instead of a neural network to describe the agent's value function, further reducing the training time [23]. With their approach, they were not only able to win the L2RPN WCCI challenge of 2022, but also achieved, using only topology actions, the same results as agents with redispatching actions. This indicates the importance of topology actions for future research.
Finally, given that we are also interested in rule-based approaches, we would like to emphasise the expert agent of [25]. In their work, the researchers built an agent that searched a priori for remedies by identifying local power paths to manage the electricity grid. In their agent, they include expert knowledge to rank the topology actions and choose the correct action in case the grid is unstable. Considering that the agent is also a good starting point for our rule-based approach, we include it as a baseline in our work.
## 3 The Grid2Op Environment
The Grid2Op environment is a Python package created by RTE to provide a platform for the development of RL agents. As outlined in [6; 16], the package enables the training and evaluation of agents in the RL Gym framework [26] on known IEEE synthetic power grids, e.g., the IEEE14 or IEEE118 grids.8 To correctly depict a Markov Decision Process (MDP), the environment of Grid2Op consists of an observation space, an action space and provides multiple reward functions.
The action space is divided into three action types. The first type of action is the line action \(a^{(line)}\in~{}\mathcal{A}^{(line)}\) that connects and disconnects lines between two substations. The second type of action is a topological action \(a^{(topo)}\) from the set of all possible topology actions \(a^{(topo)}\in~{}\mathcal{A}^{(topo)}\). The topology action changes the node configuration on a substation level. This is visualised in Figure 1, where the node splitting is demonstrated on a simple example case, based on [6]. Note that the complexity increases with more lines connected to a substation [17]. As a third type of action, Grid2Op offers the possibility to change the power injection through redispatch actions with \(a^{(redisp)}\in\mathcal{A}^{(redisp)}\), by adjusting the production of the generators in the grid. While the action spaces \(\mathcal{A}^{(line)}\) and \(\mathcal{A}^{(topo)}\) are each discrete action spaces, \(\mathcal{A}^{(redisp)}\) is considered continuous. With respect to the observation space, Grid2Op provides various information on the grid that ranges from topology information and power flows to line capacities and time variables, as well as the injection of the generators and the demand of the consumers. While all of this information is vital for training the agents, especially the line capacities are used to measure the stability of the grid. In terms of Grid2Op, the capacity of a line is defined as the observed current flow of the line divided by its thermal limit. We denote the capacity of a line \(l\) as \(\rho_{l,t}\in\mathbb{R}^{+}\), with \(l=1,\ldots,L\) from the set of all lines \(l\in\mathcal{L}\). We further record for each time step \(t\) the maximum capacity across all lines as \(\rho_{max,t}=\max\limits_{l=1,\ldots,L}(\rho_{l,t})\). The grid is considered unstable if at least one line is above its capacity of 100%. In addition to the actual observation, Grid2Op offers the possibility to simulate the effect of an action on the grid. This simulation method is not fully accurate, as it relies on forecasted values. We denote this simulation of the line capacity as \(\hat{\rho}_{l,t+1}\) and the maximum of the simulated value as \(\hat{\rho}_{max,t+1}=\max\limits_{l=1,\ldots,L}(\hat{\rho}_{l,t+1})\).
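To make the capacity notation concrete, the following minimal sketch reads \(\rho_{max,t}\) from a Grid2Op observation and obtains the simulated \(\hat{\rho}_{max,t+1}\) for a candidate action; the environment name is a placeholder and only standard observation attributes (`rho`, `simulate`) are used.

```python
# Minimal sketch: line capacities and action simulation in Grid2Op.
import grid2op

env = grid2op.make("l2rpn_neurips_2020_track1_small")  # placeholder environment name
obs = env.reset()

rho_max_t = obs.rho.max()                 # rho_{max,t}: worst relative line loading now

candidate = env.action_space({})          # do-nothing action as a simple example
sim_obs, sim_reward, sim_done, sim_info = obs.simulate(candidate)
rho_max_next = sim_obs.rho.max()          # hat{rho}_{max,t+1}: forecast-based estimate
print(rho_max_t, rho_max_next)
```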
In our work, the robustness track of the 2020 L2RPN challenge was chosen as benchmark. Here, the grid consisted of one out of three regions of the IEEE118 grid, shown in Figure 2. This corresponds to an observation space of size \(1429\) and an action space with \(59\) line actions (discrete actions), \(66,918\) topology actions (discrete actions) and \(80\) redispatch actions (continuous actions). The key challenge of the L2RPN robustness track, however, was an adversarial agent described in [27]. The adversarial agent disconnected lines in the grid quasi-randomly, simulating unforeseeable events in the grid. In the challenge, the attacks were limited to a total of ten target lines. The robustness environment further includes specific constraints to ensure a realistic setting of the challenge [6]. One constraint was that a power line could disconnect from a substation for several reasons. One could be a power overflow, i.e., \(\rho_{max,t}>1.0\), which would lead to a disconnection after three consecutive timesteps. Another was external line outages, either planned in the case of maintenance, or unplanned through adversarial attacks or other failures. Further, there were also constraints with regard to the actions of the agent. Agents could not repeat an action on the same target (line, substation, redispatch) and therefore had to wait for a specific cooldown period. Again, as outlined in Section 2, there were differences between the cooldown periods of forceful line disconnection and active line disconnection by the agent, leading to an imbalance in the game mechanics.9 The agents had to adapt to the adversarial agent and these constraints, given that a missing line could trigger a cascading failure elsewhere in the grid [6]. In terms of the evaluation metric, we decided to use the same score as the L2RPN challenge, described in [8]. The score has the reference values \([-100,0,80,100]\), where \(-100\) corresponds to an initial failure, \(0\) corresponds to the survival of the Do-Nothing Agent and \(80\) indicates a completion of all scenarios. The maximum score of \(100\) could be achieved if the agents were further able to optimise the economic cost of their actions. Consequently, a higher score usually indicates a longer survival in the scenarios. For testing, we received the test scenarios of the original challenge, courtesy of the L2RPN organisers.
Figure 1: Visualisation of an exemplary topology action, adapted from the paper of [6]. The original grid shows an overload of the right line (in red) at time step \(t\), due to a high demand from both load sinks. By executing a topology action, i.e., splitting the load flow of the substation into two separate nodes, the bottom-right substation can divert the power and the grid returns to a more stable state without an overflow.
## 4 The Teacher-Tutor-Junior-Senior Framework
### Solution by Binbinchen
With the Grid2Op configuration outlined above, we analysed Binbinchen's solution [9]. They proposed a _Teacher-Tutor-Junior-Senior_ framework in which an RL agent was trained through imitation learning of a rule-based agent. In the first step, the _Teacher_ method used a brute-force approach to gain experience and select topological actions by simulating all \(66,918\) actions. The most frequently used actions of the _Teacher_ were then selected (208 by Binbinchen). With the reduced set of actions, the rule-based _Tutor_ gathered experience consisting of observation-action pairs. These pairs were afterwards used as ground truth in a supervised learning context for imitation learning and fed into the _Junior_ model, which was a feed-forward network. In the last step, the RL approach, called _Senior_, was trained on an adapted Grid2Op environment. The _Senior_ itself was initialised with the weights of the _Junior_ model to accelerate training and convergence.
We decided to analyse this approach in depth for the following reasons: First, in this paper we focus on unitary topology actions, i.e., a single topology action that changes the grid, rather than multiple consecutive actions. This is because we want the agent to identify importance based on its own strategy, rather than giving it a pre-selected order. According to [8], both the first place agent [20] and the third place agent [28] used more sequential approaches. Further, [20] combined their actions with redispatching actions. In this work, we only allow topology actions and line actions in order to reconnect disconnected lines, thus favouring the Binbinchen approach. As a second reason, we chose the framework because of its clear structure and comprehensibility. In the case of [20], interpretation is rather difficult, as the underlying method is based on a black-box optimisation approach. This is supported by the fact that Binbinchen's code is fully available, which allows for in-depth analysis and replicability. Finally, another participant reached second place with an adapted version of the Binbinchen agent in the 2021 L2RPN Trust challenge [29]. Therefore, the _Teacher-Tutor-Junior-Senior_ framework is explained in detail in the following.
### The Teacher
Although the robustness track of Grid2Op has a total of \(66,918\) topological actions, not all actions are feasible. Looking at Figure 2, one can see that substation 16 is connected to several other substations, resulting in many possible combinations. In fact, more than 97.9% of all topology actions are topology configurations at substation 16. It is therefore clear that there is a need to reduce the number of actions. Thus, the _Teacher_ model searches through all
Figure 2: The electricity grid of the robustness track, based on a subset of the IEEE118 grid. In the grid, a total of 35 substations exist that are interconnected with power lines. The grid has both generators and load sinks in different parts of the grid. As original state, all power lines are connected to bus 1 on the substations. Through topology actions, these can be changed to bus 2. The figure was created with the internal plot method of Grid2Op.
available actions in a brute-force manner and selects the best candidate. More precisely, the _Teacher_ iterates through different scenarios of the environment until the threshold of \(\rho_{teacher}=0.925\) is exceeded by \(\rho_{max,t}\).10 In that case, the _Teacher_ searches through all actions and evaluates them with the pre-built simulation method of the Grid2Op environment. The action which results in the lowest \(\hat{\rho}_{max,t+1}\) is selected, executed and saved (excluding do-nothing actions). After simulating a large number of scenarios, all the saved actions are evaluated. The actions are sorted by their frequency, i.e., the number of occurrences across the scenarios. Afterwards, one manually selects a subset of the most frequent actions. In addition to the simple brute-force approach, it is possible to adjust the search algorithm to gather more advanced actions. Binbinchen created an adversarial _Teacher_ that forcefully disconnected relevant lines, simulating more frequently the behaviour of the adversarial agent. This resulted in two different action sets for Binbinchen, with 62 actions from the adversarial _Teacher_ and 146 from the normal _Teacher_. In our work, we also use these 208 actions and build our enhancements on top.
Footnote 10: Note that the observation of the _Tutor_ were only a subset of all available observations, see A for more information.
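The greedy search described above can be summarised in a few lines of Python; the sketch below is an illustrative reimplementation against the Grid2Op observation API (storing actions by their index in the action list is an assumption, not the released code).

```python
# Illustrative Teacher step: brute-force search over all topology actions
# whenever the grid is stressed, and record which action helped most often.
from collections import Counter

RHO_TEACHER = 0.925
action_counter = Counter()  # frequency of the best action index per critical step

def teacher_step(obs, all_topo_actions):
    if obs.rho.max() <= RHO_TEACHER:
        return None  # grid is healthy, nothing to record
    best_idx, best_rho = None, float("inf")
    for idx, act in enumerate(all_topo_actions):
        sim_obs, _, sim_done, _ = obs.simulate(act)
        if not sim_done and sim_obs.rho.max() < best_rho:
            best_idx, best_rho = idx, sim_obs.rho.max()
    if best_idx is not None:
        action_counter[best_idx] += 1
        return all_topo_actions[best_idx]
    return None
```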
### The Tutor
With the subset of the _Teacher_, a rule-based agent is created to generate experience for the following stages of the framework. The respective _Tutor_ is already a fully functioning rule-based agent that iterates over various scenarios of the Grid2Op environment and saves both the observations and the selected actions as experience.11 In order to achieve realistic behaviour, several rules are implemented for the _Tutor_. First, the agent does not interact with the grid if \(\rho_{max,t}\) is below the _Tutor_ threshold of \(\rho_{tutor}=0.9\). Second, if the threshold is breached, the _Tutor_ iterates over all 208 actions with a greedy approach. For each action, the _Tutor_ checks whether the action is valid and then selects the action which results in the lowest simulated \(\hat{\rho}_{max,t+1}\). Third, if a line is disconnected and there is no cooldown time remaining, the agent automatically reconnects the respective line. With this rule-based approach, Binbinchen were able to achieve a score of \(44.69\) on the online data set, which is already quite satisfactory when compared to their RL approach with a score of \(52.42\). In our work, we build on the general _Tutor_ structure and develop three additional enhancements that are discussed in Section 5.
Footnote 11: Note that the observation of the _Tutor_ were only a subset of all available observations, see A for more information.
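A minimal sketch of these three decision rules could look as follows; the helper structure is an illustrative assumption and not the released CurriculumAgent code.

```python
# Illustrative Tutor: reconnect lines, do nothing below the threshold,
# otherwise pick the greedy action from the reduced Teacher action set.
RHO_TUTOR = 0.9

def tutor_act(obs, action_space, reduced_actions):
    # Rule 3: reconnect any disconnected line whose cooldown has expired.
    for line_id, connected in enumerate(obs.line_status):
        if not connected and obs.time_before_cooldown_line[line_id] == 0:
            return action_space({"set_line_status": [(line_id, 1)]})
    # Rule 1: do nothing while the grid is sufficiently stable.
    if obs.rho.max() < RHO_TUTOR:
        return action_space({})
    # Rule 2: greedy search over the 208 reduced actions.
    best_action, best_rho = action_space({}), float("inf")
    for act in reduced_actions:
        sim_obs, _, sim_done, _ = obs.simulate(act)
        if not sim_done and sim_obs.rho.max() < best_rho:
            best_action, best_rho = act, sim_obs.rho.max()
    return best_action
```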
### The Junior
The third component of the framework is the _Junior_ agent, which is a feed-forward network that imitates the behaviour of the _Tutor_. The design of the _Junior_ is fairly simple: the input layer of the network has the shape of the _Tutor_ observation, while the output layer has the shape of all 208 topological actions. In the original agent from Binbinchen, the network consists of four layers with 1000 neurons each (ReLU activation) to process the data. In our work, we executed a hyperparameter search, which resulted in a different number of neurons, as seen in B.
With regard to the performance, the _Junior_ is able to correctly predict the right _Tutor_ action with around 37% accuracy, which is relatively low. However, when considering the top 20 actions, the correct action is found with an accuracy of 92%. After the network is trained, the _Junior_ is used to jump-start the _Senior_ model.
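As an illustration, the imitation network can be written down in a few lines; the sketch below uses PyTorch with the layer sizes quoted above, whereas the original implementation may differ in framework and exact dimensions.

```python
# Illustrative Junior network: Tutor observation in, logits over 208 actions out,
# trained with cross-entropy against the Tutor's chosen action index.
import torch
import torch.nn as nn

class Junior(nn.Module):
    def __init__(self, obs_dim: int, n_actions: int = 208, hidden: int = 1000):
        super().__init__()
        layers = []
        in_dim = obs_dim
        for _ in range(4):                       # four hidden layers with 1000 neurons
            layers += [nn.Linear(in_dim, hidden), nn.ReLU()]
            in_dim = hidden
        layers.append(nn.Linear(in_dim, n_actions))
        self.net = nn.Sequential(*layers)

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.net(obs)

model = Junior(obs_dim=1429)                     # the Tutor actually uses an observation subset
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
```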
### The Senior
Finally, the RL _Senior_ agent has a similar neural network architecture to the _Junior_ model, i.e., the same layer and neuron structure as well as input and output shapes. The model is trained with the state-of-the-art RL algorithm PPO [21]. At the beginning of the training, the model is first initialised with the weights of the _Junior_ model. Because the agent is only required to act in critical situations, the training of the _Senior_ had to be adjusted. No action is taken if \(\rho_{max,t}\) is below the threshold of \(\rho_{senior}=0.9\), with the exception of line reconnection if applicable. If the threshold is breached, the current state is passed to the _Senior_ and the model has to choose a valid action. During training, the _Senior_ collects the basic Grid2Op reward, accumulated, however, over all previous steps where the do-nothing action was selected.
After convergence, the RL model is combined with the heuristic strategies (do-nothing and line reconnection actions) to create the final agent. Again, an action of the RL model is only required if the threshold is breached. In this case, the model returns the action probabilities of the RL policy and sorts the list of actions accordingly. The list is then reviewed one by one until a suitable candidate is found. Overall, one could therefore consider the final agent as a hybrid between the rule-based approach and the RL model.
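A sketch of this hybrid inference loop is given below; the policy call, the validity check, and the acceptance criterion (first candidate that improves the simulated loading) are simplifying assumptions for illustration.

```python
# Illustrative hybrid Senior inference: heuristics first, PPO policy ranks actions.
import numpy as np

RHO_SENIOR = 0.9

def senior_act(obs, action_space, actions, policy_probs_fn):
    if obs.rho.max() < RHO_SENIOR:
        return action_space({})                  # do-nothing; reconnections handled as in the Tutor
    probs = policy_probs_fn(obs)                 # PPO policy output over all candidate actions
    for idx in np.argsort(probs)[::-1]:          # review candidates, best ranked first
        sim_obs, _, sim_done, _ = obs.simulate(actions[idx])
        if not sim_done and sim_obs.rho.max() < obs.rho.max():
            return actions[idx]                  # first candidate that improves the grid
    return action_space({})
```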
## 5 Methodological Improvements of the existing agent
As outlined in the research contributions in Section 1.2, we propose multiple enhancements of the _Teacher-Tutor-Junior-Senior_ framework of Binbinchen [9], with a primary focus on the _Tutor_ agent. Based on their code, we revised
and enhanced the framework and created the CurriculumAgent package.12 The enhancements were threefold and are presented in the following.
Footnote 12: CurriculumAgent: [https://github.com/FraunhoferIEE/CurriculumAgent](https://github.com/FraunhoferIEE/CurriculumAgent) (last access 20/03/2023).
### N-1 Strategy Improvements
As a first improvement, we propose to prioritise topology actions that especially reduce the overall \(\rho_{max,t}\) in the event of a line failure, e.g., through a lightning strike or adversarial attacks on lines. This corresponds to the well-known principle of N-1 security, already established among TSOs. We search for these topological actions by creating a special N-1 _Teacher_ as well as an N-1 _Tutor_ that prioritises the execution of the N-1 actions.
The underlying pseudo-code of the N-1 algorithm is given in Algorithm 1 and can be described as follows: We start the N-1 search for a subset of lines of size \(M\) and all topological actions \(a_{i}^{(topo)}\in\ \mathcal{A}^{(topo)}\) with \(i=1,\ldots,N\). For each available topological action \(a_{i}^{(topo)}\), we iterate over each line \(l\) in the subset \(\mathcal{S}\subseteq\mathcal{L}\), with \(l=1,\ldots,M\), and create an action \(a_{l}^{(line)}\in\ \mathcal{A}^{(line)}\) that disconnects the line. By combining both actions \(a_{i,l}=a_{i}^{(topo)}\ \wedge\ a_{l}^{(line)}\), we can then simulate the observation of the next step. With the simulated line capacities \(\hat{\rho}_{j,t+1}^{(i,l)}\in\hat{\mathcal{P}}\) for all lines \(j=1,\ldots,L\), we have the combined expected effect of the topology action \(a_{i}^{(topo)}\) and the disconnection of line \(l\). Consequently, we can then calculate the maximum value of the grid \(\hat{\rho}_{max,t+1}^{(i,l)}\).13
Footnote 13: Note that while we only disconnect a subset of lines, the action itself might have an effect on other lines as well. Thus, we check the \(\hat{\rho}^{(i,l)}\) for the whole grid.
Afterwards, we record the maximum value across all lines \(\hat{\rho}_{max}^{(i,max)}\), which is the worst possible line overload for the topological action \(a_{i}^{(topo)}\) when one of the lines is disconnected. Consequently, by sorting all \(\hat{\rho}_{max}^{(i,max)}\) in ascending order, it is possible to get the best N-1 action that is available in the respective observation, i.e., the action \(a_{i^{*}}^{(topo)}\) with \(i^{*}=\operatorname*{argmin}_{i=1,\ldots,N}(\hat{\rho}_{max}^{(i,max)})\). Note, however, that the N-1 calculation is computationally intensive. If expert knowledge is available, it can be beneficial to only select a subset of the lines.
```
for all \(i=1,\ldots,N\) do
    Select \(a_{i}^{(topo)}\)
    for all \(l=1,\ldots,M\) do
        \(a_{l}^{(line)}\leftarrow\) disconnect line \(l\)
        \(a_{i,l}\gets a_{i}^{(topo)}\ \wedge\ a_{l}^{(line)}\)
        \(\hat{\rho}_{j,t+1}^{(i,l)}\leftarrow\) simulate\((a_{i,l})\)
        \(\hat{\rho}_{max,t+1}^{(i,l)}\leftarrow\max_{j=1,\ldots,L}(\hat{\rho}_{j,t+1}^{(i,l)})\)
    end for
    \(\hat{\rho}_{max}^{(i,max)}\leftarrow\max_{l=1,\ldots,M}(\hat{\rho}_{max,t+1}^{(i,l)})\)
end for
\(i^{*}\leftarrow\operatorname*{argmin}_{i=1,\ldots,N}(\hat{\rho}_{max}^{(i,max)})\)
return \(a_{i^{*}}^{(topo)}\)
```
**Algorithm 1** The N-1 algorithm in pseudo code.
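A direct Python translation of Algorithm 1 against the Grid2Op simulation API could look as follows; combining the topology action with a line disconnection via action addition and treating a failed simulation as the worst case are assumptions made for this sketch.

```python
# Illustrative N-1 search: pick the topology action with the best worst-case
# loading over all single-line outages in the chosen line subset.
def n1_search(obs, action_space, topo_actions, line_subset):
    best_action, best_worst_rho = None, float("inf")
    for topo_act in topo_actions:
        worst_rho = 0.0
        for line_id in line_subset:
            line_act = action_space({"set_line_status": [(line_id, -1)]})
            combined = topo_act + line_act       # assumes Grid2Op action composition
            sim_obs, _, sim_done, _ = obs.simulate(combined)
            rho = 2.0 if sim_done else sim_obs.rho.max()  # failed grid counted as worst case
            worst_rho = max(worst_rho, rho)
        if worst_rho < best_worst_rho:
            best_action, best_worst_rho = topo_act, worst_rho
    return best_action
```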
### Topology Reversion Improvement
As a second improvement, the topology reversion proved to be another essential component in developing a more advanced greedy agent. This improvement is based on the idea that the electricity grid is most stable in its original state. In our case, this translates to switching to bus level one for all substations. Therefore, it is beneficial to return to the original state of the grid.
The improvement is implemented as follows. When the maximum line capacity falls below the reversion threshold \(\rho_{rev}=0.8\), meaning no imminent danger is present, the greedy agent automatically searches through all substations and checks whether their topology has been changed.14 The agent then compares, in a simulation, both the continuation of the current topology and a reversion to the original state and chooses the better candidate. If multiple options are possible, the reversion with the lowest \(\hat{\rho}_{max,t+1}\) is selected.
Footnote 14: The \(\rho_{rev}\) is explicitly lower than the \(\rho_{tutor}\) because we want to make sure that the grid is stable and no agent action is required.
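A compact sketch of this reversion check is shown below; iterating over substations via `sub_info`/`topo_vect` and rebuilding the bus-1 assignment are simplifying assumptions (disconnected elements are ignored for brevity).

```python
# Illustrative topology reversion: when the grid is calm, try moving changed
# substations back to bus 1 and keep the best simulated candidate.
import numpy as np

RHO_REV = 0.8

def maybe_revert_topology(obs, action_space):
    if obs.rho.max() >= RHO_REV:
        return action_space({})                  # grid not calm enough for housekeeping
    best_action, best_rho = action_space({}), obs.rho.max()
    offsets = np.concatenate(([0], np.cumsum(obs.sub_info)))  # element ranges per substation
    for sub_id in range(len(obs.sub_info)):
        topo = obs.topo_vect[offsets[sub_id]:offsets[sub_id + 1]]
        if (topo > 1).any():                     # substation deviates from the base topology
            revert = action_space({"set_bus": {"substations_id": [
                (sub_id, np.ones(len(topo), dtype=int))]}})
            sim_obs, _, sim_done, _ = obs.simulate(revert)
            if not sim_done and sim_obs.rho.max() < best_rho:
                best_action, best_rho = revert, sim_obs.rho.max()
    return best_action
```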
### Code Improvements
The last enhancements were code improvements to make the Binbinchen approach usable for the L2RPN-Baselines.15 This included the requirement to make the agent compatible with all Grid2Op environments. Thus, hard-coded lines and environment-specific solutions had to be reworked. In addition to the major changes, we added smaller features, such as filters of the observation array and the ability to add scalers from the Scikit-learn package for normalisation.16 Furthermore, RLlib [30] was added to ensure a more flexible training with different RL algorithms. Supplementary to the RLlib enhancement, we also equipped both neural networks with hyperparameter tuning, which in the case of the _Junior_ model was the Bayesian Optimisation Hyperband (BOHB) algorithm [31] and in the case of the _Senior_ the Population Based Training (PBT) algorithm [32]. Finally, we provided a test suite and ensured thorough documentation by adding docstrings, type hints, a readme and examples to our code to improve usability.
Footnote 15: L2RPN Baselines: [https://l2rpn-baselines.readthedocs.io/en/latest/](https://l2rpn-baselines.readthedocs.io/en/latest/) (last access 20/03/2023).
Footnote 16: Scikit-learn package: [https://scikit-learn.org/stable/](https://scikit-learn.org/stable/) (last access 20/03/2023).
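For the normalisation feature mentioned above, a scikit-learn scaler can be fitted on the recorded Tutor experience and reused for the Junior and Senior inputs; the file names in this sketch are placeholders.

```python
# Illustrative normalisation of recorded observations with scikit-learn.
import numpy as np
import joblib
from sklearn.preprocessing import StandardScaler

observations = np.load("tutor_observations.npy")     # placeholder path to recorded experience

scaler = StandardScaler().fit(observations)
joblib.dump(scaler, "junior_scaler.joblib")           # reused later at inference time

normalised = scaler.transform(observations)           # input features for the Junior network
```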
## 6 Research Design
While setting up the evaluation of the experiments, we observed that the performance within the test scenarios varied significantly and was strongly influenced by the selected seed of the environment. Therefore, we did not evaluate the agents on one specific seed but instead chose a more robust research design. We randomly generated thirty seeds, which in turn were fed into the evaluation procedure.17 This setting ensures that the performance can be interpreted more reliably, and we recommend this procedure to other researchers as well. With the proposed improvements of Section 5, we compare a total of six agents, which are described as follows:
Footnote 17: The choice of a total of 30 seeds was a compromise between a sufficiently high degree of freedom and the computational cost of the evaluation. For the evaluation we use the internal method ScoreL2RPN2020 of Grid2Op.
1. To gain a baseline result, we provide the Do-nothing Agent (\(Do\_Nothing\)) as well as the Expert Agent (\(Expert\)) of [25] introduced in Section 2.
2. Furthermore, we include the Original Tutor Agent (\(Tutor_{original}\)) of the Binbinchen solution in our analysis.
3. Regarding the enhancements, we propose a Tutor Agent with N-1 strategy (\(Tutor_{N-1}\)) and a Tutor Agent with N-1 strategy and the topology reversion (\(Tutor_{N-1,Topo}\)).
4. Lastly, we also provide a Senior Agent with N-1 action and topology reversion (\(Senior_{N-1,Topo}\)), based on the \(Tutor_{N-1,Topo}\). This \(Senior_{N-1,Topo}\) is a hybrid between the RL model and heuristic methods (line reconnection, topology reversion).
Note that we only included the \(Tutor_{original}\) of the Binbinchen solution for two reasons. First, we primarily enhanced the _Tutor_ agent, thus comparing the Tutors is most interesting. Second, on Grid2Op version 1.7.1, the Binbinchen _Senior_ performed poorly on all thirty seeds.
For both the \(Tutor_{N-1}\) and \(Tutor_{N-1,Topo}\) improvements, as well as the \(Senior_{N-1,Topo}\), it was necessary to compute the N-1 search, which was done by an N-1 _Teacher_. The resulting 300 actions were added to the base action set of 208 actions. In terms of the \(Senior_{N-1,Topo}\), the agent was trained with all 508 actions (208 and 300 N-1) on the RLlib [30] framework. Hyperparameter tuning was included for both the _Junior_ and _Senior_ models, as described in Section 5.3. The hyperparameters can be found in B. To prevent cherry-picking, we ran three experiments in parallel and evaluated the checkpoints in a validation environment based on the training scenarios. We then selected the best-performing checkpoint across all three experiments.
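The multi-seed evaluation procedure of this section can be summarised as in the sketch below; the environment name and agent interface are placeholders, and the actual scoring uses Grid2Op's internal ScoreL2RPN2020 utility rather than raw survival steps.

```python
# Illustrative multi-seed evaluation loop (survival steps as a stand-in metric).
import grid2op

def evaluate(agent, seeds, env_name="l2rpn_neurips_2020_track1_small"):
    env = grid2op.make(env_name)
    survival = {}
    for seed in seeds:
        env.seed(seed)
        obs, done, steps = env.reset(), False, 0
        while not done:
            obs, reward, done, info = env.step(agent.act(obs))
            steps += 1
        survival[seed] = steps
    return survival
```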
## 7 Results
### Experiment Results
Using the experimental framework outlined above, the agents were evaluated on the thirty random seeds. The results can be found in Table 1, where we summarise the scores across seeds. A larger table and a boxplot can be found in
C. In terms of our research question, we are primarily interested in the performance of the rule-based agents. Here, we could observe a clear improvement. While the \(Tutor_{original}\) reached a mean score of \(38.44\), the \(Tutor_{N-1}\) reached \(41.88\) and the \(Tutor_{N-1,Topo}\) even \(48.90\), which is an increase of \(27\%\). We tested with Welch's t-test [33] whether the results of the different _Tutors_ come from the same distribution and were able to reject the \(H_{0}\) hypothesis for all Tutor comparisons, see Table 2. This shows that both the N-1 strategy and the reversion to the original topology significantly improved the score of the agents. Further, the mean and median are relatively similar, thus we cannot directly detect a high influence of outliers. However, one can see in the quantiles that there is indeed a large variation in the performance depending on the seed.
Next to the _Tutors_, we also included the \(Senior_{N-1,Topo}\) in our experiments to evaluate the impact of the RL algorithm. In this regard, we observe that the agent was slightly better than the \(Tutor_{N-1,Topo}\), with a score of \(49.12\). However, the improvement was only fractional and not statistically significant, as shown in Table 2. Further note that the \(Senior_{N-1,Topo}\) had a better performance in the median and the 25% quantile, but the \(Tutor_{N-1,Topo}\), with \(53.40\), had a better 75% quantile. This is particularly interesting, given that researchers often postulate a clear superiority of RL
Table 1: Summary of the agents’ results. All agents were run on the robustness track of the 2020 L2RPN test environment (24 scenarios) with thirty different seeds. The performance across the seeds is recorded below. We list the mean and the standard deviation in the first column and the median as well as the 25% and 75% quantiles in the second column. Note that the \(Do\_Nothing\) agent achieves a score of \(0.00\) per default.

| Agent | Mean (Sd) | Median (Q25, Q75) |
| --- | --- | --- |
| \(Do\_Nothing\) | 00.00 (0.00) | 00.00 (00.00, 00.00) |
| \(Expert\) | 26.45 (4.52) | 26.69 (24.03, 29.88) |
| \(Tutor_{original}\) | 38.44 (4.19) | 37.89 (35.75, 40.81) |
| \(Tutor_{N-1}\) | 41.88 (4.11) | 40.80 (38.97, 45.75) |
| \(Tutor_{N-1,Topo}\) | 48.90 (4.67) | 48.39 (44.67, 53.40) |
| \(Senior_{N-1,Topo}\) | **49.12** (4.08) | **48.70** (45.53, 51.15) |
Table 2: Test results of the Welch’s t-test [33] with the hypothesis \(H_{0}:\mu_{i}=\mu_{j}\) against the alternative hypothesis \(H_{1}:\mu_{i}\neq\mu_{j}\). For the normality assumption we tested with D’Agostino’s test [34] and could not reject the \(H_{0}\) hypothesis, so non-normality could not be suspected.

| \(H_{0}\) Hypothesis | p-value |
| --- | --- |
| \(H_{0}:\mu_{T_{o}}=\mu_{T_{N-1}}\) | 0.0022 |
| \(H_{0}:\mu_{T_{o}}=\mu_{T_{N-1,Topo}}\) | 8.7e-13 |
| \(H_{0}:\mu_{T_{N-1}}=\mu_{T_{N-1,Topo}}\) | 6.9e-08 |
| \(H_{0}:\mu_{S}=\mu_{T_{N-1,Topo}}\) | 0.8454 |
Figure 3: Visualisation of the average survival time from the \(Senior_{N-1,Topo}\) (blue), the \(Tutor_{N-1,Topo}\) (red), the \(Tutor_{N-1}\) (green), the \(Tutor_{original}\) (purple), the \(Expert\) (orange) and the \(Do\_Nothing\) (turquoise). Each bar represents the average survival time of the agent in the respective scenario across all seeds. The overall average is reported in the legend.
approaches, which we could not replicate. Instead, it seems that the success rather depends strongly on the correct action sets. With respect to the baseline agents, the \(Do\_Nothing\) and \(Expert\) did not reach the score of the other agents, which is not surprising.
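The significance tests in Table 2 can be reproduced with a few lines of SciPy; the per-seed score arrays below are synthetic placeholders drawn from the summary statistics of Table 1.

```python
# Illustrative reproduction of the statistical tests behind Table 2.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
scores_tutor_original = rng.normal(38.44, 4.19, size=30)   # placeholder per-seed scores
scores_tutor_n1_topo = rng.normal(48.90, 4.67, size=30)    # placeholder per-seed scores

# D'Agostino's normality test per sample, then Welch's t-test (unequal variances).
print(stats.normaltest(scores_tutor_original))
print(stats.ttest_ind(scores_tutor_original, scores_tutor_n1_topo, equal_var=False))
```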
After evaluating the scores, we are also interested in the survival times of the different approaches. In Figure 3 we show the average survival times across the 30 seeds, divided into the 24 test scenarios. Note that all scenarios ended after 2006 steps. If an agent was able to reach an average of 2006 steps for a scenario, this means that it was able to complete the scenario in every run. One can see that the performance across the scenarios differed between the agents, but there were also some similarities. The winter scenarios from November up until March were harder to solve for all participants, while the summer scenarios were easier to complete. In terms of survival, only a handful of scenarios were completely solved by some of the agents. Both \(Senior_{N-1,Topo}\) and \(Tutor_{N-1,Topo}\) were able to complete four scenarios in all seeds, with \(Senior_{N-1,Topo}\) completing the scenarios \((may24_{1},may24_{2},jul28_{1},jul28_{2})\) and \(Tutor_{N-1,Topo}\) completing the scenarios \((may24_{1},may24_{2},jun14_{1},jul28_{2})\). All other agents failed to achieve this task. Nevertheless, \(Tutor_{N-1}\) was almost able to complete \(jul28_{1}\). This shows a clear performance increase.
### Detailed Behaviour analysis of the agents
After analysing the overall performance across the thirty seeds, we further provide a more detailed evaluation regarding the behaviour of the agents. Thus, we randomly selected a run and looked at various metrics available in Grid2Op.18 To begin with, we are interested in the individual topological behaviour on a substation level. Hence, in Figure 4 we show the topological actions sorted by their specific substation. First of all, we can see that the \(Tutor_{original}\) executed actions from only five substations more than 80% of the time. In contrast, the distributions between the substations of the \(Tutor_{N-1,Topo}\) and the \(Senior_{N-1,Topo}\) were more diversified with the 300 N-1 actions. Based on the grid of Figure 2, we see that all three agents centred their actions around the three major nodes 16, 23 and 28, which were necessary to keep the grid stable. However, while substation 23 took about one fourth of the overall actions of the \(Tutor_{original}\), the N-1 agents rather preferred topological changes at substation 16. It is also interesting to note that the \(Tutor_{original}\) only had one action at substation one, which it used frequently.
When comparing the behaviour of the rule-based \(Tutor_{N-1,Topo}\) and the \(Senior_{N-1,Topo}\), we can see that there are several similarities but also slight differences. One can observe that the rule-based agent utilised the substations 33 and 9 to divert power from substation 16 to the south-east and north-west. In contrast, we see that these substations rank lower for the RL agent and instead the \(Senior_{N-1,Topo}\) diverted the power to the substations 23 and 21, i.e., corresponding to a north-eastern route. As a result, the \(Senior_{N-1,Topo}\) had to rely on substation 22 more often. This shows a different learned strategy by the RL approach, with the same potential outcome.
Next to the different routing, we want to highlight different computational times of the agents. In Figure 5, we plotted
Figure 4: Visualisation of the different topological actions of the agents \(Tutor_{original}\), \(Tutor_{N-1,Topo}\) and \(Senior_{N-1,Topo}\). The actions are sorted according to the frequency of their substations. The most frequently used substation is at the top right, then all the other substations are ordered counterclockwise. The colour coding is the same for all three agents.
for each step the required computation time of the agent, which we summarised in the rug plot on the right side. The first thing that becomes clear is that the \(Tutor_{N-1,Topo}\) increased the computation time quite a bit compared to the \(Tutor_{original}\). This can be explained by the fact that the \(Tutor_{original}\) had the 208 actions divided into two sets of size 62 and 146 and first only evaluated the 62 actions. Only if no adequate candidate was found did it continue with the 146 action set. Further, no additional heuristics were added to the _Tutor_. In contrast, the \(Tutor_{N-1,Topo}\) started with the N-1 action set consisting of 300 actions, then continued with the 62 and the 146 actions. Consequently, the longer survival of \(Tutor_{N-1,Topo}\) came at a cost in terms of computation time.
Compared to the \(Tutor_{N-1,Topo}\), the \(Senior_{N-1,Topo}\) was faster. This is due to the fact that the \(Senior_{N-1,Topo}\) did not have a clear division of its action sets, as it provided a policy over all 508 (208+300) actions. This made it easier to find a suitable candidate, and the search was accelerated without any loss of accuracy. Note that the actions with a computational time of almost zero correspond to the do-nothing actions, which all three agents selected when the grid was stable. As a last result, we also want to look at the distance from the original topology, which we visualise exemplarily for the July week \(jul28\_1\) in Figure 6. The distance was measured in the number of changed substations in comparison to the original topology configuration. Here, there are two interesting things to note. First, we can see that all agents had a relatively similar behaviour until the 12th of July, moving up to three steps away from the original topology and afterwards returning to it. However, we see a clear difference between the _Tutors_ and the \(Senior_{N-1,Topo}\) at the start of July 13th. While the \(Senior_{N-1,Topo}\) had only a maximum distance of two topology changes, the _Tutors_ had a distance of three, thus showing that the \(Senior_{N-1,Topo}\) found a better action in that moment. A second thing to note is that the \(Tutor_{N-1,Topo}\) showed very unstable behaviour after 13 July, which is not ideal for a real substation. A possible explanation could be that the \(\rho_{max,t}\) of the grid was fluctuating around the threshold \(\rho_{tutor}\) of the \(Tutor_{N-1,Topo}\), thus triggering the need to act. Both results are very interesting, because they show that the _Senior_ was able to achieve a similar effect while using a different topology action. Here, less distance from the original grid was required and the overall state was more stable.
### Discussion and Future Outlook
As outlined in the previous Section 7, we were able to significantly improve the original greedy search _Tutor_ by providing both an active revision of the topology as well as an additional N-1 strategy. In order to obtain more advanced rule-based approaches, we showed that classical power-grid strategies can be beneficial as improvements.
Figure 5: Display of the computation time for each action. The left graph shows the computation time, where each point is one action of the agent. The vertical lines correspond to a specific scenario. On the right, we aggregate the computation time across all scenarios in a rug plot. Note that for comparison, we only include the computation times if all three agents survived until the given time step.
Figure 6: Distance in Topology for the first July scenario. The y-axis describes the number of substations that differ in comparison to the original topology. Note that all agents completed the scenario.
When comparing the more sophisticated rule-based agent with the RL approach, we were not able to demonstrate a clear superiority. Even though the RL agent was slightly better, one could argue that this may not have enough statistical significance. This is particularly interesting when considering that heuristic methods were also included in the \(Senior_{N-1,Topo}\). This shows that strategies such as reconnecting lines and simulating prior to the execution of an action have a high proportion of the overall performance score. Nevertheless, we still consider RL to be a necessity for future network operations for the following three reasons:
First, the RL approach clearly showed an advantage in computational speed, as was shown in Figure 5. This is quite interesting considering that the agent had no direct goal to use N-1 actions like the \(Tutor_{N-1,Topo}\). However, the agent was still able to learn a similar behaviour, demonstrating the ability to imitate desired behaviour. This feature may be crucial, especially for larger grids. Considering that an increase in the number of available actions could push rule-based approaches to their limits, RL can still compute solutions in an acceptable time. Second, in order to replicate the Binbinchen approach, we only used the PPO algorithm in this work. We therefore did not focus on specific RL improvements. As other researchers showed, e.g., in [23], there are advanced RL techniques available that can enhance the RL models. Thus, it could be possible to further increase the score through different RL agents. Third, it can be advantageous to consider multiple approaches simultaneously for ensemble methods. As analysed in Section 7.2, the behaviour of the \(Senior_{N-1,Topo}\) differed from that of the \(Tutor_{N-1,Topo}\) agent, thus it could provide alternative strategies for grid operators.
As a consequence, although the main development of this article was a more sophisticated rule-based agent, we encourage other researchers to further improve RL approaches. One possibility might be a split of the \(Senior_{N-1,Topo}\) into two independent agents, in order to gain more specialised agents, e.g., one N-1 agent and one agent for extreme cases \((\rho_{max}>1.0)\).
## 8 Conclusion
In this paper, we analysed the structure of the Binbinchen agent from the 2020 L2RPN robustness challenge and proposed additional improvements to the rule-based greedy agent. The novelty of our improvements lies in the N-1 strategy and the reversion back to the base topology of the grid. In order to evaluate the proposed improvements, we tested the agents on the L2RPN test environment with 30 different seeds and additionally evaluated one random seed in particular. Our experiments show that the improvements increased the performance by \(27\%\) and that the N-1 rule-based agent was able to achieve a similar result to the RL agent. In our detailed analysis, we also showed that by considering the N-1 actions, the overall set of actions became more diverse, leading to an increase in the stability of the grid. We discussed these results, highlighting the comparison between the rule-based approach and the RL agent.
## Acknowledgement
This work was supported by the Competence Centre for Cognitive Energy Systems of the Fraunhofer IEE and the research group Reinforcement Learning for cognitive energy systems (RL4CES) from the Intelligent Embedded Systems of the University Kassel.
|
2306.01202
|
Robust and tunable coreless vortices and fractional vortices in chiral
$d$-wave superconductors
|
Chiral $d$-wave superconductivity has recently been proposed in a wide range
of materials based on both experiment and theoretical works. Chiral
superconductors host a finite Chern number set by the winding of the
superconducting order parameter and associated topologically protected chiral
edge modes. However, the chiral edge currents and orbital angular momentum
(OAM) generated by the edge modes are not topologically protected and another,
more robust, experimental probe is therefore needed to facilitate experimental
verification of chiral $d$-wave superconductors. We have recently shown the
appearance of quadruply quantized coreless vortices (CVs) in chiral $d$-wave
superconductors, consisting of a closed domain wall decorated with eight
fractional vortices, and generating a smoking-gun signature of the Chern
number, chirality, and the superconducting pairing symmetry [P. Holmvall and A.
M. Black-Schaffer, arXiv:2212.08156 (2023)]. Specifically, the CV spontaneously
breaks axial symmetry for parallel chirality and vorticity, with a signature
appearing directly in the local density of states (LDOS) measurable with
scanning-tunneling spectroscopy (STS). In this work, we first demonstrate a
strong tunability of the CV size and shape directly reflected in the LDOS and
then show that the LDOS signature is robust in the presence of regular
Abrikosov vortices, strong confinement, system and normal-state anisotropy,
different Fermi surfaces (FSs), non-degenerate order parameters, and even
non-magnetic impurities. In conclusion, our work establishes CVs as a tunable
and robust signature of chiral $d$-wave superconductivity.
|
Patric Holmvall, Niclas Wall-Wennerdal, Annica M. Black-Schaffer
|
2023-06-01T23:32:39Z
|
http://arxiv.org/abs/2306.01202v3
|
# Robust and tunable coreless vortices and fractional vortices in chiral \(d\)-wave superconductors
###### Abstract
Chiral \(d\)-wave superconductivity has recently been proposed in a wide range of materials based on both experiment and theoretical works. Chiral superconductors host a finite Chern number set by the winding of the superconducting order parameter and associated topologically protected chiral edge modes. However, the chiral edge currents and orbital angular momentum (OAM) generated by the edge modes are not topologically protected and another, more robust, experimental probe is therefore needed to facilitate experimental verification of chiral \(d\)-wave superconductors. We have recently shown the appearance of quadruply quantized coreless vortices (CVs) in chiral \(d\)-wave superconductors, consisting of a closed domain wall decorated with eight fractional vortices, and generating a smoking-gun signature of the Chern number, chirality, and the superconducting pairing symmetry [P. Holmvall and A. M. Black-Schaffer, arXiv:2212.08156 (2023)]. Specifically, the CV spontaneously breaks axial symmetry for parallel chirality and vorticity, with a signature appearing directly in the local density of states (LDOS) measurable with scanning-tunneling spectroscopy (STS). In this work, we first demonstrate a strong tunability of the CV size and shape directly reflected in the LDOS and then show that the LDOS signature is robust in the presence of regular Abrikosov vortices, strong confinement, system and normal-state anisotropy, different Fermi surfaces (FSs), non-degenerate order parameters, and even non-magnetic impurities. In conclusion, our work establishes CVs as a tunable and robust signature of chiral \(d\)-wave superconductivity.
## I Introduction
Two of the most outstanding issues in condensed matter physics are the direct identification of the superconducting pairing symmetry in unconventional superconductors and of the topological invariant in topologically non-trivial materials. These difficulties severely limit the ability to correctly interpret experiments and the applicability of newly discovered superconducting and topological materials. In few other systems is this as problematic as in multi-component superconductors, especially chiral superconductors, where both topology and superconducting symmetry need to be identified. Theoretically, chiral superconductors, and more generally chiral superfluids, are characterized by a non-trivial topology [1; 2; 3; 4; 5] and a discretely degenerate ground state that spontaneously breaks time-reversal symmetry [6]. They belong to the class of integer quantum Hall systems [7; 8; 9] with a finite Chern number generated by the winding of the superfluid order parameter [10; 11; 12; 13; 14; 15; 16; 17], and with topologically protected chiral edge modes generating spontaneous surface currents and orbital angular momentum (OAM) [18; 19; 20; 21; 22; 23].
The topology and symmetry breaking of a chiral superfluid are predicted to generate a range of interesting properties [1; 2; 3; 4; 5; 24; 25; 26; 27], such as the existence of domain walls [1; 28; 29; 30], states with non-Abelian statistics [31; 32; 33; 34; 35; 36], proposed as a platform for topological quantum computing [37; 38], and fascinating vortex defects without analogues in single-component superfluids [1; 2; 3; 4; 5; 39]. A prime example is the continuous "coreless vortex" (CV), which due to its multi-component structure is non-singular with finite superfluid order parameter everywhere. CVs have primarily been studied in superfluid \({}^{3}\)He [40; 41; 42; 43; 44; 45; 46; 47; 48; 49; 50; 51; 52; 53; 54; 55]. In superconductors, CVs have so far mainly been discussed in the context of spin-triplet chiral \(p\)-wave [56; 57; 58; 59; 60; 61; 62; 63; 64; 65; 66; 67], with analogous states discussed for various multi-band superconductors and other multi-component condensates [68; 69; 70; 71; 72; 73]. In the superconducting scenario, the CV essentially consists of a closed domain wall, along which the vorticity enters as fractional vortices, such that the total superconducting order parameter is non-singular and finite everywhere. Fractional vortices have been studied extensively over the years [74; 75; 76; 77; 78; 6; 79; 80; 81; 82; 83; 84; 85; 86; 87], and were recently experimentally observed in superfluids [88] and superconductors [89]. Similar to a regular Abrikosov vortex [90], the CV is stabilized by its reduction of the kinetic energy in an external magnetic field. But importantly, unlike an Abrikosov vortex (or a giant vortex [91; 92; 93; 94; 95; 96]), the CV by definition has no normal core, and therefore avoids the usual energy penalty associated with lost condensation in the core.
Recent decades have seen an intense search for experimental realizations of chiral superconductors due to their many interesting properties and proposed applications [2; 3; 97]. The hunt for chiral superconductivity has mainly focused on spin-triplet chiral \(p\)-wave and \(f\)-wave superconductivity [2; 97; 98; 99; 100; 101; 102], and their similarities with superfluid \({}^{3}\)He\(-\)A [1; 2; 3; 4; 5]. Interestingly, multiple proposals of spin-singlet chiral \(d\)-wave superconductivity have more recently emerged based both on theory and experiments in a range of materials, such as twisted bilayer cuprates [103; 104], twisted bilayer graphene [105; 106; 107; 108; 109; 110; 111; 112; 113], Sn/Si(111) [114], SrPtAs [115; 116; 117; 118], LaPt\({}_{3}\)P [119], Bi/Ni [120; 121]
and URu\({}_{2}\)Si\({}_{2}\)[122; 123; 124; 125]. Chiral \(d\)-wave superconductivity was recently also proposed as a route to topologically protected quantum computing [33; 34; 35; 36]. The exact identification of the superconducting pairing symmetry is, however, still highly debated in these proposed chiral superconductors. This is further hampered by the fact that typical fingerprints of chiral superconductivity, namely the chiral edge currents and OAM, are not topologically protected [126; 7; 127], and may often even vanish for pairing symmetries other than \(p\)-wave [127; 128; 129; 130; 131; 132; 133; 134; 135]. In addition, it is largely unknown how the higher Chern number and angular momentum of chiral \(d\)-wave superconductors influence the vortex physics and CVs.
In an earlier work we have demonstrated that CVs naturally emerge as a 'quadruple-quantum vortex' in spin-singlet chiral \(d\)-wave superconductors and that they, most importantly, act as a smoking-gun signature of chirality, pairing symmetry, and Chern number [136]. These signatures were demonstrated directly in the local density of states (LDOS) and indirectly in the area-averaged orbital magnetic moment, the former measurable with e.g. scanning tunneling spectroscopy (STS) and scanning tunneling microscopy (STM) [137; 138; 139; 140; 141; 142; 143; 144; 145; 146; 147], and the latter with various magnetometry setups [148; 149; 150; 151; 152; 153; 154; 155; 156; 157; 158; 159; 160]. The signatures were shown to be fundamentally related to the existence of inequivalent CVs in opposite magnetic field directions (or equivalently opposite chiralities), due to either a parallel or antiparallel vorticity and chirality, and which are also completely different from regular Abrikosov vortices.
In this complementary work, we demonstrate a strong tunability of the CV size and shape, also directly reflected in e.g. the LDOS. Furthermore, we provide extensive data that demonstrate a strong robustness of the results for a range of realistic models, over extensive parameter ranges, and in the presence of additional vortices or disorder. Overall, we relate the robustness of the experimental signatures of chirality, pairing symmetry, and Chern number, to the fact that they fundamentally stem from the parallel versus antiparallel alignment of vorticity and chirality, which are both topologically protected. In contrast, a non-chiral superconductor lacks this alignment possibility, since it lacks chirality. Our work therefore establishes CVs as a robust signature of spin-singlet chiral \(d\)-wave superconductivity, and furthermore the realization of fractional vortices in these materials.
This work is organized as follows. In Sec. II we summarize our model and methods, and describe basic properties of chiral \(d\)-wave superconductors. In Sec. III we introduce the basic properties of CVs, also discussing their overall stability and formation. In Sec. IV we demonstrate the large tunability of the CV size due to thermodynamics and electrodynamic interactions. Similarly, we study the interaction between CVs and other vortices in Sec. V and the behavior of CVs in confinement in Sec. VI, again demonstrating a tunability of both the CV size and shape as well as establishing strong robustness of CVs. We further demonstrate robustness against more general and anisotropic Fermi surfaces (FSs) in Sec. VII, non-degenerate order parameter components in Sec. VIII, and non-magnetic impurities in Sec. IX. Finally in Sec. X we briefly summarize our results.
## II Model and methods
In this section, we describe our model for a spin-singlet chiral \(d\)-wave superconductor and summarize our methods. In particular, we use the quasiclassical theory of superconductivity [161; 162; 163; 164; 165; 166; 167; 168; 169; 170; 171; 172; 173], and perform self-consistent numerical calculations using the open-source framework SuperConga [174].
### Model
We consider weak-coupling superconductivity in equilibrium and in two dimensions (2D), assuming spin-degeneracy, all appropriate for a quantitative description of a spin-singlet \(d\)-wave superconductor. We start by studying clean superconductors shaped like discs, with an electron-doped and circular Fermi surface (FS). We then relax all these assumptions by studying systems with either different discrete rotational symmetries or completely irregular shapes, as well as hole-doped and anisotropic FSs. We also consider non-degenerate order parameters, as well as dirty superconductors with non-magnetic impurities. For the specific setup, we align the superconducting plane with the \(xy\)-axes and use a perpendicular (orbital) external magnetic flux density \(\mathbf{B}_{\rm ext}=(\Phi_{\rm ext}/\mathcal{A})\hat{\mathbf{z}}\) with homogeneous flux \(\Phi_{\rm ext}\) across the system area \(\mathcal{A}\) to induce vortex states. We assume type-II superconductivity appropriate for most non-elemental or unconventional superconductors, but consider different penetration depths \(\lambda_{0}\in[2,\infty)\), via the Ginzburg-Landau coefficient \(\kappa=\lambda_{0}/\xi_{0}\). The penetration depth sets the length scale and strength of flux screening, defined by \(\lambda_{0}^{-2}=4\pi e^{2}v_{\rm F}^{2}N_{\rm F}/c^{2}\), with elementary charge \(e=-|e|\), Fermi velocity \(v_{\rm F}\) on the FS, normal-state density of state \(N_{\rm F}\) on the FS (per spin), and speed of light \(c\). Here, our natural length unit is \(\xi_{0}\equiv\hbar v_{\rm F}/(2\pi k_{\rm B}T_{\rm c})\), sometimes referred to as an effective superconducting coherence length over which superconductivity spatially varies, with Planck constant \(\hbar\), Boltzmann constant \(k_{\rm B}\), and superconducting transition temperature \(T_{\rm c}\). We study superconducting systems with a diameter or side length \(\mathcal{D}\in[20,300]\xi_{0}\), for different temperatures \(T\in[0.01,0.99]T_{\rm c}\), and external fluxes \(\Phi_{\rm ext}\in[-15,15]\Phi_{0}\) with flux quantum \(\Phi_{0}\equiv hc/2|e|\). We keep all parameters fixed during the self-consistency simulations.
We perform our numerical simulations using the open-source framework SuperConga [174], which is a state-of-the-art implementation of the quasiclassical theory of superconductivity [161; 162; 163; 164; 165; 166; 167; 168; 169; 170; 171; 172; 173], running on graphics processing units (GPUs), and with extensive documentation and unit testing [175; 176]. SuperConga solves self-consistently [177] for both the superconducting order parameter \(\Delta(\mathbf{p}_{\mathrm{F}},\mathbf{R})\) and vector potential \(\mathbf{A}(\mathbf{R})\) via the gap equation and Maxwell's equations, respectively. Here, \(\mathbf{p}_{\mathrm{F}}=p_{\mathrm{F}}(\cos\theta_{\mathrm{F}},\sin\theta_{\mathrm{F}})\) is the Fermi momentum with angle \(\theta_{\mathrm{F}}\) on the FS, while \(\mathbf{R}=R(\cos\phi,\sin\phi)\) is the in-plane center-of-mass coordinate with polar angle \(\phi\). SuperConga also solves for impurity self-energies self-consistently using the well-established \(t\)-matrix approach [178].
### Quasiclassical theory of superconductivity
Many materials exhibit a clear separation between the superconducting gap \(|\Delta|\) and other relevant energy scales, such as the Fermi energy \(E_{\mathrm{F}}\). Consequently, the superconducting coherence length \(\xi_{0}\) typically becomes much larger than the atomic length scale \(a_{0}\) and Fermi wavelength \(\lambda_{F}\). In such materials, the low-energy (long-wavelength) physics can often to a very good approximation be separated from the high-energy (short-wavelength) physics. The quasiclassical theory of superconductivity exploits this via a controlled expansion in the resulting small parameters, e.g. \(|\Delta|/E_{\mathrm{F}}\), \(T/T_{\mathrm{c}}\), and \(\lambda_{\mathrm{F}}/\xi_{0}\), with leading-order terms describing the low-energy bands close to the FS [161; 162; 163; 164; 165; 166; 167; 168; 169; 170; 171; 172; 173]. Higher-energy corrections can still be inserted from full microscopic theory, e.g. by using microscopic boundary conditions [170; 171; 172; 173; 174; 175; 176; 177; 178; 179; 180; 181; 182; 183; 184].
The low-energy expansion results in quasiclassical propagators, which we express in Nambu (particle-hole) space as
\[\hat{g}(\mathbf{p}_{\mathrm{F}},\mathbf{R};z)=\begin{pmatrix}g(\mathbf{p}_{ \mathrm{F}},\mathbf{R};z)&f(\mathbf{p}_{\mathrm{F}},\mathbf{R};z)\\ -\tilde{f}(\mathbf{p}_{\mathrm{F}},\mathbf{R};z)&\tilde{g}(\mathbf{p}_{ \mathrm{F}},\mathbf{R};z)\end{pmatrix}, \tag{1}\]
with quasiparticle propagator \(g(\mathbf{p}_{\mathrm{F}},\mathbf{R};z)\) and anomalous pair propagator \(f(\mathbf{p}_{\mathrm{F}},\mathbf{R};z)\), where "tilde" denotes particle-hole conjugation \(\tilde{\alpha}(\mathbf{p}_{\mathrm{F}},\mathbf{R};z)=\alpha^{*}(-\mathbf{p}_ {\mathrm{F}},\mathbf{R};-z^{*})\). Here, \(z\) is the quasiparticle energy associated with the corresponding propagator, and is generally complex valued. Specifically, the retarded propagators \(g^{\mathrm{R}}(\mathbf{p}_{\mathrm{F}},\mathbf{R};\varepsilon)\) are used for spectral quantities, evaluated at \(z^{\mathrm{R}}\equiv\varepsilon+i\delta\) with real energy \(\varepsilon\) and small positive broadening \(\delta\). For all other quantities, we use the Matsubara propagators \(g^{\mathrm{M}}(\mathbf{p}_{\mathrm{F}},\mathbf{R};\varepsilon_{n})\) and \(f^{\mathrm{M}}(\mathbf{p}_{\mathrm{F}},\mathbf{R};\varepsilon_{n})\) in terms of the Matsubara energies \(z^{\mathrm{M}}\equiv i\varepsilon_{n}=i\pi k_{\mathrm{B}}T(2n+1)\), with integer \(n\)[185; 186; 187; 188; 189; 190]. The propagators in Eq. (1) are obtained via the Eilenberger equation [161]
\[0=i\hbar\mathbf{v}_{\mathrm{F}}\cdot\mathbf{\nabla}\hat{g}(\mathbf{p}_{\mathrm{F}},\mathbf{R};z)+\left[z\hat{\tau}_{3}-\hat{h}(\mathbf{p}_{\mathrm{F}},\mathbf{R};z),\hat{g}(\mathbf{p}_{\mathrm{F}},\mathbf{R};z)\right], \tag{2}\]
together with the normalization condition \(\hat{g}^{2}=-\pi^{2}\hat{1}\), where \(\hat{h}\) is the self energy and \(\hat{\tau}_{i}\) the Pauli matrices in Nambu space. The self energy in Nambu space is
\[\hat{h}(\mathbf{p}_{\mathrm{F}},\mathbf{R};z)=\hat{\Sigma}(\mathbf{p}_{\mathrm{F}},\mathbf{R};z)+\hat{\Delta}(\mathbf{p}_{\mathrm{F}},\mathbf{R})=\begin{pmatrix}\Sigma(\mathbf{p}_{\mathrm{F}},\mathbf{R};z)&\Delta(\mathbf{p}_{\mathrm{F}},\mathbf{R})\\ \tilde{\Delta}(\mathbf{p}_{\mathrm{F}},\mathbf{R})&\tilde{\Sigma}(\mathbf{p}_{\mathrm{F}},\mathbf{R};z)\end{pmatrix}, \tag{3}\]
with mean-field superconducting order parameter \(\Delta(\mathbf{p}_{\mathrm{F}},\mathbf{R})\), while the diagonal part in the present work is
\[\hat{\Sigma}(\mathbf{p}_{\mathrm{F}},\mathbf{R};z)=\hat{\Sigma}_{\mathrm{flux}} (\mathbf{R})+\hat{\Sigma}_{\mathrm{imp}}(\mathbf{p}_{\mathrm{F}},\mathbf{R};z), \tag{4}\]
capturing electrodynamic interactions via \(\hat{\Sigma}_{\mathrm{flux}}(\mathbf{R})\) (described further below) and impurity scattering via \(\hat{\Sigma}_{\mathrm{imp}}(\mathbf{p}_{\mathrm{F}},\mathbf{R};z)\) (described in Sec. IX). We parametrize the even-parity spin-singlet order parameter \(\Delta(\mathbf{p}_{\mathrm{F}},\mathbf{R})\) via
\[\Delta(\mathbf{p}_{\mathrm{F}},\mathbf{R})=\sum_{\Gamma}|\Delta_{\Gamma}( \mathbf{R})|e^{i\chi_{\Gamma}(\mathbf{R})}\eta_{\Gamma}(\mathbf{p}_{\mathrm{F}}), \tag{5}\]
where \(\Gamma\) labels the irreducible representations of the crystallographic point group and the basis function \(\eta_{\Gamma}(\mathbf{p}_{\mathrm{F}})\) encodes the pairing symmetry on the FS [191], also related to the attractive pairing interaction \(V\) via
\[V(\mathbf{p}_{\mathrm{F}},\mathbf{p}_{\mathrm{F}}^{\prime})=\sum_{\Gamma}V_{ \Gamma}\eta_{\Gamma}(\mathbf{p}_{\mathrm{F}})\eta_{\Gamma}^{\dagger}(\mathbf{p }_{\mathrm{F}}^{\prime}). \tag{6}\]
Here, \(V_{\Gamma}\) is the pairing strength of the respective symmetry channel. We self-consistently compute \(\Delta(\mathbf{p}_{\mathrm{F}},\mathbf{R})\) via the superconducting gap equation
\[\Delta(\mathbf{p}_{\mathrm{F}},\mathbf{R})=N_{\mathrm{F}}k_{\mathrm{B}}T\sum_{n }^{|\varepsilon_{n}|<\Omega_{\mathrm{c}}}\big{\langle}V(\mathbf{p}_{\mathrm{F}}, \mathbf{p}_{\mathrm{F}}^{\prime})\,f(\mathbf{p}_{\mathrm{F}}^{\prime},\mathbf{R}; \varepsilon_{n})\big{\rangle}_{\mathbf{p}_{\mathrm{F}}^{\prime}}, \tag{7}\]
with cutoff energy \(\Omega_{\mathrm{c}}\)[172], and FS average [192; 193]
\[\big{\langle}\ldots\big{\rangle}_{\mathbf{p}_{\mathrm{F}}}=\frac{1}{N_{\mathrm{ F}}}\oint_{\mathrm{FS}}\frac{\mathrm{d}p_{\mathrm{F}}}{(2\pi\hbar)^{2}|\mathbf{v}_{ \mathrm{F}}(\mathbf{p}_{\mathrm{F}})|}(\ldots). \tag{8}\]
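To make the structure of Eqs. (7)-(8) concrete, the following is a minimal, self-contained sketch (not the SuperConga implementation) of the same self-consistency in the simplest possible limit: a spatially homogeneous bulk with a fully gapped chiral \(d\)-wave order parameter, where the anomalous propagator reduces to \(f=\pi\Delta/\sqrt{\varepsilon_{n}^{2}+|\Delta|^{2}}\) and Eq. (7) becomes a scalar fixed-point problem for \(|\Delta|\). All numerical values (units with \(\hbar=k_{\mathrm{B}}=1\), energies in units of \(T_{\mathrm{c}}\), the cutoff) are illustrative.

```python
import numpy as np

# Minimal sketch of the self-consistent gap equation, Eq. (7), for a homogeneous
# bulk chiral d-wave superconductor with |eta_pm| = 1 (fully gapped, isotropic gap).
Tc = 1.0
Omega_c = 100.0 * Tc          # cutoff energy (illustrative value)

def matsubara(T):
    """Positive Matsubara energies eps_n = pi*T*(2n+1) below the cutoff."""
    n = np.arange(int(Omega_c / (np.pi * T)) + 1)
    eps = np.pi * T * (2 * n + 1)
    return eps[eps < Omega_c]

# Dimensionless coupling N_F*V fixed by requiring Delta -> 0 exactly at T = Tc.
NFV = 1.0 / (2.0 * np.pi * Tc * np.sum(1.0 / matsubara(Tc)))

def gap_rhs(delta, T):
    """Right-hand side of the homogeneous gap equation at temperature T."""
    eps = matsubara(T)
    # factor 2 accounts for the negative Matsubara energies
    return NFV * 2.0 * np.pi * T * np.sum(delta / np.sqrt(eps**2 + delta**2))

T = 0.1 * Tc
delta = 1.0                    # initial guess
for _ in range(500):           # plain fixed-point iteration
    delta = gap_rhs(delta, T)

print(f"|Delta|(T = 0.1 Tc) ~ {delta:.3f} Tc (BCS weak-coupling value ~ 1.76 Tc)")
```

In the actual spatially inhomogeneous problem the same fixed-point structure remains, but with the full FS average of Eq. (8) and propagators obtained from Eq. (2) at every point \(\mathbf{R}\).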
The electrodynamics is modelled via
\[\hat{\Sigma}_{\mathrm{flux}}(\mathbf{p}_{\mathrm{F}},\mathbf{R})=-\frac{e}{c} \mathbf{v}_{\mathrm{F}}(\mathbf{p}_{\mathrm{F}})\cdot\mathbf{A}(\mathbf{R}) \hat{\tau}_{3}, \tag{9}\]
where \(\mathbf{A}(\mathbf{R})=\mathbf{A}_{\mathrm{ext}}(\mathbf{R})+\mathbf{A}_{\mathrm{ind}} (\mathbf{R})\) is the magnetic vector potential. It is related to the external (ext) magnetic-flux density via Maxwell's equation \(\mathbf{B}_{\mathrm{ext}}(\mathbf{R})=\mathbf{\nabla}\times\mathbf{A}_{\mathrm{ext}}( \mathbf{R})\), and to the induced (ind) magnetic-flux density \(\mathbf{B}_{\mathrm{ind}}(\mathbf{R})\) (i.e. screening) from the total charge-current density \(\mathbf{j}(\mathbf{R})\) via Ampere's law
\[\frac{4\pi}{c}\mathbf{j}(\mathbf{R})=\mathbf{\nabla}\times\mathbf{B}_{\mathrm{ind}}( \mathbf{R})=\mathbf{\nabla}\times\mathbf{\nabla}\times\mathbf{A}_{\mathrm{ind}}(\mathbf{R}). \tag{10}\]
We compute \(\mathbf{j}(\mathbf{R})\) via
\[\mathbf{j}(\mathbf{R})=2eN_{\mathrm{F}}k_{\mathrm{B}}T\sum_{n}^{|\varepsilon_{n}| <\Omega_{\mathrm{c}}}\big{\langle}\mathbf{v}_{\mathrm{F}}(\mathbf{p}_{ \mathrm{F}})\,g^{\mathrm{M}}(\mathbf{p}_{\mathrm{F}},\mathbf{R};\varepsilon_{n}) \big{\rangle}_{\mathbf{p}_{\mathrm{F}}}\,. \tag{11}\]
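As a rough illustration of the electrodynamic part of the self-consistency, Eq. (10) can be reduced, in the Coulomb gauge \(\mathbf{\nabla}\cdot\mathbf{A}_{\mathrm{ind}}=0\), to a Poisson equation for each component of \(\mathbf{A}_{\mathrm{ind}}\). The sketch below is not the SuperConga solver and assumes a periodic box rather than the actual finite, open geometry; it simply solves that Poisson problem spectrally for a given current density.

```python
import numpy as np

# Minimal sketch: -laplace(A_ind) = (4*pi/c) j in the Coulomb gauge, solved with FFTs
# on a periodic square grid with spacing dx (an approximation to the real geometry).
def induced_vector_potential(jx, jy, dx, c=1.0):
    """Return [Ax_ind, Ay_ind] for current-density components jx, jy (2D arrays)."""
    nx, ny = jx.shape
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)
    ky = 2 * np.pi * np.fft.fftfreq(ny, d=dx)
    k2 = kx[:, None]**2 + ky[None, :]**2
    k2[0, 0] = 1.0                                   # avoid division by zero (zero mode)
    A = []
    for j in (jx, jy):
        rhs_k = 4 * np.pi / c * np.fft.fft2(j)
        A_k = rhs_k / k2
        A_k[0, 0] = 0.0                              # drop the irrelevant constant offset
        A.append(np.real(np.fft.ifft2(A_k)))
    return A
```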
We further compute the LDOS via
\[N(\mathbf{R};\varepsilon)=-\frac{2N_{\mathrm{F}}}{\pi}\left\langle\mathrm{Im} \left[g^{\mathrm{R}}(\mathbf{p}_{\mathrm{F}},\mathbf{R};\varepsilon)\right] \right\rangle_{\mathbf{p}_{\mathrm{F}}}. \tag{12}\]
Finally, we note that the quasiparticle energies are effectively Doppler shifted by the vector potential and any phase gradients [194; 195; 196; 197], seen by applying a unitary gauge transformation to the Eilenberger equation as in e.g. Refs. [198; 174; 199], modifying Eq. (9), \(\Sigma_{\rm flux}(\mathbf{p}_{\rm F},\mathbf{R})\rightarrow\mathbf{v}_{\rm F}( \mathbf{p}_{\rm F})\cdot\mathbf{p}_{\rm s}(\mathbf{R})\) with the gauge-invariant superfluid momentum (superflow)
\[\mathbf{p}_{\rm s}(\mathbf{R})=\frac{\hbar}{2}\mathbf{\nabla}\chi(\mathbf{R})- \frac{e}{c}\mathbf{A}(\mathbf{R}). \tag{13}\]
This allows phase gradients and vector potentials to be treated on an equal footing, and leads to the Doppler shifted quasiparticle energy \(z_{p}=z-\mathbf{v}_{\rm F}(\mathbf{p}_{\rm F})\cdot\mathbf{p}_{\rm s}(\mathbf{ R})\) in the Eilenberger equation Eq. (2) [200; 201; 202], thus also influencing the LDOS in Eq. (12).
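To illustrate how Eqs. (12) and (13) enter in practice, the following is a minimal sketch (homogeneous bulk only, not a self-consistent calculation) using the textbook bulk limit of the retarded propagator, \(g^{\mathrm{R}}=-\pi z/\sqrt{|\Delta|^{2}-z^{2}}\) with \(z=\varepsilon+i\delta\), where a uniform superflow enters through the Doppler shift \(z\to z-\mathbf{v}_{\mathrm{F}}\cdot\mathbf{p}_{\mathrm{s}}\). The gap functions and parameter values are purely illustrative.

```python
import numpy as np

NF = 1.0                      # normal-state DOS per spin (sets the overall scale)
delta0 = 1.76                 # bulk gap amplitude in units of Tc (illustrative)
broadening = 0.02             # small positive broadening delta
n_fs = 720                    # discretization of a circular Fermi surface
theta = np.linspace(0.0, 2 * np.pi, n_fs, endpoint=False)

def ldos(eps, gap_on_fs, ps=0.0):
    """LDOS from Eq. (12); `ps` is a uniform superflow along x, given as the energy v_F*|p_s|."""
    vf_dot_ps = np.cos(theta) * ps                 # v_F direction (cos, sin) on a circular FS
    z = eps - vf_dot_ps + 1j * broadening          # Doppler-shifted energy, Eq. (13)
    gR = -np.pi * z / np.sqrt(gap_on_fs**2 - z**2)
    return -(2 * NF / np.pi) * np.mean(gR.imag)    # FS average for a circular FS

energies = np.linspace(-3.0, 3.0, 601)
gap_isotropic = delta0 * np.ones_like(theta)                   # fully gapped (chiral) bulk
gap_nodal = delta0 * np.sqrt(2) * np.abs(np.cos(2 * theta))    # single nodal d-wave component

N_gapped = [ldos(e, gap_isotropic) for e in energies]
N_nodal = [ldos(e, gap_nodal) for e in energies]
N_doppler = [ldos(e, gap_isotropic, ps=0.3 * delta0) for e in energies]
# N_gapped shows a fully gapped BCS-like spectrum, N_nodal a V-shaped nodal spectrum,
# and N_doppler illustrates how a finite superflow Doppler shifts states into the gap.
```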
### Chiral superconductivity
We consider spin-singlet chiral \(d\)-wave superconductivity, modelled using an attractive pair potential for the two irreducible \(d\)-wave representations \(\Gamma\in\{d_{x^{2}-y^{2}},d_{xy}\}\) with \(\eta_{d_{x^{2}-y^{2}}}(\theta_{\rm F})=\sqrt{2}\cos(2\theta_{\rm F})\) and \(\eta_{d_{xy}}(\theta_{\rm F})=\sqrt{2}\sin(2\theta_{\rm F})\). Following the notation in Eq. (5), the resulting order-parameter components \(\Delta_{d_{x^{2}-y^{2}}}(\mathbf{p}_{\rm F},\mathbf{R})\) and \(\Delta_{d_{xy}}(\mathbf{p}_{\rm F},\mathbf{R})\) are referred to as the nodal components. We initially assume that these channels are degenerate, since such a degeneracy is guaranteed by symmetry in any material with a three- or six-fold rotationally symmetric lattice [17], relevant for many of the recently proposed chiral \(d\)-wave superconductors [105; 106; 107; 108]. Still, for sake of full completeness, we later relax this assumption. Furthermore, we note that our theoretical framework includes other pair correlations allowed by symmetry, e.g. \(s\)-wave [174], while the possibility of additional attractive interactions in other pair channels is left as an outlook [203].
In order to better quantify chiral superconductivity, we transform the nodal order parameters to the eigenbasis
\[\eta_{\pm}(\mathbf{p}_{\rm F})\equiv e^{\pm i|M|\theta_{\rm F}}, \tag{14}\]
of the OAM operator \(\hat{L}_{z}^{\rm orb}=(\hbar/i)\partial_{\theta_{\rm F}}\) with eigenvalues \(l_{z}^{\rm orb}=\pm|M|\hbar\), yielding
\[\Delta(\mathbf{p}_{\rm F},\mathbf{R})=\Delta_{+}(\mathbf{p}_{\rm F},\mathbf{R })+\Delta_{-}(\mathbf{p}_{\rm F},\mathbf{R}) \tag{15}\]
with the chiral order parameter components
\[\Delta_{\pm}(\mathbf{p}_{\rm F},\mathbf{R})\equiv|\Delta_{\pm}(\mathbf{R})|e^ {i\chi_{\pm}(\mathbf{R})}\eta_{\pm}(\mathbf{p}_{\rm F}), \tag{16}\]
which are the two degenerate ground states in a bulk chiral superconductor. Below \(T_{\rm c}\) the system spontaneously chooses one of these as the dominant bulk chirality, e.g \(\Delta(\mathbf{p}_{\rm F},\mathbf{R})=\Delta_{+}(\mathbf{p}_{\rm F},\mathbf{R})\), while the opposite chirality \(\Delta_{-}(\mathbf{p}_{\rm F},\mathbf{R})\) becomes subdominant and vanishes asymptotically in the translationally invariant bulk [204]. Thus, the ground state of a chiral superconductor is described by a complex-valued order parameter that spontaneously breaks time-reversal symmetry [205; 206; 10], with a fully gapped bulk spectrum and Cooper pairs with an OAM \(l_{z}^{\rm orb}=\pm|M|\hbar\)[21]. Even (odd) \(|M|\) correspond to spin-singlet (spin-triplet), and \(|M|=1,2\) generate chiral \(p,d\)-wave order parameters, respectively. In this work we focus on spin-singlet chiral \(d\)-wave superconductivity with \(|M|=2\), such that \(\eta_{\pm}(\mathbf{p}_{\rm F})=[\eta_{d_{x^{2}-y^{2}}}(\mathbf{p}_{\rm F})\pm i \eta_{d_{xy}}(\mathbf{p}_{\rm F})]/\sqrt{2}\), which when equating Eq. (5) with Eq. (15) yields the relation between the two parametrizations
\[|\Delta_{d_{x^{2}-y^{2}}}(\mathbf{R})|e^{i\chi_{d_{x^{2}-y^{2}}}(\mathbf{R})}=\frac{1}{\sqrt{2}}\Big{(}|\Delta_{+}(\mathbf{R})|e^{i\chi_{+}(\mathbf{R})}+|\Delta_{-}(\mathbf{R})|e^{i\chi_{-}(\mathbf{R})}\Big{)}, \tag{17}\]
\[|\Delta_{d_{xy}}(\mathbf{R})|e^{i\chi_{d_{xy}}(\mathbf{R})}=\frac{i}{\sqrt{2}}\Big{(}|\Delta_{+}(\mathbf{R})|e^{i\chi_{+}(\mathbf{R})}-|\Delta_{-}(\mathbf{R})|e^{i\chi_{-}(\mathbf{R})}\Big{)}. \tag{18}\]
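For concreteness, Eqs. (17)-(18) and their inverse, \(\Delta_{\pm}=(\Delta_{d_{x^{2}-y^{2}}}\mp i\Delta_{d_{xy}})/\sqrt{2}\), amount to the simple linear basis change sketched below; the numerical values are illustrative only.

```python
import numpy as np

def nodal_to_chiral(d1, d2):
    """(Delta_{x^2-y^2}, Delta_{xy}) -> (Delta_+, Delta_-)."""
    return (d1 - 1j * d2) / np.sqrt(2), (d1 + 1j * d2) / np.sqrt(2)

def chiral_to_nodal(dp, dm):
    """(Delta_+, Delta_-) -> (Delta_{x^2-y^2}, Delta_{xy}), i.e. Eqs. (17)-(18)."""
    return (dp + dm) / np.sqrt(2), 1j * (dp - dm) / np.sqrt(2)

# Example: pure dominant chirality Delta_+ with an m*2*pi phase winding, as found far
# outside an antiparallel CV (m = -4), evaluated at polar angle phi.
phi, m = 0.7, -4
dp, dm = np.exp(1j * m * phi), 0.0
d1, d2 = chiral_to_nodal(dp, dm)
assert np.allclose(nodal_to_chiral(d1, d2), (dp, dm))   # round trip is exact
print(d1, d2)  # equal nodal amplitudes 1/sqrt(2), with phases differing by pi/2
```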
In a chiral superconductor, the topological invariant is the Chern number \(M\) corresponding to the winding of the superconducting order parameter on the FS and giving rise to \(|M|\) chiral edge modes traversing the bulk gap whenever the topological invariant changes, in particular at vacuum interfaces but also at domain walls [10; 11; 12; 13; 14; 15; 16; 17]. While these edge modes are topologically protected, they generate chiral edge currents and OAM which are not [126; 7; 127]. Furthermore, close to the edges, the opposite (subdominant) chirality is often also locally induced, such that the order parameter takes the more general form in Eq. (15). This extends more generally to other forms of spatial inhomogeneities such as domain walls and vortices, and we therefore always use the most general form in Eq. (15) in our calculations, allowing for a completely general spatial dependence of both amplitudes and phases. We note that this in principle allows the system to go into a different state, e.g. a nodal \(d\)-wave or nematic \(d\)-wave state [207], but we always find the chiral state to be robust.
Chiral superconductors also host domain walls, which are topological defects separating regions of opposite dominant chirality [28; 29; 1]. Domain walls thus have \(|M|\) chiral edge modes on each side with opposite winding [30], also generating chiral currents on either side. These currents, together with the exchange of chirality across the domain wall, lead to a slight increase in free energy and an effective line tension [70]. This usually makes domain walls metastable, but they are often trapped and further stabilized by pinning, geometric effects, and vortices [208].
Just like any superconductor, a chiral superconductor can also host vortex defects. A chiral superconductor with total vorticity \(m\) is associated with an \(m\times 2\pi\) quantized phase winding in the dominant chiral component [56], i.e. \(\chi_{+}(\mathbf{R})\approx m\phi\) along any path sufficiently far from and encircling all vortex defects. Abrikosov vortices (antivortices [209; 210; 211; 212; 213; 214]) correspond to \(m=-1\) (\(m=+1\)) in positive external flux \(\Phi_{\rm ext}>0\), and vice versa for
negative flux, also with a corresponding \(2\pi\) phase winding in each nodal component \(\chi_{d_{x^{2}-y^{2}}}(\mathbf{R})\) and \(\chi_{d_{xy}}(\mathbf{R}^{\prime})\) if the vortex cores are overlapping, \(\mathbf{R}=\mathbf{R}^{\prime}\). Spatially separating the nodal winding centres \(\mathbf{R}\neq\mathbf{R}^{\prime}\) leads to a disassociation of the Abrikosov vortex into two fractional vortices, one for each winding center, and to a Josephson-like term in the free energy that usually grows with the separation distance [215; 216; 75], thus making the fractional vortices unstable. However, inside a domain wall such a separation typically becomes favorable instead [70]. Furthermore, the slight suppression of the total order parameter in the domain wall acts as an attractive pinning center for Abrikosov vortices, providing a mutual stabilization of the domain wall and fractional vortices [217], and thereby a mechanism for forming a CV as demonstrated in the next section III.1.
Finally, changing magnetic flux direction allows for the vorticity to either be aligned antiparallel or parallel with the chirality, which leads to inequivalent vortices and also to inequivalent CVs. We illustrate this by first considering the total angular momentum, \(\hat{L}_{z}=\hat{L}_{z}^{\mathrm{orb}}+\hat{L}_{z}^{\mathrm{c.m.}}\), and the winding quantization. Here, \(\hat{L}_{z}^{\mathrm{orb}}\) is the OAM generated by chirality as explained earlier in this subsection, while \(\hat{L}_{z}^{\mathrm{c.m.}}=(\hbar/i)\partial_{\phi}\) is the generator of c.m. angular momentum with eigenvalue \(l_{z}^{\mathrm{c.m.}}=m\hbar\) for a state with vorticity \(m\). Thus, the total angular momentum of the Cooper pair is \(l_{z}=l_{z}^{\mathrm{orb}}+l_{z}^{\mathrm{c.m.}}=(M+m)\hbar\), and is therefore a superposition between the OAM generated by chirality (i.e. Chern number) and the c.m. angular momentum generated by vorticity (i.e. winding quantization). Thus, antiparallel (parallel) alignment of vorticity and chirality leads to a negative (positive) superposition of the total angular momentum. Similarly, the phase winding of the subdominant chirality also shows such a behavior. Close to a vortex defect, the subdominant chirality is generally induced with finite amplitude and phase \(\chi_{-}(\mathbf{R})\approx p\phi\). The quantized phase winding \(p\) is constrained according to the relation [136; 56]
\[p=m+2M+n, \tag{19}\]
here with integer \(n\) capturing higher-order harmonics generated by e.g. a non-circular system or anisotropic FS [191]. Although such terms are often unimportant [56], we include them in this work for completeness. Equation (19) shows that the phase winding of the subdominant component is also a superposition of the vorticity and Chern number and can therefore be minimized (maximized) for an antiparallel (parallel) alignment.
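As a quick worked example of Eq. (19), neglecting higher harmonics (\(n=0\)), the two quadruply quantized CVs studied below follow from simple arithmetic:

```python
# Check of the winding constraint p = m + 2*M + n, Eq. (19), for the quadruply
# quantized CV in a chiral d-wave superconductor (Chern number M = 2), with n = 0.
M, n = 2, 0
for m, label in [(-4, "antiparallel"), (+4, "parallel")]:
    p = m + 2 * M + n
    print(f"{label:>12s}: vorticity m = {m:+d} -> subdominant winding p = {p}")
# antiparallel: p = 0 (axisymmetric CV), parallel: p = 8 (eight external winding centers)
```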
## III Coreless vortices
We begin this section by briefly summarizing the basic structure and properties of CVs in spin-singlet chiral \(d\)-wave superconductors in Sec. III.1. In Sec. III.2 we discuss the stability and formation of CVs, and that the most stable CV is typically quadruply quantized in chiral \(d\)-wave superconductors.
### Coreless vortex structure
In this subsection we summarize the basic properties of antiparallel and parallel CVs. In comparison to our earlier work [136], we here choose to study a somewhat smaller system with slightly different parameters, to illustrate that the important qualitative features do not depend on such parameters. Note that the spatial inhomogeneities induced by the CVs therefore are significant compared to the system size. Still, when we use the term 'dominant bulk chirality', we refer to the spontaneously chosen ground state chirality in the absence of vorticity, or equivalently, the dominant chirality in a much larger but otherwise analogous system. For reference, see Appendix A showing that the important qualitative features discussed here remain in systems with radius \(\mathcal{R}\geq 150\xi_{0}\), i.e. an order of magnitude larger.
Figure 1 (Fig. 2) shows an antiparallel (parallel) CV in a disc-shaped system with radius \(\mathcal{R}=15\xi_{0}\), dominant bulk chirality \(\Delta_{+}\), temperature \(T=0.1T_{\mathrm{c}}\), penetration depth \(\lambda_{0}=10\xi_{0}\), and external flux \(\Phi_{\mathrm{ext}}=7.5\Phi_{0}\) (\(\Phi_{\mathrm{ext}}=-7.5\Phi_{0}\)). The first (second) row shows the amplitudes and phases of the nodal (chiral) components, while the third row shows the charge-current density and induced magnetic-flux density. Each nodal component has four \(2\pi\)-phase windings that suppress the corresponding nodal amplitude but not the other nodal component. Since they lie at different coordinates for the two nodal components, the total order parameter is everywhere non-singular with no normal-state regions or core. The vortices are therefore fractional, in contrast to singular Abrikosov vortices which have spatially overlapping phase windings. A total of eight fractional vortices lie on a circularly (octagonally) formed domain wall, the latter seen in the second row of Fig. 1 (Fig. 2), where it separates the outer and inner regions of dominant chiralities \(\Delta_{+}\) and \(\Delta_{-}\), respectively. There is a total vorticity of \(m=\pm 4\) in the disc, seen by the \(m\times 2\pi\) winding of the dominant chirality \(\chi_{+}(\mathbf{R})=m\phi\) in the outer region, which means this is a quadruply quantized CV. There is no phase-winding in the inner region, since there the dominant phase is constant \(\chi_{-}=0\), indicating that the vorticity is distributed along the domain wall. We next turn to the subdominant phase, which shows a \(\pi\)-shift across the domain wall in Fig. 1, which further stabilizes the structure but is otherwise unimportant [136]. Thus, apart from this phase shift, the phase \(\chi_{-}\) is completely trivial in Fig. 1. In contrast, Fig. 2 shows a total of eight winding centers in the subdominant component \(\chi_{-}\) in the outside region. These results are in full agreement with the phase winding constraint in Eq. (19), with \(p=-4+4=0\) (\(p=4+4=8\)) in Fig. 1 (Fig. 2) corresponding to a CV with antiparallel (parallel) alignment of vorticity and chirality. Importantly, Fig. 2 shows that the winding centers lie outside the CV, spontaneously breaking axial symmetry, as defined by the winding not being generated by rotation around a single central axis. This occurs in order to lower the free energy, since the hypothetical axisymmetric state with \(p=8\) would correspond to a giant vortex with a large normal core [56], consequently suppressing superconductivity and increasing the free energy. We have verified that such a giant vortex is indeed unstable. In contrast, the axially symmetry-breaking CV is stable since it avoids the energy penalty, while still lowering the kinetic energy caused by the external flux. Thus, the antiparallel (parallel) CV is axisymmetric (non-axisymmetric) with a continuous (discrete eight-fold) rotational symmetry. Our earlier work showed that this leads to a smoking-gun signature in the LDOS of both the topologically protected and quantized Chern number and vorticity [136]. The third row shows a corresponding rotation symmetry of the charge-current density and induced magnetic flux, with multiple sign changes due to the chiral edge modes, domain wall, and overlapping Meissner screening currents. The paramagnetism is maximal along the domain wall, leading to a characteristic ring-like magnetic structure, in contrast to a point-like structure of an Abrikosov vortex [136].
Overall these results demonstrate the structure and basic properties of quadruply quantized CVs in chiral \(d\)-wave superconductors, i.e. quadruple-quantum vortices. This is the chiral \(d\)-wave extension of the double-quantum vortex in chiral \(p\)-wave superfluids [56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66]. Beyond this comparison, we note that an extension between the two different systems can in general be very non-trivial due to the different spin-symmetries and angular momentum quantization, and therefore it is not _a priori_ certain that the same kind of vortex defects are even stable in both systems, let alone have the same qualitative properties. For example, the parallel CV in Fig. 2 shows multiple sign changes, compared to no sign changes for the parallel CV in a chiral \(p\)-wave double-quantum vortex reported in Ref. [56]. More generally, "\(p\)-wave is special" [128] in many regards compared to all systems with higher Chern number, e.g. when it comes to the chiral edge currents and OAM [127, 128, 129, 130, 131, 132, 133, 134, 135].
### Stability and coreless vortex formation
We next discuss the stability and formation of CVs. The peculiar combination of a domain wall and vorticity in a CV allows the system to carry finite vorticity, which reduces the kinetic energy caused by external flux but without paying the price of a normal core. This is significant, since the superconducting state is per definition the most energetically favorable state below the second critical field \(B_{\mathrm{c,2}}(T)\). The CV will thus be energetically more favorable than Abrikosov vortices if this gain outweighs the cost of the domain wall. However, the CV is very robust even when there are other vortex configurations with a lower free energy, i.e. even when
technically metastable. This is a general feature of both Abrikosov vortices and CVs, related to the fact that they are topological defects that cannot be trivially removed from the system. They typically have to enter and exit the system via the edges, but such entrance and expulsion is hampered by large energy barriers, e.g. geometric and Bean-Livingston barriers [218, 219, 220, 221, 222]. Moreover, vortex motion is hampered by pinning and dissipation associated with normal-state resistance. Thus, once a particular arrangement of vortex defects has entered the system, it can become extremely robust even far into the flux-temperature regime where other vortex arrangements technically have even a significantly lower energy. Summarized briefly, experiments to a large degree observe metastable states [223], and such behavior is also typical in self-consistency simulations. Thus, the most relevant question is not necessarily whether a particular vortex configuration has the lowest energy, but if the necessary conditions for its formation can be prepared [220].
We see the vortex stability repeatedly in our simulations, both for CVs and Abrikosov vortices. In particular, we find that CVs spontaneously enter the system instead of Abrikosov vortices in certain parameter regimes, or can easily form when both domain walls and vortices are present. In the latter scenario, we find that the domain wall attracts and pins the Abrikosov vortices and, upon entering the domain wall, they disassociate into fractional vortices that lowers the free energy [215, 216, 57, 57, 70]. Conversely, to break the CV, the vortices have to exit the domain wall or the domain wall has to disappear. However, such vortex expulsion is prevented by the pinning, and more importantly, the instability of the fractional vortices outside the domain wall. Thus, fractional vortices typically first have to recombine to a regular Abrikosov vortex before expulsion, but such re-combination increases the free energy. For the domain wall to disappear, it either has to shrink to zero size or expand to the system edges. Such shrinking is however prevented by the strong repulsive interaction between vortices, while expansion of the CV is counter-balanced by the attractive interaction (line tension) caused by the domain-wall currents, as well as the repulsive interaction between the fractional vortices and system boundaries. Among the very rare instances where we find the CV becoming unstable, it is this latter scenario that seems the most plausible; the line tension is significantly modified by non-degenerate order-parameter components (e.g. competing nodal superconductivity discussed in Sec. VIII) or the CV expanding to the system edge combined with a flux-temperature combination very far from the energy minimum (e.g. in Sec. IV). Apart from these scenarios, we find the CV to be extremely robust in all our calculations and often spontaneously appearing, even in the presence of strong perturbations, disorder, and when there are other vortex configurations with considerably lower free energy.
Finally, we discuss the most stable CV, which we generally find to be the quadruply quantized CV with \(|m|=4\), shown in Figs. 1 and 2, and discussed in our previous work [136]. This is easy to understand for the antiparallel CV, since it corresponds to the special commensurate scenario, such that the phase winding of the subdominant chirality vanishes \(p=m+2M=0\) [Eq. (19)]. In contrast, a finite phase winding \(p>0\) would either suppresses superconductivity if axisymmetric (thus costing energy), or increase the phase winding generating a modified superfluid momentum and line tension if non-axisymmetric (also costing energy). Furthermore, beyond commensurability, there is also the matter of balancing the repulsive versus attractive interactions, which overall stabilize the CV and its finite size (as discussed in Sec. IV), which is then important for the parallel CV, since there cancellation in \(p\) is impossible by definition. Considering for example higher vorticity \(|m|>2|M|\), this leads to increased repulsion, but also modified line tension due to the additional phase windings in \(p\). As a consequence, the CV becomes less energetically favorable and less robust. The latter is also true for lower vorticity \(|m|<2|M|\), as there here might no longer be enough vortices to stabilize the domain wall. We verify these arguments during the extensive self-consistency calculations of the present work, including the large parameter ranges and model comparisons. Although we have found some parameter regimes where CVs with higher or lower vorticity become metastable rather than completely unstable, these states were generally less favorable and significantly more difficult to get to appear in the system. In summary, the commensurate scenario \(m=-2M\) allows the antiparallel CV to be coreless with maximized order parameter, leading to quadruple-quantum vortices in chiral \(d\)-wave superconductors, and more generally \(2|M|\)-quantum vortices for other chiral superfluids.
## IV Tunable coreless vortex size
This section demonstrates the large tunability of the CV size, via easily accessible parameters in experiment such as external flux \(\Phi_{\rm ext}\) and temperature \(T\), but also via the penetration depth \(\lambda_{0}\) and system size \(\mathcal{R}\). For all the parameter ranges considered in this section, we note that the overall qualitative features presented in Figs. 1 and 2 remain the same.
The CV has a finite radius, \(\mathcal{R}_{\rm CV}\), balanced by attractive and repulsive interactions, acting to contract and expand the CV, respectively [220]. The attractive interaction is exerted by the effective line tension from the domain wall and its chiral currents [70], while there is a mutual repulsive interaction between the fractional vortices in the domain wall [215, 75]. Hence, a closed domain wall will typically collapse and disappear in the absence of vorticity (we have verified this in our self-consistent calculations) [70]. Furthermore, anything influencing the currents or vortices will change the balance, and therefore also \(\mathcal{R}_{\rm CV}\). This is also further demonstrated by studying
the interaction between CVs and Abrikosov vortices in Sec. V or with the system edges in Sec. VI.
We start by describing how to unambiguously define and calculate \(\mathcal{R}_{\mathrm{CV}}\) for antiparallel and parallel CVs. The midpoint of the CV is always well-defined, and a straight line across this point will generally intersect the domain wall of the CV twice, i.e. in two different points with degeneracy \(|\Delta_{+}|=|\Delta_{-}|\) as indicated in Fig. 3(a). We note that these points generally coincide with the maximum of the zero-energy LDOS [136]. For the antiparallel CV, the CV diameter is the distance between the intersection points and is independent of the angle, and \(\mathcal{R}_{\mathrm{CV}}\) is therefore unambiguously defined as half this distance as in Fig. 3(a). For the parallel CV, we instead define \(\mathcal{R}_{\mathrm{CV}}\) from the average half distance for all angles, as displayed in Figs. 3(b) where the thick red line shows the numerically extracted point \(|\Delta_{+}|=|\Delta_{-}|\) and arrows show the minimum (maximum) radius \(\mathcal{R}_{\mathrm{CV}}^{\mathrm{min}}\) (\(\mathcal{R}_{\mathrm{CV}}^{\mathrm{max}}\)). However, we find that \(\mathcal{R}_{\mathrm{CV}}\) is practically unambiguously defined even for the parallel CV, since \(\Delta\mathcal{R}_{\mathrm{CV}}\equiv\mathcal{R}_{\mathrm{CV}}^{\mathrm{ max}}-\mathcal{R}_{\mathrm{CV}}^{\mathrm{min}}\lesssim 1\xi_{0}\) in all our simulations across all parameter ranges.
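A minimal sketch of this extraction procedure is given below, assuming a hypothetical regular simulation grid spanning roughly \([-\mathcal{R},\mathcal{R}]^{2}\) with the CV midpoint near the origin; the domain wall is located as the innermost sign change of \(|\Delta_{+}|-|\Delta_{-}|\) along rays cast from the midpoint, and \(\mathcal{R}_{\mathrm{CV}}\), \(\mathcal{R}_{\mathrm{CV}}^{\mathrm{min}}\) and \(\mathcal{R}_{\mathrm{CV}}^{\mathrm{max}}\) follow as the mean, minimum and maximum crossing distance. The function name and grid layout are illustrative assumptions, not part of any particular framework.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

def cv_radius(x, y, abs_dp, abs_dm, center=(0.0, 0.0), n_rays=360):
    """Mean, min and max distance from `center` to the degeneracy line |Delta_+| = |Delta_-|.

    x, y          : 1D grid coordinates spanning the sample
    abs_dp, abs_dm: 2D arrays |Delta_+|(x, y) and |Delta_-|(x, y) on that grid
    """
    diff = RegularGridInterpolator((x, y), abs_dp - abs_dm,
                                   bounds_error=False, fill_value=None)
    r_max = 0.95 * min(np.abs(x).max(), np.abs(y).max())
    r_grid = np.linspace(0.05 * r_max, r_max, 400)
    radii = []
    for phi in np.linspace(0.0, 2.0 * np.pi, n_rays, endpoint=False):
        pts = np.column_stack((center[0] + r_grid * np.cos(phi),
                               center[1] + r_grid * np.sin(phi)))
        vals = diff(pts)
        crossings = np.where(np.diff(np.sign(vals)) != 0)[0]
        if crossings.size:                       # innermost crossing = CV domain wall
            radii.append(r_grid[crossings[0]])
    radii = np.asarray(radii)
    return radii.mean(), radii.min(), radii.max()
```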
In Fig. 3(c) we illustrate that \(\mathcal{R}_{\mathrm{CV}}\) can be effectively tuned by an externally applied magnetic field. Specifically, \(\mathcal{R}_{\mathrm{CV}}\) decreases as \(|\Phi_{\mathrm{ext}}|\) increases, since the currents grow in magnitude, while the distance between fractional vortices decreases (hence an overall stronger contraction). This is in a sense analogous to how larger flux causes smaller vortex separation and denser vortex lattices in regular type-II superconductors [224]. Similarly, a shorter penetration depth \(\lambda_{0}\) also leads to a smaller vortex-vortex separation, implying a smaller effective repulsive interaction as seen in Fig. 3(d). The overall dependence on \(\lambda_{0}\) can be divided into two regimes: \(\lambda_{0}<\mathcal{R}\), where screening becomes considerable and strongly modifies \(\mathcal{R}_{\mathrm{CV}}\), and \(\lambda_{0}>\mathcal{R}\), where the system is poorly screened and the effect is minimal. In the limit of small \(\lambda_{0}\) such that \(\xi_{0}\lesssim\lambda_{0}\ll\mathcal{R}\), the CV radius is almost completely determined by the screening regardless of system size, while in the opposite limit of large \(\lambda_{0}\), \(\mathcal{R}_{\mathrm{CV}}\) eventually reaches the asymptotic limit \(\lambda_{0}\to\infty\) (zero screening). We here note that the penetration depth is a materials property, which can be modified by the inclusion of impurities, as non-magnetic and magnetic impurities typically increase and decrease the penetration depth, respectively [225].
Figure 3(e) shows that \(\mathcal{R}_{\mathrm{CV}}\) decreases at lower temperatures. We interpret the overall temperature dependence to be directly proportional to the effective coherence length \(\xi_{\mathrm{eff}}\equiv\hbar v_{\mathrm{F}}/|\Delta(T)|\), which reduces but saturates at small temperatures (due to saturating \(|\Delta(T)|\)) and increases dramatically at large temperature (due to vanishing \(|\Delta(T)|\)). Note that this is consistent with stronger but saturating chiral currents at lower temperatures, hence increasing the contraction. Of course, \(\mathcal{R}_{\mathrm{CV}}\) is strictly limited by the size of the system (relative to the coherence length), consistent with the observed small (large) temperature dependence in small (large) systems in Fig. 3(e). Specifically, in the small systems there is a strong overlap between the boundary and CV at all temperatures leading to saturation, while the much weaker overlap in larger systems leaves considerably more room for variation in \(\mathcal{R}_{\mathrm{CV}}\) with \(T\). We note an overall trend that \(\mathcal{R}_{\mathrm{CV}}\to 0.4\mathcal{R}\) for large temperature, for all system sizes \(\mathcal{R}\) considered in our simulations. In other words, the system size, and more generally surrounding environment, can strongly influence the maximum \(\mathcal{R}_{\mathrm{CV}}\) and its temperature dependence.
Figure 3(f) shows directly how \(\mathcal{R}_{\mathrm{CV}}\) increases with system size \(\mathcal{R}\). This is a mesoscopic finite-size effect, which can be divided into two regimes, corresponding to small and large \(\mathcal{R}\). For small \(\mathcal{R}\), the CV-induced currents strongly overlap with the chiral edge currents of the system. More importantly, the system edges impose an energy barrier [174, 218, 221, 222, 175, 220, 221, 222] and an effective repulsive interaction (at least at sufficiently high flux), which contracts the CV. This effect is also seen in Sec. VI,
and is well-known for vortex lattices, leading to a number of interesting mesoscopic finite-size and shape effects [226, 227, 228, 229, 230, 231, 232, 233, 234], see also Ref. [174] and references therein. For large \(\mathcal{R}\), the repulsion from the edges eventually becomes negligible, but there is still a slow asymptotic behavior of \(\mathcal{R}_{\rm CV}\) which we interpret to be due to a slow saturation also present in properties related to the spectrum and chiral currents surrounding the CV.
In summary, these results demonstrate a strong tunability of the CV size, traced back to the effective attractive and repulsive interactions balancing the finite size [220], but also to the effective coherence length and its dependence on the superconducting gap. We note that while there are significant differences in the LDOS for the antiparallel and parallel CVs, due to symmetry breaking for the latter, the overall CV size is roughly similar for both CVs. In the following Secs. V and VI we further demonstrate a tunability of the CV shape in the presence of other vortices or anisotropic effects.
## V Interaction with Abrikosov Vortices
In this section, we address how coexistence with Abrikosov vortices changes the CV shape. In Figs. 4 and 5 we show results for an antiparallel CV, and in Fig. 6 for a parallel CV. Before describing these figures in detail, we note that unlike the mostly point-like Abrikosov vortex, the CV has an intrinsic structure whose shape is to a large degree set by the repulsive interaction between its fractional vortices and their interaction with the environment, as established in the previous section IV. For example, the CV interacts repulsively with other vortices, whether Abrikosov vortices or other CVs, and attractively with antivortices. This section also establishes the robustness of both the CV and its distinctive LDOS signature in the presence of such strong perturbations as additional vortices, and furthermore shows the distinctly different LDOS signatures of CVs versus Abrikosov vortices. We note that the section also essentially studies the interaction between fractional vortices and regular Abrikosov vortices. Apart from illustrating all these aspects, combinations of CVs and Abrikosov vortices are reasonable to expect in a chiral \(d\)-wave superconductor, as discussed at the end of the section.
In Fig. 4 we present in the different columns the order parameter amplitudes and phases, while each row represents a different configuration of one antiparallel CV with one or more Abrikosov vortices or an antivortex. We start by analyzing the amplitudes and the overall CV shape, and then analyze the phases. The figure explicitly shows how the Abrikosov vortices completely suppress both the nodal components \(|\Delta_{d_{x^{2}-y^{2}}}|\) and \(|\Delta_{d_{xy}}|\) at its core, as well as both the chiral components \(|\Delta_{+}|\) and \(|\Delta_{-}|\). By contrast, the fractional vortices in the domain wall of the CV only suppress the corresponding nodal component. Importantly, the figure shows significant modification of the overall CV size and shape. To explain these results, we note that at a sufficiently high external flux, Abrikosov vortices and CVs are both repelled from the system edges, related to the geometric barriers and the Bean-Livingston barrier [221, 222, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 204, 206, 207, 208, 209, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234]. This confinement leads to relatively small distances between Abrikosov vortices and the CV, which deforms the CV due to the mutual repulsive interaction. The resulting shape depends on the exact number and spatial arrangement of the Abrikosov vortices. Hence, we find that different deformation modes appear, as clearly seen in rows three to six. However, we note that the CV still keeps an overall elliptical form, which is clearly traced back to its original unperturbed circular form. If the Abrikosov vortex is instead situated at the center of the CV (first row), it is trapped and the CV expands due to the mutual repulsive interaction. If instead an antivortex is situated inside the CV (second row), it attracts the CV, which then shrinks substantially. However, we find that this configuration is always unstable unless pinning centers are artificially added (thus stabilizing the configuration), since the slightest deviation will otherwise fully attract the antivortex into the CV domain wall where it will be annihilated against two of the fractional vortices. We note that all other results and scenarios considered here are very robust even without such pinning, and we only choose to plot the antivortex scenario as it clearly illustrates how competing attractive and repulsive interactions set the overall shape and size of the coreless vortex. Specifically, all other results show a fully converged self-consistent solution, stabilized and trapped in the system by large energy barriers, and corresponding to a minimum of free energy.
Next, we study the phases and note in particular that the dominant chiral phase (i.e. \(\chi_{\pm}\) outside and inside the CV, respectively) always winds according to the vorticity \(m\), both locally around each vortex defect, and globally around the perimeter of the disc. For example, consider positive external flux and a vortex defect located at \((x,y)=(x_{0},y_{0})\) with winding \(m\), where \(m=\mp 1\) for Abrikosov vortices and antivortices, respectively, while \(m=-4\) for the CV considered here. Close to \((x_{0},y_{0})\), the dominant phase is described by \(\chi_{+}(\mathbf{R})\approx m\phi\) with polar angle \(\phi\). Far from all the vortices at the disk perimeter, the dominant chiral phase globally winds \(\chi_{+}(\mathbf{R})\approx m_{\rm tot}\phi\), with total vorticity \(m_{\rm tot}=-(N_{\rm V}+4N_{\rm CV})+N_{\rm AV}\), where \(N_{\rm V}\) counts the number of vortices, \(N_{\rm CV}\) the number of CVs, and \(N_{\rm AV}\) the number of antivortices. Thus, from top to bottom row, \(m_{\rm tot}=-(1+4)\), \(m_{\rm tot}=-4+1\), \(m_{\rm tot}=-(4+1)\), \(m_{\rm tot}=-(4+2)\), \(m_{\rm tot}=-(4+3)\), \(m_{\rm tot}=-(4+4)\). We also find that the winding constraint \(p=m+2M\) from Eq. (19) for the subdominant chiral phase is always fulfilled.
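The same bookkeeping can be written compactly as below (positive external flux, each CV contributing \(m=-4\)); the printed values reproduce the six rows of Fig. 4 from top to bottom.

```python
# Global phase winding from counting vortex defects, as described above.
def total_vorticity(n_vortices, n_coreless, n_antivortices):
    return -(n_vortices + 4 * n_coreless) + n_antivortices

# (N_V, N_CV, N_AV) for the six rows of Fig. 4, from top to bottom:
configs = [(1, 1, 0), (0, 1, 1), (1, 1, 0), (2, 1, 0), (3, 1, 0), (4, 1, 0)]
print([total_vorticity(*c) for c in configs])   # [-5, -3, -5, -6, -7, -8]
```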
In Fig. 5 we display the spatially-resolved LDOS for the exact same systems and solutions as in Fig. 4, where each column is taken for a different fixed subgap energy \(\varepsilon\) (i.e. bias voltage). Importantly, at low energies, each Abrikosov vortex appears as a point-like peak representing the Caroli-de-Gennes-Matricon states [232, 233, 234, 235, 236, 237], which expands to a size of roughly \(\sim 1\xi_{0}\) at higher energies. By contrast, the CV appears as a ring-like peak that is an order of magnitude larger already at zero energy, \(\mathcal{R}_{\rm CV}\sim 10\xi_{0}\). The CV expands into two concentric rings at higher energies, corresponding to the combined superflow generated by vorticity and the edge modes on either side of the domain wall. The intensities of the subgap states in the CV and Abrikosov vortex are also separated by an order of magnitude, but the LDOS peak of the CV should still be observable as it can be significantly larger than the coherence peak and is tunable by both temperature and flux, as shown in our earlier work [136]. Notably, as the CV is deformed by the Abrikosov vortices, we also see how the LDOS is correspondingly deformed in rows 2-6. Thus the LDOS is explicitly tracking the CV shape.
Figure 4: Systems containing both an antiparallel CV and Abrikosov vortices or antivortices, in a disc-shaped system with radius \(\mathcal{R}=25\xi_{0}\), dominant bulk chirality \(\Delta_{+}\), at \(T=0.1T_{\rm c}\) and \(\lambda_{0}=80\xi_{0}\). Columns show, from left to right, the chiral and nodal amplitudes, then the chiral and nodal phases. In the first (second) row, an Abrikosov vortex (antivortex) is trapped inside the CV; the remaining rows show an increasing number of Abrikosov vortices (one to four) outside the CV. The external flux is, from top to bottom row: \(8\Phi_{0}\), \(8\Phi_{0}\), \(12\Phi_{0}\), \(12\Phi_{0}\), \(14\Phi_{0}\), and \(14\Phi_{0}\) in order to stabilize the various configurations.
Figure 6 shows the LDOS for similar combinations of a CV with Abrikosov vortices, but now for a parallel CV (\(\Phi_{\rm ext}<0\) such that \(m=+4\) and \(p=8\) instead of \(m=-4\) and \(p=0\)), and without the antivortex scenario. For completeness, Appendix B contains a plot of the corresponding order parameter amplitudes and phases for these scenarios (i.e. the analogue of Fig. 4). The overall trend in Fig. 6 is similar to that of the antiparallel CV, but importantly, we note that the distinct LDOS signature of the axial symmetry breaking is clearly present, including the eightfold symmetry related to \(p=m+2M=8\) for the CV (instead of \(p=0\) for the antiparallel CV). Hence, despite the strong local perturbation caused by the presence of Abrikosov vortices, the overall Doppler shift caused by finite superflow from the \(p=8\) winding centers remains clearly distinguishable, as therefore does the direct signature of the Chern number \(M\). Interestingly, the last row of Fig. 6 was initialized with four vortices outside the CV (i.e. the same arrangement as the last row of Fig. 5), but during the self-consistency loop, one of the vortices was spontaneously absorbed into the center of the CV, thereby lowering the free energy. Notably, this is a self-consistent and robust solution, illustrating that it is realistic to study and expect the appearance of configurations with an Abrikosov vortex trapped inside the CV. Importantly, during the trapping of the Abrikosov vortex in the self-consistency loop, the vortex was at some point located at the domain wall of the CV where it could have been disassociated into fractional vortices, thus leading to a higher quantized CV with \(m=+5\) and \(p=+9\). However, the displayed solution with \(m=4\) and \(p=8\) was still preferred. Hence, this is another strong indication that the quadruply quantized CV is the most robust CV in chiral \(d\)-wave superconductors.
Finally, on more general grounds, we point out that studying a combination of CVs and Abrikosov vortices is relevant, since both are robust topological defects and can thus appear simultaneously in a sample. This is further supported by the high energy barriers associated with vortex dynamics, meaning that a particular vortex solution can be trapped in the system far into its metastable regime [218, 219, 220, 221, 222], where another vortex solution technically has a lower energy but can still not enter the system. Generally, both Abrikosov vortices and domain walls can be "kicked" into the system by e.g. annealing and rapid quenches in temperature and flux, and they can be further stabilized and trapped by pinning centers and certain geometries [208, 238]. Indeed, we find that combinations of CVs and Abrikosov vortices spontaneously enter and stabilize in our self-consistency calculations for different flux-temperature combinations.
In summary, these results show a robustness of the CV in the presence of Abrikosov vortices. At the same time, a tunability of the shape is demonstrated, although the CV
Figure 5: Same as Fig. 4, but with each column showing the LDOS at a fixed subgap energy \(\varepsilon\), with gap roughly \(1.76k_{\rm B}T_{\rm c}\).
shape can still be traced back to its original circular (octagonal) shape for the antiparallel (parallel) CV. Specifically, the LDOS at different energies appears as concentric and convex (concave) line segments, corresponding to the Doppler shifts caused by the axisymmetric (non-axisymmetric) superflow, which in turn is generated by the internal (external and internal) phase windings for the antiparallel (parallel) CV. We also note that these results give rise to an even stronger experimental signature in the LDOS, as the point-like Abrikosov vortex is distinctly different from the line-like CV. Finally, we propose that similar deformations might be caused by other strong local electromagnetic perturbations, e.g. an appropriately prepared STM tip with strong magnetization.
## VI Non-circular geometry and strong confinement
The previous two sections IV and V illustrated that the overall size and shape of the CV are balanced by effective attractive versus repulsive electrodynamical interactions, traced back to the domain-wall currents and fractional vortices, respectively. In this section, we further illustrate this via the interaction with the system edges, and show how confinement alone can induce asymmetric deformation modes in the CV. In addition, the results show that the LDOS signature remains robust and does not rely on the symmetry (or lack thereof) of the system itself.
Figure 7 shows an antiparallel (parallel) CV in odd (even) rows in systems with different shapes, where the columns show, from left to right: magnitude of the chiral order parameter components, charge-current density, and LDOS at different fixed subgap energies. The first two rows show a sample shaped like a pentagon, importantly illustrating that the overall circular versus octagonal rotation symmetries of the CVs remain, even when incommensurate with the rotation symmetry of the system. Furthermore, this is an example of a system with higher-order harmonics discussed in Sec. II.3, where Eq. (19) is modified with an additional term, such that \(p=m+2M+n\), here with integer \(n=-5\) due to
Figure 6: Same as Fig. 5, but for a parallel CV, without any antivortex scenario, and where one of the four vortices in the last row has spontaneously been trapped at the CV center. The system has dominant bulk chirality \(\Delta_{+}\), at \(T=0.1T_{c}\) and \(\lambda_{0}=80\xi_{0}\), and is exposed to negative external fluxes, from top to bottom: \(-8\Phi_{0}\), \(-12\Phi_{0}\), \(-12\Phi_{0}\), \(-14\Phi_{0}\), and \(-14\Phi_{0}\).
the five-fold rotational symmetry of the superconducting grain. This leads to additional phase gradients and therefore superflow, which in turn generates additional current components. This effect is responsible for helping the current turn the sharp corners of the system, which is a well-known effect in chiral superfluids [14]. Furthermore, the additional phase gradients and superflow also lead to a locally enhanced LDOS at the corners at finite energies, again via the Doppler shift discussed in relation to Eq. (13). As a result, the subdominant phase \(\chi_{-}\) of the antiparallel CV has integer winding \(p=-4+4+n=-5\), while \(p=4+4+n=3\) for the parallel CV. Hence, the higher-order harmonics is superimposed on the antiparallel versus parallel vorticity and chirality, as especially seen in the additional signatures with five-fold rotational symmetry in the LDOS at high energies in the last column, Fig. 7(h). Importantly, the higher-order harmonics still does not modify the overall strong signature of vorticity and chirality in the LDOS. In other words, the strong LDOS distinction between parallel and antiparallel CVs
Figure 7: CVs in various non-circular samples, with antiparallel (parallel) CVs in odd (even) rows. Columns, from left to right, show magnitude of the chiral order parameter components, charge-current density, and LDOS at different fixed subgap energies. Here, dominant bulk chirality is \(\Delta_{+}\), with \(T=0.1T_{c}\), \(\lambda_{0}=80\xi_{0}\), and with \(\Phi_{\text{ext}}=\pm 8\Phi_{0}\) for antiparallel and parallel CVs respectively.
remains robust. Next, the third and fourth rows show a completely irregular system without any rotation symmetry. Again, the LDOS signature is robust, but the sharp wedges together with the overall asymmetry between \(x\) and \(y\) directions cause a slight deformation of the CVs. The last two rows show a rectangular system, with an even stronger asymmetry between \(x\) and \(y\) directions. Due to the effectively repulsive interaction between the system edges and the fractional vortices in the CV, the resulting CV shape is strongly deformed, with a clear \(x\) and \(y\) asymmetry. The effective repulsive interaction with the system edges is related to the energy barriers for vortex entrance and expulsion at sufficiently high external flux [174; 218; 219; 220; 221; 222]. In summary, this section illustrates both a tunability of the CV shape due to mesoscopic confinement, and most importantly that despite these CV shape changes, the experimental signatures in the LDOS are robust at all subgap energies and do not rely on the overall rotation symmetry of the system.
## VII Non-circular Fermi Surfaces
So far, we have assumed a circular and electron-doped FS as in our previous work [136]. Here we show that our main results and conclusions do not depend on the shape of the FS or particular doping level. In particular, we consider FSs formed in a hole doped material and with weak to strong deviation from a circular shape, and also with anisotropy between \(k_{x}\)- and \(k_{y}\)-momentum directions, to further mimic possible broken symmetries in the normal state. In particular, we parametrize a non-circular FS via the momentum \(\mathbf{k}=k_{x}\hat{k}_{x}+k_{y}\hat{k}_{y}\) through the normal-state dispersion \(\epsilon_{\mathbf{k}}\) on a square lattice
\[\epsilon_{\mathbf{k}}= -2t[(1+\alpha_{xy})\cos(k_{x}a_{0})+(1-\alpha_{xy})\cos(k_{y}a_{ 0})]\] \[-4t^{\prime}\cos(k_{x}a_{0})\cos(k_{y}a_{0})\] \[-2t^{\prime\prime}[(1+\alpha_{xy})\cos(2k_{x}a_{0})+(1-\alpha_{ xy})\cos(2k_{y}a_{0})], \tag{20}\]
in terms of the lattice constant \(a_{0}\), nearest-neighbor hopping \(t>0\) (which we use as a natural unit for all tight-binding energies), next-nearest neighbor hopping \(t^{\prime}\), next-next-nearest neighbor hopping \(t^{\prime\prime}\), and anisotropy \(\alpha_{xy}\) between \(k_{x}\) and \(k_{y}\) [239]. We here consider four different tight-binding models taken from the literature [240; 241; 147], labeled as FS #1 to #4, defined in Table 1 and illustrated in Fig. 8. Here, the non-circular FS leads to a modified \(\mathbf{v}_{\text{F}}(\mathbf{p}_{\text{F}})\) entering the Eilenberger equation (2), thus modifying the propagators and all other quantities defined in Sec. II correspondingly. See Ref. [193] for further details on parametrizing such a microscopic tight-binding Fermi surface within the quasiclassical theory of superconductivity. We further note that all of these FSs are hole-doped, corresponding to being centered around \((k_{x},k_{y})=(\pi/a_{0},\pi/a_{0})\), and show either a four-fold (FSs #1, #2, #4) or two-fold (FS #3) discrete rotational symmetry. The deviation from a circular shape increases from weak to strong in going from FS #1 to FS #4. In contrast, an electron-doped FS is centered around \((k_{x},k_{y})=(0,0)\).
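For concreteness, the following sketch evaluates Eq. (20) for the parameter sets of Table 1 on a momentum grid and extracts an approximate FS and Fermi velocity. Treating the FS as the zero contour of \(\epsilon_{\mathbf{k}}-\mu\) (i.e., measuring the dispersion from the chemical potential listed in Table 1), as well as the grid resolution, are assumptions of this illustration and not part of the actual quasiclassical parametrization of Ref. [193].

```python
import numpy as np

# Tight-binding parameters from Table 1 (in units of t), FS #1 to #4.
FS_PARAMS = {
    1: dict(tp=-0.250, tpp=0.000, mu=0.000,  axy=0.0),
    2: dict(tp=-0.437, tpp=0.034, mu=-1.203, axy=0.0),
    3: dict(tp=-0.437, tpp=0.034, mu=-1.203, axy=0.1),
    4: dict(tp=-0.495, tpp=0.156, mu=-1.267, axy=0.0),
}

def dispersion(kx, ky, tp, tpp, mu, axy, t=1.0, a0=1.0):
    """Eq. (20), measured from the chemical potential mu.
    Assumption of this sketch: the FS is the zero contour of eps_k - mu."""
    eps = (-2 * t * ((1 + axy) * np.cos(kx * a0) + (1 - axy) * np.cos(ky * a0))
           - 4 * tp * np.cos(kx * a0) * np.cos(ky * a0)
           - 2 * tpp * ((1 + axy) * np.cos(2 * kx * a0) + (1 - axy) * np.cos(2 * ky * a0)))
    return eps - mu

# Evaluate on a momentum grid and estimate the Fermi velocity v_F = grad_k(eps_k).
k = np.linspace(-np.pi, np.pi, 401)
KX, KY = np.meshgrid(k, k, indexing="ij")
E = dispersion(KX, KY, **FS_PARAMS[3])
vFx, vFy = np.gradient(E, k, k)       # dE/dk_x, dE/dk_y in units of t*a_0
near_fs = np.abs(E) < 0.02            # crude mask around the E = 0 contour
print(f"FS #3: {near_fs.sum()} grid points within |E| < 0.02 t of the Fermi surface")
```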
We show the antiparallel (parallel) CV computed with these FSs in odd (even) rows in Fig. 9 for an octagonal sample and in Fig. 10 for a square sample. In all figures, the columns show from left to right the chiral order parameter amplitudes, charge-current density, and LDOS at different subgap energies \(\varepsilon\). Overall, we find that both types of CVs show traces of the underlying symmetry of the FS, which can be explained in terms of
\begin{table}
\begin{tabular}{c|c|c|c|c}
FS & \(t^{\prime}\) & \(t^{\prime\prime}\) & \(\mu\) & \(\alpha_{xy}\) \\ \hline
\#1 & \(-0.250t\) & \(0\) & \(0\) & \(0\) \\
\#2 & \(-0.437t\) & \(0.034t\) & \(-1.203t\) & \(0\) \\
\#3 & \(-0.437t\) & \(0.034t\) & \(-1.203t\) & \(0.1\) \\
\#4 & \(-0.495t\) & \(0.156t\) & \(-1.267t\) & \(0\) \\
\end{tabular}
\end{table}
Table 1: Parametrization of tight-binding Fermi surfaces (FSs) for the normal-state dispersion in Eq. (20) with nearest neighbor hopping \(t\), next-nearest neighbor hopping \(t^{\prime}\), next-next nearest neighbor hopping \(t^{\prime\prime}\), chemical potential \(\mu\), and hopping anisotropy \(\alpha_{xy}\). Resulting FSs are illustrated in Fig. 8.
Figure 8: Normal-state band structures for the non-circular tight-binding hole-doped FSs defined in Table 1. Colors indicate band energy \(E\), solid lines denote the FS (\(E=0\)), with arrows indicating the Fermi velocity \(\mathbf{v}_{\text{F}}(\mathbf{p}_{\text{F}})\) used as input in the quasiclassical parametrization [193].
higher-order harmonics superimposed on the CV as discussed in Sec. II.3 and studied for non-circular samples in Sec. VI, but here they instead originate from the FS. For example, a four-fold rotational symmetry of the FS (or sample) leads to a corresponding four-fold rotational symmetry with nodes and kinks developing in the CV. Similarly, an anisotropic FS with two-fold rotational symmetry (FS #3) deforms the otherwise circular CV into an ellipse, due to suppression (enhancement) of \(\mathbf{v}_{\mathrm{F}}\) along \(k_{y}\) (\(k_{x}\)), as illustrated in Fig. 8(c). Interestingly, the elliptical deformation occurs along opposite directions for the antiparallel and parallel CVs, which we interpret to be due to opposite signs of \(\mathbf{v}_{\mathrm{F}}(\mathbf{p}_{\mathrm{F}})\cdot\mathbf{p}_{\mathrm{s}}(\mathbf{R})\) for the two CVs, which can be traced back to opposite signs of \(\mathbf{A}\) (i.e. opposite external field directions) entering \(\mathbf{p}_{\mathrm{s}}\) in Eq. (13). Furthermore, we note here that FS #4 is very distorted compared to a circular FS, leading to correspondingly strong distortions in the CV.
Despite these symmetry-breaking terms in the FS causing distortion of the CV, we find that both CV
Figure 9: CVs in an octagon-shaped sample. Different rows show the different FSs defined in Table 1, FS #1 to FS #4, with antiparallel (parallel) CV in odd (even) rows. Columns, from left to right, show magnitude of the chiral order parameter components, charge-current density, and LDOS at different fixed subgap energies. Here, \(\Delta_{+}\) is the dominant bulk chirality with \(T=0.1T_{c}\), \(\lambda_{0}=80\xi_{0}\), and with \(\Phi_{\mathrm{ext}}=\pm 8\Phi_{0}\) for antiparallel and parallel CVs respectively.
solutions are always robust, and the important asymmetry remains clear in the LDOS. Specifically, the CV with antiparallel vorticity and chirality (odd rows) generates convex and concentric lines in the LDOS, from the axisymmetric angular momentum and superflow. In contrast, the CV with parallel vorticity and chirality (even rows) always generates characteristic concave LDOS patterns due to the multiple phase winding centers which are non-overlapping (i.e. axial symmetry-breaking). Moreover, the emergent discrete rotational symmetry and interweaving resonances at higher energies are a direct experimental signature of the quantized vorticity and Chern number, since they trace back to the winding superposition \(p=m+2M\), which is robust due to quantization and topology [136]. These results establish that the signatures of CVs are robust even for a highly anisotropic FS, reflecting broken symmetries of the normal state.
## VIII Non-degenerate nodal components
In all other sections and our previous work [136] we assumed degeneracy between the two nodal \(d\)-wave pairing symmetries, \(d_{x^{2}-y^{2}}\) and \(d_{xy}\), such that their transition temperatures are the same. Such an exact degeneracy is
Figure 10: Same as Fig. 9 but for a square-shaped sample.
experimentally relevant: it is enforced by symmetry and group theory in any material with a three- or six-fold rotationally symmetric lattice [17]. This includes triangular, hexagonal, and honeycomb materials and is as such guaranteed in many of the materials currently proposed as chiral \(d\)-wave superconductors [105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118]. Still, for the sake of completeness, we show in this section that our main results and conclusions also hold for systems where this degeneracy is broken. Specifically, we consider non-degenerate pairing interactions modeled by different coupling constants resulting in different transition temperatures, quantified by the ratio
\[\alpha\equiv\frac{T_{c}^{d_{xy}}}{T_{c}^{d_{x^{2}-y^{2}}}}\in[0,1]. \tag{21}\]
Hence, we set the \(d_{xy}\)-component to be subdominant for all \(\alpha<1\), resulting also in different bulk amplitudes \(|\Delta_{d_{xy}}|<|\Delta_{d_{x^{2}-y^{2}}}|\). However, apart from inserting these different coupling strengths, we do not constrain the order parameter components in any way, and solve for both of them completely self-consistently. For example, performing self-consistent calculations without any vorticity, we still find that the chiral \(d\)-wave state is the ground state even for highly non-degenerate systems with \(\alpha<0.8\), thus surviving a strong suppression of the \(d_{xy}\)-component. Notably, such a state is still fully gapped in the bulk, with a Chern number \(M=\pm 2\). Hence, antiparallel versus parallel superposition of vorticity and chirality in a CV is still possible.
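As a rough, schematic illustration of how a given \(T_{\rm c}\) ratio \(\alpha\) can be translated into two different coupling constants, one can invert the standard weak-coupling estimate \(T_{\rm c}\approx 1.13\,E_{\rm c}\,e^{-1/\lambda}\); the common cutoff energy \(E_{\rm c}\) and its numerical value below are purely illustrative assumptions and not parameters of our calculations.

```python
import numpy as np

def coupling_from_tc(tc, cutoff):
    """Invert the weak-coupling BCS estimate Tc ~ 1.13 * cutoff * exp(-1/lambda).
    Assumption: both pairing channels share the same (illustrative) energy cutoff."""
    return 1.0 / np.log(1.13 * cutoff / tc)

Tc_x2y2 = 1.0        # Tc of the dominant d_{x^2-y^2} channel, used as the unit
cutoff = 50.0        # illustrative cutoff energy in units of Tc_x2y2 (an assumption)

for alpha in (1.0, 0.99, 0.9, 0.8):
    lam_dom = coupling_from_tc(Tc_x2y2, cutoff)
    lam_sub = coupling_from_tc(alpha * Tc_x2y2, cutoff)   # subdominant d_xy channel, Eq. (21)
    print(f"alpha = {alpha:4.2f}: lambda_dom = {lam_dom:.4f}, lambda_sub = {lam_sub:.4f}")
```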
In order to investigate the effects on the CV from non-degenerate \(d\)-wave nodal components, we begin by summarizing the scenario of full degeneracy (\(\alpha=1\)) studied so far. Here, both the axially symmetric CV with antiparallel vorticity and chirality, and the axial symmetry-breaking CV with parallel vorticity and chirality, are extremely robust solutions over a large range of temper
Figure 11: CVs with broken degeneracy between the two nodal order parameter components, quantified by the \(T_{c}\) ratio \(\alpha\) in Eq. (21), in a system with dominant bulk chirality \(\Delta_{+}\) at \(T=0.1T_{c}\), \(\lambda_{0}=80\xi_{0}\), and \(\mathcal{R}=25\xi_{0}\). Antiparallel (parallel) CVs in odd (even) rows corresponding to \(\Phi_{\text{ext}}=+8\Phi_{0}\) (\(\Phi_{\text{ext}}=-8\Phi_{0}\)).
atures and flux. Notably, for degenerate nodal \(d\)-wave components in a disc-shaped system and FS, the total superconducting order parameter has full rotation symmetry for the antiparallel CV, and thus physical properties such as currents and magnetic fields generally do not reflect the four-fold symmetry of the individual nodal components. However, as the degeneracy between the nodal components is broken, it is reasonable to expect that the nodal four-fold rotational symmetry will be imprinted also on the antiparallel CV.
In Fig. 11 we study antiparallel and parallel CVs starting from weak non-degeneracy \(\alpha=0.99\) (top two rows) and continuing to strong non-degeneracy \(\alpha=0.8\) (lowest two rows). For this full range of asymmetry, we find that both CVs are still very robust, but over a slightly narrower range of flux. By decreasing \(\alpha\) we find that the broken degeneracy and four-fold nodal symmetry become more apparent in the CV, as expected. For example, along the domain wall, the suppression of \(\Delta_{d_{x^{2}-y^{2}}}\) (\(\Delta_{d_{xy}}\)) now occurs over a smaller (larger) region, as compared to \(\alpha=1\). As \(\alpha\) is further reduced, the dominant \(\Delta_{d_{x^{2}-y^{2}}}\) covers nearly the whole domain wall, except at four isolated points. Consequently, these points become the only locations in the domain wall where \(\Delta_{d_{xy}}\) is finite. The fractional vortices are however still well-separated, and the CV structure is notably still intact. This spatial structure of the individual nodal components leads to signatures also in all other quantities, including the chiral order parameter components, as well as the currents, induced flux (not shown here), and LDOS. Still, the results in Fig. 11 show that the overall conclusions and experimental signatures established in the rest of the work for the degenerate case remain robust and clear also with a strong asymmetry between the two nodal \(d\)-wave components. In particular, the LDOS for the antiparallel CV keeps its overall concentric and convex circular lines due to the order parameters and currents also exhibiting such a profile, with non-degeneracy only turning the circles more square-like. Meanwhile, the parallel CV still shows concave octagonal structures, with eight-fold interweaving resonances at higher energy due to non-trivial additional phase winding in the subdominant chirality (but now overlapped with a strong four-fold structure). Thus the antiparallel CV keeps the axisymmetry, while the parallel CV does not, just as established in the rest of the results. This robustness in the different LDOS patterns between the two CVs is expected: after all, the LDOS patterns stem directly from a positive versus negative superposition of the quantized and topologically protected Chern number (OAM from chirality) and vorticity (c.m. angular momentum), as introduced in Sec. II.3. Thus, as long as there is a chiral state, the positive versus negative superposition generates completely different scenarios, but possibly superimposed with higher-order contributions, in this case due to the additional broken nodal degeneracy.
Finally, for completeness, let us address the extreme limit of non-degeneracy, although this is not expected for stable chiral \(d\)-wave superconductors and thus not of central importance here. Eventually, the local variation of the current leads to a modified line tension, modifying the stability. Indeed, as \(\alpha\to 0.6\), the parameter-space region of stable CVs shrinks rapidly. At some point, the CV also becomes unstable and multiple regular Abrikosov vortices are instead stabilized. We define the critical ratio where this occurs as \(\alpha^{*}(T,\Phi_{\rm ext},\lambda_{0},\mathcal{R})\), which may depend on all parameters such as temperature, flux, penetration depth, and system size. The full parameter space is of course far beyond the scope of the present work, but we consider a subset of the parameter space for illustrative purposes. For example, at fixed temperature \(T=0.1T_{\rm c}\), external flux \(|\Phi_{\rm ext}|=8\Phi_{0}\), and \(\lambda_{0}=80\xi_{0}\), we find that the antiparallel CV is unstable below \(\alpha^{*}\approx 0.70\) at \(\mathcal{R}=25\xi_{0}\), \(\alpha^{*}\approx 0.60\) at \(\mathcal{R}=50\xi_{0}\), and \(\alpha^{*}\approx 0.55\) at \(\mathcal{R}=75\xi_{0}\), while the parallel CV is unstable below \(\alpha^{*}\approx 0.73\) at \(\mathcal{R}=25\xi_{0}\), \(\alpha^{*}\approx 0.64\) at \(\mathcal{R}=50\xi_{0}\), and \(\alpha^{*}\approx 0.61\) at \(\mathcal{R}=75\xi_{0}\). The CV is thus less stable in smaller systems, especially for increased non-degeneracy. We attribute this to the fact that smaller systems exhibit significant overlap between opposite system edges where both of the nodal components are suppressed, as well as between the CV and the system edge. This suppression is naturally enhanced by the non-degeneracy. As a result, the chiral state competes with both the normal state and a nodal \(d\)-wave state, which effectively hampers the formation of the chiral state, and consequently also the formation of domain walls and CVs.
## IX Non-magnetic impurities
Our earlier work [136] demonstrated that the LDOS signatures of the CV are robust under the inclusion of a phenomenological energy broadening \(\delta\) of the spectrum. Such an energy broadening can be caused by e.g. disorder, impurity scattering, fluctuations, or interfaces with finite transparency [242; 243; 244; 245; 246; 247]. Here we exemplify this by studying dirty systems with non-magnetic impurities, and show that the CV as well as the LDOS signatures, are robust.
We model the non-magnetic impurities using the well-established \(t\)-matrix approach within the quasiclassical theory of superconductivity [248; 178]. The diagonal impurity self-energy from Eq. (4) then takes the form
\[\hat{\Sigma}_{\rm imp}({\bf p}_{\rm F},{\bf R};z)= n_{i}\hat{t}({\bf p}_{\rm F},{\bf p}_{\rm F}^{\prime}\to{\bf p}_{ \rm F},{\bf R};z), \tag{22}\]
with dilute impurity concentration \(n_{i}\) and impurity-scattering matrix \(\hat{t}\) fulfilling the additional self-consistency equation
\[\hat{t}({\bf p}_{\rm F},{\bf p}_{\rm F}^{\prime},{\bf R};z)= N_{\rm F}\Big{\langle}\hat{u}_{0}({\bf p}_{\rm F},{\bf p}_{\rm F}^{ \prime\prime};z)\hat{g}({\bf p}_{\rm F}^{\prime\prime},{\bf R};z)\] \[\times\hat{t}({\bf p}_{\rm F}^{\prime\prime},{\bf p}_{\rm F}^{ \prime},{\bf R};z)\Big{\rangle}_{{\bf p}_{\rm F}^{\prime\prime}}+\hat{u}_{0}({ \bf p}_{\rm F},{\bf p}_{\rm F}^{\prime};z), \tag{23}\]
with scattering potential \(\hat{u}_{0}(\mathbf{p}_{\mathrm{F}},\mathbf{p}_{\mathrm{F}}^{\prime};z)\). Equation (23) results from a diagrammatic expansion describing multiple scattering of quasiparticles and pairs by an impurity, connecting different scattering channels with momenta \(\mathbf{p}_{\mathrm{F}}\) and \(\mathbf{p}_{\mathrm{F}}^{\prime}\) on the FS (here integrated over the momentum \(\mathbf{p}_{\mathrm{F}}^{\prime\prime}\)). Here we assume equilibrium, a non-crossing approximation, and an isotropic scattering potential, such that \(\hat{u}_{0}(\mathbf{p}_{\mathrm{F}},\mathbf{p}_{\mathrm{F}}^{\prime};z)=u_{0} \hat{1}\), yielding
\[\hat{\Sigma}_{\mathrm{imp}}(\mathbf{R};z)=\Gamma_{u}\frac{\sin \delta_{0}\cos\delta_{0}\hat{1}+\sin^{2}\delta_{0}\left\langle\frac{1}{\pi} \hat{g}(\mathbf{p}_{\mathrm{F}},\mathbf{R};z)\right\rangle_{\mathbf{p}_{ \mathrm{F}}}}{\cos^{2}\delta_{0}\hat{1}-\sin^{2}\delta_{0}\left(\frac{1}{\pi} \left\langle\hat{g}(\mathbf{p}_{\mathrm{F}},\mathbf{R};z)\right\rangle_{ \mathbf{p}_{\mathrm{F}}}\right)^{2}}, \tag{24}\]
with scattering energy \(\Gamma_{u}=n_{i}/(\pi N_{\mathrm{F}})\) and scattering phase shift \(\delta_{0}=\arctan(\pi u_{0}N_{\mathrm{F}})\). We solve Eq. (24) self-consistently, together with the gap equation and Maxwell's equation. We define the "pair-breaking energy" as \(\Gamma=\Gamma_{u}\sin^{2}\delta_{0}\), related to the normal-state mean-free-path \(l=\hbar v_{\mathrm{F}}/(2\Gamma)\). We consider two extreme limits, namely the weak-scattering Born limit (\(\delta_{0}\to 0\) and \(\Gamma_{u}\to\infty\), such that \(\Gamma\) is constant) and the strong-scattering unitary limit (\(\delta_{0}\to\pi/2\) and \(u_{0}\to\infty\), such that \(\Gamma=\Gamma_{u}\)). In these limits, the equilibrium solutions simplify to
\[\hat{\Sigma}_{\mathrm{imp}}^{\mathrm{Born}}(\mathbf{R};z) =\frac{\Gamma}{\pi}\langle\hat{g}(\mathbf{p}_{\mathrm{F}},\mathbf{ R};z)\rangle_{\mathbf{p}_{\mathrm{F}}}, \tag{25}\] \[\hat{\Sigma}_{\mathrm{imp}}^{\mathrm{unitary}}(\mathbf{R};z) =-\pi\Gamma\frac{\langle\hat{g}(\mathbf{p}_{\mathrm{F}},\mathbf{ R};z)\rangle_{\mathbf{p}_{\mathrm{F}}}}{\langle\hat{g}^{2}(\mathbf{p}_{ \mathrm{F}},\mathbf{R};z)\rangle_{\mathbf{p}_{\mathrm{F}}}}, \tag{26}\]
respectively. We vary the scattering energy over orders of magnitude, \(\gamma_{u}\equiv\Gamma/(2\pi k_{\mathrm{B}}T_{c})\in[0.002,0.1]\). By comparison, the zero-temperature bulk gap is roughly \(|\Delta_{0}|/(2\pi k_{\mathrm{B}}T_{c})\approx 0.280\). We still use a phenomenological broadening \(\delta/(2\pi k_{\mathrm{B}}T_{c})=0.0005\) to avoid divergent LDOS for small \(\gamma_{u}\), but this value is an order of magnitude smaller than used in the rest of this work, \(\delta/(2\pi k_{\mathrm{B}}T_{c})=0.005\).
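For orientation, a minimal numpy sketch of the two limiting self-energies in Eqs. (25) and (26) is given below; the FS-averaged propagators are stand-in \(2\times 2\) Nambu matrices with made-up entries, and treating the quotient in the unitary limit as multiplication by a matrix inverse is an assumption of this sketch rather than a statement about our implementation.

```python
import numpy as np

def sigma_born(g_fs_avg, Gamma):
    """Born-limit impurity self-energy, Eq. (25): Sigma = (Gamma/pi) <g>_pF."""
    return (Gamma / np.pi) * g_fs_avg

def sigma_unitary(g_fs_avg, g2_fs_avg, Gamma):
    """Unitary-limit impurity self-energy, Eq. (26): Sigma = -pi*Gamma <g> / <g^2>.
    Assumption: the quotient of Nambu matrices is taken as <g> @ inv(<g^2>)."""
    return -np.pi * Gamma * g_fs_avg @ np.linalg.inv(g2_fs_avg)

# Toy example with a schematic 2x2 Nambu matrix standing in for <g>_pF (illustrative numbers).
g_avg = np.array([[-1.2j, 0.4], [-0.4, 1.2j]])
g2_avg = g_avg @ g_avg        # here <g^2> is approximated by <g>^2 purely for illustration
Gamma = 0.05                  # pair-breaking energy in units of 2*pi*k_B*T_c

print("Born limit:\n", sigma_born(g_avg, Gamma))
print("Unitary limit:\n", sigma_unitary(g_avg, g2_avg, Gamma))
```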
Figures 12 and 13 show the resulting LDOS in the presence of non-magnetic impurities for an antiparallel and parallel CV, respectively, with left and right columns showing the Born and unitary limits, respectively, with the figure panels (c-d) showing line-cuts across the CV at zero energy and panels (e-f) showing the LDOS at the domain wall. As expected, the LDOS peaks are broadened when increasing the scattering energy, eventually becoming almost completely broadened for \(\gamma_{u}\to 0.1\) as indicated by red lines in (c-f). This result is expected because such strong \(\gamma_{u}\) is comparable with the bulk gap.
Figure 12: LDOS for an antiparallel CV in a system with non-magnetic impurities, with dominant bulk chirality \(\Delta_{+}\), \(T=0.1T_{c}\), \(\Phi_{\mathrm{ext}}=8\Phi_{0}\), and \(\lambda_{0}=80\xi_{0}\). Left (right) column corresponds to the Born limit (unitary limit) with scattering phase shift \(\delta_{0}=0\) (\(\delta_{0}=\pi/2\)). (a,b) Zero-energy LDOS. (c,d) Zero-energy LDOS across horizontal dashed line in (a,b). (e,f) LDOS in the domain wall. Line colors in (c-f) denote \(\gamma_{u}\equiv\Gamma/(2\pi k_{\mathrm{B}}T_{c})\).
The broadening is also naturally not as strong in the Born limit (left) as in the unitary limit (right). Consequently, panels (a,c) illustrate that both CVs are strongly distinguishable in the spatially resolved LDOS even for \(\gamma_{u}=0.1\) in the Born limit, while panels (b,d) show that both CVs are distinguishable at \(\gamma_{u}=0.05\) in the unitary limit. We note that the peak at the disc center in Fig. 12 is a resonance related to the perfect rotation symmetry [136].
Finally, we note that the antiparallel CV radius \(\mathcal{R}_{\text{CV}}\) increases slightly, by \(1\xi_{0}\) (\(2\xi_{0}\)), as \(\gamma_{u}\) changes from \(0.002\) to \(0.1\) for the Born (unitary) limit, as indicated in Figs. 12(c,d). For the parallel CV, the increase in \(\mathcal{R}_{\text{CV}}\) is even smaller, about \(0.5\xi_{0}\), see Figs. 13(c,d). We expect \(\mathcal{R}_{\text{CV}}\) to increase more significantly with \(\gamma_{u}\) in systems where \(\mathcal{R}\geq\lambda_{0}\). This is because non-magnetic impurities generally increase the penetration depth \(\lambda_{0}\) [225], which in turn increases \(\mathcal{R}_{\text{CV}}\) as shown in Sec. IV. Here, in contrast, \(\lambda_{0}=80\xi_{0}\) is much larger than \(\mathcal{R}\), explaining the small variation in \(\mathcal{R}_{\text{CV}}\). In summary, we find that the LDOS signatures of the Chern number, superconducting pairing symmetry, and chirality are robust against strong (moderate) scattering energy in the Born (unitary) limit, despite a corresponding broadening of the LDOS peaks. Furthermore, we find that the CV itself is very robust in all cases considered. This demonstrates the viability of CVs and their signatures for identifying chiral superconductivity also in dirty systems.
## X Conclusions
In this work, we show a strong tunability of CVs in spin-singlet chiral \(d\)-wave superconductors, as well as a robustness of their experimental signature for a large range of material models, parameter regimes, perturbations, anisotropy, and disorder.
In terms of tunability, we find that the finite size of the CV is balanced by the attractive and repulsive interactions exerted by its domain-wall currents and fractional vortices, respectively. Thus, we show that the overall size is easily tuned directly by changing an externally applied magnetic flux and the temperature, but also depends on the system size and penetration depth, the latter generally tunable by artificially adding impurities. We also find that the overall shape is tunable and deforms in an anisotropic environment, e.g. due to other vortices, an irregular system shape, or an anisotropic FS.
For the experimental signatures, our earlier work established that the LDOS hosts distinct signatures for the two inequivalent CVs, with antiparallel or parallel chirality and vorticity, and that this can be used to clearly identify chirality, Chern number, and even the superconducting pairing symmetry [136]. More specifically, the antiparallel CV is axisymmetric with a continuous rotation symmetry, associated with LDOS peaks appearing as concentric and convex circular lines. The parallel CV spontaneously breaks axial symmetry, generating additional winding centers outside the CV, deforming its shape into a concave structure with discrete rotation symmetry, directly related to the Chern winding number. At zero energy (bias voltage), the LDOS directly probes the domain wall structure of the CV and its overall rotation symmetry and thereby the Chern number. At higher energies, there are additional interweaving resonances between the additional winding centers, even more clearly exhibiting the rotation symmetry and Chern number. This forms strong experimental signatures, directly measurable with STS and STM. In this work we establish that all of these signatures are robust for a large range of possible perturbations, system changes, and model changes. In particular, we demonstrate how the results hold in systems with incommensurate or no rotation symmetry, at strong confinement, for both electron-doped and hole-doped FSs or anisotropic FSs, non-degenerate nodal \(d\)-wave components, as well as when non-magnetic impurities are present. We also find robustness for large ranges of temperatures, external flux strength, penetration depths, and system sizes, as well as when additional Abrikosov vortices are present.
In conclusion, our work establishes CVs as a tunable and robust experimental signature of spin-singlet chiral \(d\)-wave superconductivity, which furthermore provide a platform to study fractional vortices.
## XI Acknowledgements
We thank R. Arouca, T. Löthman, M. Fogelström, A. B. Vorontsov and E. Babaev for valuable discussions. We acknowledge M. Fogelström, T. Löfwander, M. Håkansson, O. Shevtsov, and P. Stadler for their work on SuperConga. We acknowledge financial support from the European Research Council (ERC) under the European Union Horizon 2020 research and innovation programme (ERC-2017-StG-757553) and the Knut and Alice Wallenberg Foundation through the Wallenberg Academy Fellows program. The computations were enabled by the supercomputing resource Berzelius provided by National Supercomputer Centre (NSC) at Linköping University and the Knut and Alice Wallenberg foundation. Additional computations were enabled by resources provided by the National Academic Infrastructure for Supercomputing in Sweden (NAISS) and the Swedish National Infrastructure for Computing (SNIC) at C3SE, HPC2N, and NSC, partially funded by the Swedish Research Council through grant agreements No. 2022-06725 and No. 2018-05973.
## Appendix A Coreless vortices in larger systems and opposite chirality
In this appendix we show that the qualitative CV features studied in Sec. III remain for much larger systems where the influence of the boundary becomes negligible,
and in systems with opposite dominant bulk chirality \(\Delta_{-}\). In particular, Fig. 14 shows various quantities for a parallel CV in a disc with radius \(\mathcal{R}=150\xi_{0}\), dominant bulk chirality \(\Delta_{-}\), external flux \(\Phi_{\rm ext}=8\Phi_{0}\), temperature \(T=0.1T_{\rm c}\), and penetration depth \(\lambda_{0}=80\xi_{0}\), to be compared to Fig. 2. There are still four well-separated fractional vortices in each nodal component, an octagonal-shaped domain wall in the chiral amplitudes, and integer phase windings \(m=-4\) in the dominant phase \(\chi_{-}\) and \(p=-8\) in the subdominant phase \(\chi_{+}\) (reversed signs due to reversed bulk chirality). Furthermore, the currents still show the same overall spatial profile and number of sign changes. Interestingly, the additional \(p=-8\) phase windings in the subdominant phase \(\chi_{+}\) remain at a small distance outside the CV. We note that for an antiparallel CV in such a large system, the \(\pi\)-shift in the dominant chirality instead remains closer to the edge (not shown here).
## Appendix B Additional results: interaction with Abrikosov vortices
This appendix contains additional numerical results for the interaction between CVs and Abrikosov vortices. In particular, Fig. 15 shows the order parameter amplitudes and phase windings for the same system as in Fig. 6, i.e. this is the analogue of Fig. 4 but for a parallel (symmetry-broken) CV caused by negative external flux. Importantly, we note that despite all the additional Abrikosov vortices generating phase windings that overlap with the \(p=8\) winding centers of the CV, the latter still generates the distinct shape of the parallel CV studied in the rest of the work.
Figure 14: Parallel CV in a disc-shaped system with radius \(\mathcal{R}=150\xi_{0}\), dominant chirality \(\Delta_{-}\), \(T=0.1T_{\rm c}\), \(\Phi_{\rm ext}=8\Phi_{0}\), \(\lambda_{0}=80\xi_{0}\), \(b_{0}\equiv 10^{-4}B_{0}\). First row: amplitudes and phases of the nodal components, second row: same but for chiral components, third row: charge-current density magnitude and \(x,y\)-components, as well as induced magnetic-flux density. To be compared with Fig. 2.
|
2305.18203
|
Concept Decomposition for Visual Exploration and Inspiration
|
A creative idea is often born from transforming, combining, and modifying
ideas from existing visual examples capturing various concepts. However, one
cannot simply copy the concept as a whole, and inspiration is achieved by
examining certain aspects of the concept. Hence, it is often necessary to
separate a concept into different aspects to provide new perspectives. In this
paper, we propose a method to decompose a visual concept, represented as a set
of images, into different visual aspects encoded in a hierarchical tree
structure. We utilize large vision-language models and their rich latent space
for concept decomposition and generation. Each node in the tree represents a
sub-concept using a learned vector embedding injected into the latent space of
a pretrained text-to-image model. We use a set of regularizations to guide the
optimization of the embedding vectors encoded in the nodes to follow the
hierarchical structure of the tree. Our method allows to explore and discover
new concepts derived from the original one. The tree provides the possibility
of endless visual sampling at each node, allowing the user to explore the
hidden sub-concepts of the object of interest. The learned aspects in each node
can be combined within and across trees to create new visual ideas, and can be
used in natural language sentences to apply such aspects to new designs.
|
Yael Vinker, Andrey Voynov, Daniel Cohen-Or, Ariel Shamir
|
2023-05-29T16:56:56Z
|
http://arxiv.org/abs/2305.18203v2
|
# Concept Decomposition for Visual Exploration and Inspiration
###### Abstract
A creative idea is often born from transforming, combining, and modifying ideas from existing visual examples capturing various concepts. However, one cannot simply copy the concept as a whole, and inspiration is achieved by examining certain aspects of the concept. Hence, it is often necessary to separate a concept into different aspects to provide new perspectives. In this paper, we propose a method to decompose a visual concept, represented as a set of images, into different visual aspects encoded in a hierarchical tree structure. We utilize large vision-language models and their rich latent space for concept decomposition and generation. Each node in the tree represents a sub-concept using a learned vector embedding injected into the latent space of a pretrained text-to-image model. We use a set of regularizations to guide the optimization of the embedding vectors encoded in the nodes to follow the hierarchical structure of the tree. Our method allows to explore and discover new concepts derived from the original one. The tree provides the possibility of endless visual sampling at each node, allowing the user to explore the hidden sub-concepts of the object of interest. The learned aspects in each node can be combined within and across trees to create new visual ideas, and can be used in natural language sentences to apply such aspects to new designs.
## 1 Introduction
Modeling and design are highly creative processes that often require inspiration and exploration [14]. Designers often draw inspiration from existing visual examples and concepts - either from the real world or using images [9, 16, 27]. However, rather than simply replicating previous designs, the ability to extract only certain aspects of a given concept is essential to generate original ideas. For example, in Figure 2(a), we illustrate how designers may draw inspiration from patterns and concepts found in nature.
Additionally, by combining multiple aspects from vari
ous concepts, designers are often able to create something new. For instance, it is described [8] that the famous Beijing National Stadium, also known as the "Bird's Nest", was designed by a group of architects that were inspired by various aspects of different Chinese concepts (see Figure 2b). The designers combined aspects of these different concepts - the shape of a nest, porous Chinese scholar stones, and cracks in glazed pottery art that is local to Beijing, to create an innovative architectural design. Such a design process is highly exploratory and often unexpected and surprising.
The questions we tackle in this paper are: can a machine assist humans in such a highly creative process? Can machines understand different aspects of a given concept, and provide inspiration for modeling and design? Our work explores the ability of large vision-language models to do just that - express various concepts visually, decompose them into different aspects, and provide almost endless examples that are inspiring and sometimes unexpected.
We rely on the rich semantic and visual knowledge hidden in large language-vision models. Recently, these models have been used to perform personalized text-to-image generation [11, 36, 25], demonstrating unprecedented quality of concept editing and variation. We extend the idea presented in [11] to allow _aspect-aware_ text-to-image generation, which can be used to visually explore new ideas derived from the original concept.
Our approach involves (1) decomposing a given visual concept into different aspects, creating a hierarchy of sub-concepts, (2) providing numerous image instances of each learned aspect, and (3) allowing exploration of combinations of aspects within the concept and across different concepts.
We model the exploration space using a binary tree, where each node in the tree is a newly learned vector embedding in the textual latent space of a pretrained text-to-image model, representing different aspects of the original concept. A tree provides an intuitive structure to separate and navigate the different aspects of a given concept. Each level reveals more aspects of the concepts in the previous level. In addition, each node by itself contains a plethora of samples and can be used for exploration. For example, in Figure 1, the original concept is first decomposed into its dominant semantic aspects: the wooden saucer in "v1" and the bear drawing in "v2"; next, the bear drawing is further separated into the general concept of a bear in "v3" and its unique texture in "v4".
Given a small set of images depicting the concept of interest as input, we build the tree gradually. For each node, we optimize two child nodes at a time to match the concept depicted in their parent. We also utilize a CLIP-based [31] consistency measurement, to ensure that the concepts depicted in the nodes are coherent and distinct. The different aspects are learned _implicitly_, without any external constraint regarding the type of separation (such as shape or texture). As a result, unexpected concepts can emerge in the process and be used as inspiration for new design ideas. For
Figure 3: Combining the learned aspects in natural sentences to produce aspect-based variations. The original concept is shown on top, along with an illustration of the chosen aspects from the tree in Figure 1. Below are three random images generated by a pre-trained text-to-image model, conditioned on the prompts above.
Figure 2: Example of design inspired by visual concepts taken from other concepts. (a) top left - fashion design by Iris Van Herpen and Chair by Emmanuel Touraine inspired by nature patterns, bottom left - the Lotus Temple in India, inspired by the lotus flower (b) Beijing National Stadium is inspired by a combination of local Chinese art forms - the crackle glazed pottery that is local to Beijing, and the heavily veined Chinese scholar stones. ©Dress by Iris van Herpen, chair by Emmanuel Touraine from Wikimedia. Lotus flower, temple, cracked pottery, scholar stone, and bird nest are from rawpixel.com [Public Domain]. Beijing National Stadium photograph by Wojtek Gurak from Flickr.
example, the learned aspects can be integrated into existing concepts by combining them in natural language sentences passed to a pretrained text-to-image model (see Figure 3). They can also be used to create new concepts by combining different aspects of the same tree (intra-tree combination) or across different trees (inter-tree combination).
We provide many visual results applied to various challenging concepts. We demonstrate the ability of our approach to find different aspects of a given concept, explore and discover new concepts derived from the original one, thereby inspiring the generation of new design ideas.
## 2 Previous Work
**Design and Modeling Inspiration.** Creativity has been studied in a wide range of fields [1, 5, 10, 22, 37], and although defining it exactly is difficult, some researchers have suggested that it can be described as the act of evoking and recombining information from previous knowledge to generate new properties [5, 47]. It is essential, however, to be able to associate ideas in order to generate original ideas rather than just mimicking prior work [6]. Experienced designers and artists are more adept at connecting disparate ideas than novice designers, who need assistance in the evocation process [5]. By reviewing many exemplars, designers are able to gain a deeper understanding of design spaces and solutions [9]. In the field of human-computer interaction, a number of studies have been conducted to develop tools and software to assist designers in the process of ideation [20, 21, 23, 24]. They are focused on providing better tools for collecting, arranging, and searching visual and textual data, often collected from the web. In contrast, our work focuses on extracting different aspects of a given visual concept and _generating_ new images for inspiration.
Our work is close to a line of work utilizing evolutionary algorithms to inspire users' creativity [3, 7, 48]. However, they mostly work in the field of 3D content generation and do not decompose different aspects from existing concepts.
**Large Language-Vision Models.** With the recent advancement of language-vision models [31] and diffusion models [33, 28, 34], the field of image generation and editing has undergone unprecedented evolution. These models have been trained on millions of images and text pairs and have been shown to be effective in performing challenging vision-related tasks [30, 38, 2, 4, 13]. Furthermore, the strong visual and semantic priors of these models have also been demonstrated to be effective for artistic and design tasks [29, 41, 42, 43, 26]. In our work, we demonstrate how these models can be used to decompose and transform existing concepts into new ones in order to inspire the development of new ideas.
**Personalization.** Personalized text-to-image generation has been introduced recently [11, 18, 25, 36], with the goal of creating novel scenes based on user-provided unique concepts. In addition to demonstrating unprecedented quality results, these technologies enabled intuitive editing, made design more accessible, and attracted interest even beyond the research community. We utilize these ideas to facilitate the ideation process of designers and common users, by learning different visual aspects of user-provided concepts.
Current personalization methods either optimize a set of embeddings to describe the concept [11], or modify the denoising network to tie a rarely used word embedding to the new concept [36]. While the latter provides more accurate reconstruction and is more robust, it uses much more memory and requires a model for each object. In this regard, we choose to rely on the approach presented in [11]. It is important to note that our goal is to capture multiple _aspects_ of the given concept, and not to improve the accuracy of reconstruction as in [15, 46, 45, 39, 12, 40].
## 3 Preliminaries
**Latent Diffusion Models.** Diffusion models are generative models trained to learn a data distribution by gradually denoising a variable sampled from a Gaussian distribution.
In our work, we use the publicly available text-to-image Stable Diffusion model [34]. Stable Diffusion is a type of a latent diffusion model (LDM), where the diffusion process is applied on the latent space of a pretrained image autoencoder. The encoder \(\mathcal{E}\) maps an input image \(x\) into a latent vector \(z\), and the decoder \(\mathcal{D}\) is trained to decode \(z\) such that \(\mathcal{D}(z)\approx x\). As a second stage, a denoising diffusion probabilistic model (DDPM) [17] is trained to generate codes within the learned latent space. At each step during training, a scalar \(t\in\{1,2,...T\}\) is uniformly sampled and used to define a noised latent code \(z_{t}=\alpha_{t}z+\sigma_{t}\epsilon\), where \(\epsilon\sim\mathcal{N}(0,I)\) and \(\alpha_{t},\sigma_{t}\) are terms that control the noise schedule, and are functions of the diffusion process time \(t\). The denoising network \(\epsilon_{\theta}\) which is based on a UNet architecture [35], receives as input the noised code \(z_{t}\), the timestep \(t\), and an optional condition vector \(c(y)\), and is tasked with predicting the added noise \(\epsilon\). The LDM loss is defined by:
\[\mathcal{L}_{LDM}=\mathbb{E}_{z\sim\mathcal{E}(x),y,\epsilon\sim\mathcal{N}(0,1),t}\left[||\epsilon-\epsilon_{\theta}(z_{t},t,c(y))||_{2}^{2}\right] \tag{1}\]
For text-to-image generation the condition \(y\) is a text input and \(c(y)\) represents the text embedding. At inference time, a random latent code \(z_{T}\sim\mathcal{N}(0,I)\) is sampled, and iteratively denoised by the trained \(\epsilon_{\theta}\) until producing a clean \(z_{0}\) latent code, which is passed through the decoder \(D\) to produce the image \(x\).
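As a schematic illustration, the following PyTorch sketch evaluates one stochastic estimate of the loss in Equation (1); `encoder`, `unet`, and `text_cond` are stand-ins for the pretrained autoencoder, denoising UNet, and text encoder, and the linear \(\alpha_{t},\sigma_{t}\) schedule is a placeholder rather than the actual Stable Diffusion noise schedule.

```python
import torch
import torch.nn.functional as F

def ldm_loss_step(encoder, unet, text_cond, x, y, T=1000):
    """One stochastic estimate of the LDM loss in Eq. (1).
    encoder(x) -> z, text_cond(y) -> c(y), unet(z_t, t, c) -> predicted noise.
    The alpha_t/sigma_t schedule below is a simple placeholder."""
    z = encoder(x)                                         # z = E(x)
    t = torch.randint(1, T + 1, (z.shape[0],), device=z.device)
    alpha_t = (1.0 - t.float() / T).view(-1, 1, 1, 1)      # placeholder schedule
    sigma_t = (t.float() / T).view(-1, 1, 1, 1)
    eps = torch.randn_like(z)
    z_t = alpha_t * z + sigma_t * eps                      # noised latent code
    eps_pred = unet(z_t, t, text_cond(y))                  # epsilon_theta(z_t, t, c(y))
    return F.mse_loss(eps_pred, eps)                       # ||eps - eps_theta||_2^2

# Toy usage with stand-in callables (identity "encoder", zero-predicting "unet"):
loss = ldm_loss_step(
    encoder=lambda x: x,
    unet=lambda z_t, t, c: torch.zeros_like(z_t),
    text_cond=lambda y: None,
    x=torch.randn(2, 4, 8, 8),
    y=["A photo of a cat"] * 2,
)
```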
We next discuss the text encoder and the inversion space.
**Text embedding.** Given a text prompt \(y\), for example "A photo of a cat", the sentence is first converted into tokens, which are indexed into a pre-defined dictionary of vector embeddings. The dictionary is a lookup table that connects each token to a unique embedding vector. After retrieving the vectors for a given sentence from the table, they are passed to a text transformer, which processes the connections between the individual words in the sentence and outputs \(c(y)\). The output encoding \(c(y)\) is then used as a condition to the UNet in the denoising process. We denote words with \(S\), and the vector embeddings from the lookup table with \(V\).
**Textual Inversion.** We rely on the general framework proposed by [11], who choose the embedding space of \(V\) as the target for inversion. They formulate the task of inversion as fitting a new word \(s^{*}\) to represent a personal concept, depicted by a small set of input images provided by the user. They extend the predefined lookup table with a new embedding vector \(v_{*}\) that is linked to \(s^{*}\). The vector \(v_{*}\) is often initialized with the embedding of an existing word from the dictionary that has some relation to the given concept, and then optimized to represent the desired personal concept. This process can be thought of as "injecting" the new concept into the vocabulary. The vector \(v_{*}\) is optimized w.r.t. the LDM loss in Equation (1) over images sampled from the input set. At each step of optimization, a random image \(x\) is sampled from the set, along with a neutral context text \(y\), derived from the CLIP ImageNet templates [32] (such as "A photo of \(s^{*}\)"). Then, the image \(x\) is encoded to \(z=\mathcal{E}(x)\) and noised w.r.t. a randomly sampled timestep \(t\) and noise \(\epsilon\): \(z_{t}=\alpha_{t}z+\sigma_{t}\epsilon\). The noisy latent image \(z_{t}\), timestep \(t\), and text embedding \(c(y)\) are then fed into a pretrained UNet model which is trained to predict the noise \(\epsilon\) applied w.r.t. the conditioned text and timestep. This way, \(v_{*}\) is optimized to describe the object depicted in the small training set of images.
## 4 Method
Given a small set of images \(I^{0}=\{I^{0}_{1}...I^{0}_{m}\}\) depicting the desired visual concept, our goal is to construct a rich visual exploration space expressing different aspects of the input concept.
We model the exploration space as a binary tree, whose nodes \(V=\{v_{1}..v_{n}\}\) are learned vector embeddings corresponding to newly discovered words \(S=\{s_{1}..s_{n}\}\) added to the predefined dictionary, representing different aspects of the original concept. These newly learned words are used as input to a pretrained text-to-image model [34] to generate a rich variety of image examples in each node. We find a binary tree to be a suitable choice for our objective, because of the ease of visualization, navigation, and the quality of the sub-concepts depicted in the nodes (see supplemental file for further analysis).
### Tree Construction
The exploration tree is built gradually as a binary tree from top to bottom, where we iteratively add two new nodes at a time. To create two child nodes, we optimize new embedding vectors according to the input image-set generated from the concept depicted in the parent node. During construction, we define two requirements to encourage the learned embeddings to follow the tree structure: (1) **Binary Reconstruction**: each pair of children nodes together should encapsulate the concept depicted by their parent node, and (2) **Coherency**: each individual node should depict a coherent concept which is distinct from its sibling. Next, we describe the loss functions and procedures designed to follow these requirements.
**Binary Reconstruction.** We use the reconstruction loss suggested in [11], with some modifications tailored to our goal. The procedure is illustrated in Figure 4 - in each optimization phase, our goal is to learn two vector embeddings \(v_{l},v_{r}\) corresponding to the left and right sibling
Figure 4: High level pipeline of the “binary reconstruction” stage. We optimize two sibling nodes \(v_{l},v_{r}\) at a time (marked in red and blue). (a) We first generate a small training set of images \(I^{p}\) depicting the concept in the parent node using a pretrained text-to-image model (T2I). At the root, we use the original set of images \(I^{0}\). (b) We then extend the existing dictionary by adding the two new vectors, initialized with the embedding of the word “object”. (c) Lastly, we optimize \(v_{l},v_{r}\) w.r.t. the LDM loss (see details in the text).
nodes, whose parent node is marked with \(v_{p}\) (illustrated in Figure 4, left). We begin by generating a new small training set of images \(I^{p}=\{I_{1}^{p}...I_{10}^{p}\}\), reflecting the concept depicted by the vector \(v_{p}\) (Figure 4a). At the root, we use the original set of images \(I^{0}\). Next, we extend the current dictionary by adding two new vector embeddings \(v_{l},v_{r}\), corresponding to the left and right children of their parent node \(v_{p}\) (Figure 4b). To represent general concepts, the newly added vectors are initialized with the embedding of the word "object". At each iteration of optimization (Figure 4c), an image \(x\) is sampled from the set \(I^{p}\) and encoded to form the latent image \(z=\mathcal{E}(x)\). A timestep \(t\) and a noise \(\epsilon\) are also sampled to define the noised latent \(z_{t}=\alpha_{t}z+\sigma_{t}\epsilon\) (marked in yellow). Additionally, a neutral context text \(y\) is sampled, containing the new placeholder words in the following form: "A photograph of \(s_{l}\) \(s_{r}\)". The noised latent \(z_{t}\) is fed to a pretrained Stable Diffusion UNet model \(\epsilon_{\theta}\), conditioned on the CLIP embedding \(c(y)\) of the sampled text, to predict the noise \(\epsilon\). The prediction loss is backpropagated w.r.t. the vector embeddings \(v_{l},v_{r}\):
\[\{v_{l},v_{r}\}=\operatorname*{arg\,min}_{v}\mathbb{E}_{z\sim\mathcal{E}(x), y,\epsilon\sim\mathcal{N}(0,1),t}\Big{[}\|\epsilon-\epsilon_{\theta}(z_{t},t,c(y) )\|_{2}^{2}\Big{]}. \tag{2}\]
This procedure encourages \(v_{l},v_{r}\) together to express the visual concept of their parent depicted in the set \(I^{p}\). Figure 5 illustrates how the two embeddings begin by representing the word "object", and gradually converge to depict two aspects of the input concept.
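As a sketch of how the two sibling embeddings can be realized in practice with a HuggingFace-style tokenizer and text encoder (the setup commonly used with Stable Diffusion), the snippet below registers two placeholder tokens, hypothetically named `<s_l>` and `<s_r>`, initializes both embedding rows from the word "object", and keeps all other rows frozen; the loss of Equation (2) is then evaluated on prompts such as "A photograph of `<s_l>` `<s_r>`". The exact API names are assumptions about the underlying libraries rather than part of our method description.

```python
import torch

def add_child_tokens(tokenizer, text_encoder, init_word="object",
                     placeholders=("<s_l>", "<s_r>")):
    """Register the two placeholder words and initialize their embedding rows from
    `init_word`. Assumes a HuggingFace-style CLIP tokenizer/text encoder; the
    placeholder strings are hypothetical names."""
    added = tokenizer.add_tokens(list(placeholders))
    assert added == len(placeholders), "placeholders must not already exist"
    text_encoder.resize_token_embeddings(len(tokenizer))

    emb = text_encoder.get_input_embeddings().weight       # the lookup table
    init_id = tokenizer.convert_tokens_to_ids(init_word)
    new_ids = tokenizer.convert_tokens_to_ids(list(placeholders))
    with torch.no_grad():
        for token_id in new_ids:
            emb[token_id] = emb[init_id].clone()            # v_l, v_r <- "object"
    return new_ids

def restore_frozen_rows(text_encoder, original_weights, trainable_ids):
    """After each optimizer step, copy back every row except v_l and v_r so that
    only the two new embeddings are effectively trained. `original_weights` is a
    copy of the (already resized) embedding matrix taken before training."""
    with torch.no_grad():
        keep = torch.ones(original_weights.shape[0], dtype=torch.bool)
        keep[trainable_ids] = False
        text_encoder.get_input_embeddings().weight[keep] = original_weights[keep]
```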
We use the timestep sampling approach proposed in Reversion [19], which skews the sampling distribution so that a larger \(t\) is assigned a higher probability, according to the following importance sampling function:
\[f(t)=\frac{1}{T}(1-\alpha\cos\frac{\pi t}{T}). \tag{3}\]
We set \(\alpha=0.5\). We find that this sampling approach improves stability and content separation. This choice is further discussed in the supplementary file.
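A small sketch of how timesteps can be drawn from the skewed distribution of Equation (3) with \(\alpha=0.5\) is given below; the total number of diffusion steps \(T=1000\) is an assumption matching the usual Stable Diffusion setting.

```python
import numpy as np

def timestep_probs(T=1000, alpha=0.5):
    """Importance-sampling distribution of Eq. (3): f(t) = (1/T) * (1 - alpha*cos(pi*t/T))."""
    t = np.arange(1, T + 1)
    f = (1.0 / T) * (1.0 - alpha * np.cos(np.pi * t / T))
    return t, f / f.sum()                 # normalize so the probabilities sum to one

t, p = timestep_probs()
samples = np.random.choice(t, size=5, p=p)   # larger t values are drawn more often
print(samples, "expected t ->", (t * p).sum())
```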
**Coherency.** The resulting pair of embeddings described above together often capture the parent concept depicted in the original images well. However, the images produced by each embedding individually may not always reflect a logical sub-concept that is coherent to the observer.
We find that such incoherent embeddings are frequently characterized by inconsistent appearance of the images generated from them, i.e., it can be difficult to identify a common concept behind them. For example, in Figure 6 the concept depicted in the set on the right is not clear, compared to the set of images on the left.
This issue may be related to the observation that textual inversion often results in vector embeddings outside the distribution of common words in the dictionary, affecting editability as well [45]. It is thus possible that embeddings that are highly unusual may not behave as "real words", thereby producing incoherent visual concepts. In addition, textual-inversion-based methods are sometimes unstable and depend on the seed and iteration selection.
To overcome this issue we define a consistency test, which allows us to filter out incoherent embeddings. We begin by running the procedure described above to find \(v_{l},v_{r}\) using \(k\) different seeds in parallel for a sufficient number of steps (in our experiments we found that k=4 and 200 steps are sufficient since at that point the embeddings have already progressed far enough from their initialization word "object" as seen in Figure 5).
This gives us an initial set of \(k\) pairs of vector embeddings \(V_{s}=\{v_{l}^{i},v_{r}^{i}\}_{i=1}^{k}\). For each vector \(v\in V_{s}\) we generate a random set \(I^{v}\) of 40 images using our pre-trained text-to-image model. We then use a pretrained CLIP Image encoder [31], to produce the embedding \(CLIP(I_{i}^{v})\) of each image in the set.
We define the consistency of two sets of images \(I^{a},I^{b}\) as follows:
\[\mathcal{C}(I^{a},I^{b})=\operatorname*{mean}_{I_{i}^{a}\in I^{a},\,I_{j}^{b}\in I^{b},\,I_{i}^{a}\neq I_{j}^{b}}\mathrm{sim}\big{(}CLIP(I_{i}^{a}),CLIP(I_{j}^{b})\big{)}. \tag{4}\]
Figure 5: Optimization iterations. The embeddings of both children nodes \(v_{l},v_{r}\) are initialized with the word “object”. During the iterations, they gradually depict two aspects of the original concept. Note that using both embeddings together reconstructs the original parent concept.
Figure 6: We demonstrate two sets of random images generated from two different vector embeddings. An example of a consistent set can be seen on the left, where the concept depicted in the node is clear. We show an inconsistent set on the right, where images appear to depict multiple concepts.
Note that \(|\mathcal{C}(I^{a},I^{b})|\leq 1\) because \(sim(x,y)=\frac{x\cdot y}{||x||\cdot||y||}\) is the cosine similarity between a pair of CLIP embeddings of two different images. This formulation is motivated by the observation that if a set of images depicts a certain semantic concept, their vector embeddings in CLIP's latent space should be relatively close to each other. Ideally, we are looking for pairs in which each node is coherent by itself, and in addition, two sibling nodes are distinct from each other. We therefore choose the pair of tokens \(\{v_{l}^{*},v_{r}^{*}\}\in V_{s}\) as follows:
\[\{v_{l}^{*},v_{r}^{*}\}=\operatorname*{arg\,max}_{\{v_{l}^{i},v_{r}^{i}\}\in V_{s}}\Big[C_{l}^{i}+C_{r}^{i}+\big(\min(C_{l}^{i},C_{r}^{i})-\mathcal{C}(I^{v_{l}^{i}},I^{v_{r}^{i}})\big)\Big], \tag{5}\]
where \(C_{l}^{i}=\mathcal{C}(I^{v_{l}^{i}},I^{v_{l}^{i}}),C_{r}^{i}=\mathcal{C}(I^{v_ {r}^{i}},I^{v_{r}^{i}})\). Note that we do not consider the absolute cross consistency score \(\mathcal{C}(I^{v_{l}^{i}},I^{v_{r}^{i}})\), but we compute its relative difference from the node with the minimum consistency. We demonstrate this procedure in Figure 7. We optimized two pairs of sibling nodes \(\{v_{l}^{1},v_{r}^{1}\},\{v_{l}^{2},v_{r}^{2}\}\) using two seeds, w.r.t. the same parent node. Each matrix illustrates the consistency scores \(C_{l}^{i},\mathcal{C}(I^{v_{l}^{i}},I^{v_{r}^{i}}),C_{r}^{i}\) obtained for the sets of images of each seed. In both cases, the scores on the diagonal are high, which indicates that each set is consistent within itself. While the sets on the right obtained a higher consistency score within each node, they also obtained a relatively high score across the nodes (0.73), which means they are not distinct enough.
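A rough sketch of the consistency test and seed selection of Eqs. (4) and (5) is given below; computing the CLIP image embeddings for each set of generated images is assumed to happen outside these helpers.

```python
import numpy as np

def consistency(emb_a, emb_b):
    """Eq. (4): mean cosine similarity over all pairs of CLIP image
    embeddings from sets a and b, skipping identical images."""
    a = emb_a / np.linalg.norm(emb_a, axis=1, keepdims=True)
    b = emb_b / np.linalg.norm(emb_b, axis=1, keepdims=True)
    sims = a @ b.T                        # all pairwise cosine similarities
    if emb_a is emb_b:                    # self-consistency: drop the diagonal
        mask = ~np.eye(len(a), dtype=bool)
        return float(sims[mask].mean())
    return float(sims.mean())

def select_seed(candidates):
    """Eq. (5): pick the seed whose sibling nodes are individually consistent
    but mutually distinct. `candidates` is a list of (emb_l, emb_r) arrays of
    CLIP embeddings of images generated from each child embedding."""
    def score(emb_l, emb_r):
        c_l, c_r = consistency(emb_l, emb_l), consistency(emb_r, emb_r)
        cross = consistency(emb_l, emb_r)
        return c_l + c_r + (min(c_l, c_r) - cross)
    # return the index of the best seed
    return max(range(len(candidates)), key=lambda i: score(*candidates[i]))
```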
After selecting the optimal seed, we continue the optimization of the chosen vector pair w.r.t. the reconstruction loss in Equation (2) for \(1500\) iterations.
## 5 Results
In Figures 1, 11 and 12, we show examples of possible trees. For each node in the tree, we use its corresponding placeholder word as an input to a pretrained text-to-image model [34], to generate a set of random images. These images have been generated without any prompt engineering or additional words within the sentence, except for the word itself. For clarity, we use the notation "\(v\)" next to each set of images, illustrating that the presented set depicts the concept learned in that node. As can be seen, the learned embeddings in each node capture different elements of the original concept, such as the concept of a cat and a sculpture, as well as the unique texture in Figure 11. The sub-concepts captured in the nodes follow the tree's structure, where the concepts are decomposed gradually, with two sibling nodes decomposing their parent node. This decomposition is done _implicitly_, without external guidance regarding the split theme. For many more trees please see our supplementary file.
### Applications
The constructed tree provides a rich visual exploration space for concepts related to the object of interest. In this section we demonstrate how this space can be used for novel combination and exploration.
**Intra-tree combination** - the generated tree is represented via the set of optimized vectors \(V=\{v_{1}..v_{n}\}\). Once this set is learned we can use it to perform further exploration and conceptual editing _within_ the object's "inner world". We can explore combinations of different aspects by composing sentences containing different subsets of \(V\). For example, in the bottom left area of Figure 11, we have combined \(v_{1}\) and \(v_{5}\), which resulted in a variation of the original sculpture without the sub-concept relating to the cat (depicted in \(v_{6}\)). At the bottom right, we have excluded the sub-concept depicted in \(v_{5}\) (related to a blue sculpture), which resulted in a new representation of a flat cat with the body and texture of the original object.
Such combinations can provide new perspectives on the original concept and inspiration that highlights only specific aspects.
**Inter-tree combination** - it is also possible to combine concepts learned across different trees, since we only inject new words into the existing dictionary, and do not fine-tune the model's weights as in other personalization approaches [36].
To achieve this, we first build the trees independently for each concept and then visualize the sub-concepts depicted in the nodes to select interesting combinations. In Figure 8 the generated original concepts are shown on top, along with an illustration of the concepts depicted in the relevant nodes. To combine the concepts across the trees, we simply place the two placeholder words together in a sentence and feed it into the pretrained text-to-image model. As can be seen, on the left the concept of a "saucer with a drawing" and the "creature" from the mug are combined to create many creative and surprising combinations of the
Figure 7: Consistency scores matrix between image sample sets of nodes. The seed selection process favors pairs of siblings that have a high consistency score within themselves, and low consistency score between each other. In this example, the left pair is better than the right.
two. On the right, the blue sculpture of a cat is combined with the stone depicted at the bottom of the Buddha, which together create new sculptures in which the Buddha is replaced with the cat.
**Text-based generation** - the placeholder words of the learned embeddings can be composed into natural language sentences to generate various scenes based on the learned aspects. We illustrate this at the top of Figure 9, where we integrate the learned aspects of the original concepts in new designs (in this case of a chair and a dress). At the bottom of Figure 9, we show the effect of using the learned vectors of the original concepts instead of specific aspects. We apply Textual Inversion (TI) [11] with the default hyperparameters to fit a new word depicting each concept, and choose a representative result. The results suggest that without aspect decomposition, generation can be quite limited. For instance, in the first column, both the dress and the chair are dominated by the texture of the sculpture, whereas the concept of a blue cat is almost ignored. Furthermore, TI may exclude the main object of the sentence (second and third columns), or the results may capture all aspects of the object (fourth column), thereby narrowing the exploration space.
### Evaluations
**Consistency Score Validation.** We first show that our consistency test proposed in Equation (4) aligns well with human perception of consistency. We conducted a perceptual study with \(35\) participants in which we presented \(15\) pairs of random image sets depicting sub-concepts of \(9\) objects. We asked participants to determine which of the sets is more consistent within itself in terms of the concept it depicts (an example of such a pair can be seen in Figure 6). We also measured the consistency scores for these sets using our CLIP-based approach, and compared the results. The CLIP-based scores matched the human choices in \(82.3\%\) of the cases.
**Reconstruction and Separation.** We quantitatively evaluate our method's ability to follow the tree requirements of reconstruction and sub-concept separation. We collected a set of \(13\) concepts (\(9\) from existing personalization datasets [11, 25], and \(4\) new concepts from our dataset), and gener
Figure 8: Examples of inter-tree combinations. We use our method to produce trees for the four concepts depicted in the first row. We then combine aspects from different trees to generate a set of inter-tree combinations (the chosen aspects are shown next to each concept).
Figure 9: Combining the learned aspects in natural sentences to produce aspect-based variations. The original concepts are shown at the top. In the third and fourth rows are our text-based generation results applied with the aspects depicted in the second row. Under “TI” we show image generation for the concepts in the first row (without our aspect decomposition approach), produced using [11].
ated \(13\) corresponding trees. Note that we chose concepts that are complex enough and have the potential to be divided into different aspects (we discuss this in the limitations section). For each pair of sibling nodes \(v_{l},v_{r}\) and their parent node \(v_{p}\), we produced their corresponding sets of images - \(I^{v_{l}},I^{v_{r}},I^{v_{p}}\) (where for nodes in the first level we used the original set of images \(I^{0}\) as \(I^{v_{p}}\)). We additionally produced the set \(I^{v_{l}v_{r}}\), depicting the joint concept learned by two sibling nodes.
We first compute \(\mathcal{C}(I^{v_{p}},I^{v_{l}v_{r}})\) to measure the quality of reconstruction, i.e., that two sibling nodes together represent the concept depicted in their parent node. The average score obtained for this measurement is \(0.8\), which suggests that on average, the concept depicted by the children nodes together is consistent with that of their parent node. Second, we measure if two sibling nodes depict distinct concepts by using \(\mathcal{C}(I^{v_{l}},I^{v_{r}})\). The average score obtained was \(0.59\), indicating there is larger separation between siblings, but they are still close.
**Aspects Relevancy.** We assess the ability of our method to encode different aspects connected to the input concept via a perceptual study. We chose 5 objects from the dataset above, and 3 random aspects for each object. We presented participants with a random set of images depicting one aspect of one object at a time. We asked the participants to choose the object they believe this aspect originated from, along with the option 'none'. In total we collected answers from \(35\) participants, and achieved recognition rates of \(87.8\%\). These evaluations demonstrate that our method can indeed separate a concept into _relevant_ aspects, where each new sub-concept is _coherent_, and the binary tree structure is valid - i.e., the combination of two children can _reconstruct_ the parent concept.
## 6 Limitations
Our method may fail to decompose an input concept. We divide the failure cases into four general categories illustrated in Figure 10:
(1) Background leakage - the training images should be taken from different perspectives and with varying backgrounds (this requirement also exists in [11]). When images do not meet these criteria, one of the sibling nodes often captures information from the background instead of the object itself.
(2) Incomprehensible aspects - some separations may not satisfy clear, interesting, aesthetic, or inspiring aspects, even when the coherency principle holds.
(3) Dominant sub-concept - we illustrate this in Figure 10c, where we show a split on the second level of the concept depicted under "\(v_{1}v_{2}\)". As shown, \(v_{1}\) has dominated the information, so even though the coherency term holds, a decomposition into two sub-concepts has not really been achieved.
(4) Large overlap when two aspects share information - we illustrate this in Figure 10d, which is a split on the second level, where the concepts depicted in \(v_{1}\) and \(v_{2}\) appear too similar.
We hope that such limitations could be resolved in the future using additional regularization terms in the optimization process or through the development of more robust personalization methods.
Additionally, our method can have difficulty creating deeper trees and nodes with more than two children (see examples in the supplemental file). Currently, we stop the process when sub-concepts become too simple or incoherent. This could be the result of the new embeddings drifting towards out-of-distribution codes. Further investigation is needed on this subject. Currently, decomposing a node can take up to approximately \(40\) minutes on a single A100 GPU. However, as textual inversion optimization techniques progress, so will our method.
## 7 Conclusions
We presented a method to implicitly decompose a given visual concept into various aspects to construct an inspiring visual exploration space. Our method can be used to generate numerous representations and variations of a certain subject, to combine aspects across objects, as well as to use these aspects as part of natural language sentences that drive visual generation of novel concepts.
The aspects are learned implicitly, without external guidance regarding the type of separation. This implicit approach also provides another small step in revealing the rich latent space of large vision-language models, allowing surprising and creative representations to be produced. We demonstrated the effectiveness of our method on a variety of challenging concepts. We hope our work will open the door to further research aimed at developing and improving existing tools to assist and inspire designers and artists.
**Acknowledgements.** We thank Rinon Gal, Kfir Aberman, and Yael Pritch for their early feedback and insightful discussions.
Figure 10: We demonstrate four general cases of decomposition failure.
Figure 11: Exploration tree for the cat sculpture. At the bottom we show examples of possible intra-tree combinations.
Figure 12: Exploration tree for a decorated teapot. At the bottom we show examples of possible text-based generation.
|
2309.00941
|
Emergent Linear Representations in World Models of Self-Supervised
Sequence Models
|
How do sequence models represent their decision-making process? Prior work
suggests that Othello-playing neural network learned nonlinear models of the
board state (Li et al., 2023). In this work, we provide evidence of a closely
related linear representation of the board. In particular, we show that probing
for "my colour" vs. "opponent's colour" may be a simple yet powerful way to
interpret the model's internal state. This precise understanding of the
internal representations allows us to control the model's behaviour with simple
vector arithmetic. Linear representations enable significant interpretability
progress, which we demonstrate with further exploration of how the world model
is computed.
|
Neel Nanda, Andrew Lee, Martin Wattenberg
|
2023-09-02T13:37:34Z
|
http://arxiv.org/abs/2309.00941v2
|
# Emergent Linear Representations in World Models of Self-Supervised Sequence Models
###### Abstract
How do sequence models represent their decision-making process? Prior work suggests that Othello-playing neural network learned nonlinear models of the board state (Li et al., 2023). In this work, we provide evidence of a closely related _linear_ representation of the board. In particular, we show that probing for "my colour" vs. "opponent's colour" may be a simple yet powerful way to interpret the model's internal state. This precise understanding of the internal representations allows us to control the model's behaviour with simple vector arithmetic. Linear representations enable significant interpretability progress, which we demonstrate with further exploration of how the world model is computed.1
Footnote 1: Code available at [https://github.com/ajyl/mech_int_othelloGPT](https://github.com/ajyl/mech_int_othelloGPT)
## 1 Introduction
How do sequence models represent their decision-making process? Large language models are capable of unprecedented feats, yet largely remain inscrutable black boxes. Yet evidence has accumulated that models act as feature extractors: identifying increasingly complex properties of the input and representing these in the internal activations (Geva et al., 2021; Bau et al., 2020; Gurnee et al., 2023; Belinkov, 2022; Burns et al., 2022; Goh et al., 2021; Elhage et al., 2022). A key first step for interpreting them is understanding how these features are represented. Mikolov et al. (2013) introduce the **linear representation hypothesis**: that features are represented linearly as directions in activation space. This would be highly consequential if true, yet this remains controversial and without conclusive empirical justification. In this work, we present novel evidence of linear representations, and show that this hypothesis has real predictive power.
We build on the work of Li et al. (2023), who demonstrate the emergence of a _world model_ in sequence models. Namely, the authors train OthelloGPT, an autoregressive transformer model, to predict legal moves in a game of Othello given a sequence of prior moves (Section 2.2). They show that the model spontaneously learns to track the correct board state, recovered using _non-linear_ probes, despite never being told that the board exists. They further show a causal relationship between the model's inner board state and its move predictions using model edits. Namely, they show that the edited network plays moves that are legal in the edited board state even if illegal in the original board, and even if the edited board state is unreachable by legal play (i.e., out of distribution).
Critically, the original authors claim that OthelloGPT uses _non-linear_ representations to encode the board state, by achieving high accuracy with non-linear probes, but failing to do so using linear probes. In our work, we demonstrate that a closely related world model is actually _linearly_ encoded.
Figure 1: The emergent world models of OthelloGPT are linearly represented. We find that the board states are encoded relative to the current player’s colour (Mine vs. Yours) as opposed to absolute colours (Black vs. White).
Our key insight is that rather than encoding the _colours_ of the board (Black, White, Empty), the sequence model encodes the board _relative_ to the current player of each timestep (Mine, Yours, Empty). In other words, for odd timesteps, the model considers Black tiles as Mine and White tiles as Yours, and vice versa for even timesteps (Section 3). Using this insight, we demonstrate that a _linear_ projection can be learned with near perfect accuracy to derive the board state.
We further demonstrate that we can steer the sequence model's predictions by simply conducting vectoral arithmetics using our linear vectors (Section 4). Put differently, by pushing the model's activations in the directions of Mine, Yours, or Empty, we can alter the model's belief state of the board, and change its predictions accordingly. Our intervention method is much simpler and interpretable than that of Li et al. (2023), which rely on gradients to update the model's activations (Section 4.1). Our results confirm that our interpretation of each probe direction is correct, but also demonstrates that a mechanistic understanding of model representations can lead to better control. Our results do not contradict that of Li et al. (2023), but add to our understanding of emergent world models.
We provide additional interpretations of the sequence model using linear operations. For example, we provide empirical evidence of how the model derives empty tiles of the board, and find additional linear representations, such as tiles being Flipped at each timestep.
Finally, we provide a short discussion of our thoughts. How should we think of linear versus non-linear representations? Perhaps most interestingly, why do linear representations emerge?
## 2 Preliminaries
In this section we briefly describe Othello, Othello GPT, and our notations.
### Othello
Othello is a two-player game played on an 8x8 grid. Players take turns playing black or white discs on the board, and the objective is to have the majority of one's coloured discs by the end of the game.
At each turn, when a tile is played, all of the opponent's discs that are enclosed in a horizontal, vertical, or diagonal row between two discs of the current player are flipped. The game ends when there are no more valid moves for both players.
### OthelloGPT
OthelloGPT is an 8-layer GPT model (Radford et al., 2019), each layer consisting of 8 attention heads and a 512-dimensional hidden space. We use the model weights provided by Li et al. (2023), denoted there as the synthetic model. The vocabulary space consists of 60 tokens, each one corresponding to a playable move on the board (e.g., A4).2
Footnote 2: The game always starts with 4 tiles in the center of the board already filled.
The model is trained in an autoregressive manner, meaning for a given sequence of moves \(m_{<t}\), the model must predict the next valid move \(m_{t}\).
Note that no a priori knowledge of the game nor its rules are provided to the model. Rather, the model is only given move sequences with a training objective to predict next valid moves. Further note that these valid moves are uniformly chosen, and this training objective differs from that of models like AlphaZero (Silver et al., 2018), which are trained to play strategic moves to win games.
### Notations
**Transformers.** Our transformer architecture (Vaswani et al., 2017) consists of embedding and unembedding layers \(Emb\) and \(Unemb\) with a series of \(L\) transformer layers in-between. Each transformer layer \(l\) consists of \(H\) attention heads and a multilayer perceptron (MLP) layer.
A forward pass in the model first embeds the input token at timestep \(t\) using embedding layer \(Emb\) into a high dimensional space \(x_{t}^{0}\in\mathbb{R}^{D}\). We refer to \(x_{t\in T}^{0}\) as the start of the _residual stream_. Then each attention head \(Att_{l}^{h},\forall h\in H\) and MLP block at layer \(l\) add to the residual stream:
\[x_{t}^{l,\mathrm{mid}}=x_{t}^{l}+\sum_{h\in H}Att_{l}^{h}(x_{t}^{l})\]
\[x_{t}^{l+1}=x_{t}^{l,\mathrm{mid}}+MLP(x_{t}^{l,\mathrm{mid}})\]
Each attention head \(Att_{l}^{h}\) computes value vectors by projecting the residual stream to a lower dimension using \(Att_{l}^{h}.V\), linearly combines value vectors using \(Att_{l}^{h}.A\), and projects back to the residual stream using \(Att_{l}^{h}.O\):
\[h(x)=(Att_{l}^{h}.A\otimes Att_{l}^{h}.O*Att_{l}^{h}.V)*x\]
A final prediction is made by applying \(Unemb\) on \(x^{L-1}\), followed by a softmax.
**Probe Models.** We notate linear and non-linear probes as \(p^{\lambda}\) and \(p^{\nu}\). Our linear probes are simple linear projections from the residual stream: \(p^{\lambda}(x_{t}^{l})=\text{softmax}(Wx_{t}^{l}),W\in\mathbb{R}^{D\times 3}\). The dimension \(D\times 3\) comes from doing a 3-way classification.3 Non-linear probes are 2-layer MLP models: \(p^{\nu}(x_{t}^{l})=\text{softmax}(W_{1}\text{ReLU}(W_{2}x_{t}^{l}))\), \(W_{1}\in\mathbb{R}^{H\times 3},W_{2}\in\mathbb{R}^{D\times H}\). Li et al. (2023a) classify the colour at each tile (Black, White, Empty). Our insight is to classify the colours _relative_ to the current turn's player (Mine, Yours, Empty).
Footnote 3: In practice, because we are predicting the state of all 64 tiles, the shape of our probe is \(D\times 64\times 3\).
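A minimal PyTorch sketch of the two probe architectures is shown below; the hidden width of the non-linear probe and the use of log-softmax outputs are our own choices rather than the exact training configuration.

```python
import torch.nn as nn

class LinearBoardProbe(nn.Module):
    """p^lambda: one linear map from the D-dim residual stream to a
    3-way classification (Mine / Yours / Empty) for each of the 64 tiles."""
    def __init__(self, d_model=512, n_tiles=64, n_classes=3):
        super().__init__()
        self.proj = nn.Linear(d_model, n_tiles * n_classes)
        self.n_tiles, self.n_classes = n_tiles, n_classes

    def forward(self, resid):            # resid: (batch, D)
        logits = self.proj(resid).view(-1, self.n_tiles, self.n_classes)
        return logits.log_softmax(dim=-1)

class MLPBoardProbe(nn.Module):
    """p^nu: 2-layer MLP probe used as the non-linear baseline."""
    def __init__(self, d_model=512, hidden=256, n_tiles=64, n_classes=3):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(d_model, hidden), nn.ReLU(),
                                 nn.Linear(hidden, n_tiles * n_classes))
        self.n_tiles, self.n_classes = n_tiles, n_classes

    def forward(self, resid):
        logits = self.net(resid).view(-1, self.n_tiles, self.n_classes)
        return logits.log_softmax(dim=-1)
```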
## 3 Linearly Encoded Board States
In this section we describe our experiments to find linear board state representations.
### Experiment Setup
Rather than encoding the colour of each tile (Black, White, Empty), OthelloGPT encodes each tile _relative_ to the player of each timestep (Mine, Yours, Empty) -- for _odd_ timesteps, we consider Black to be Mine and White to be Yours, and vice versa for _even_ timesteps.
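Concretely, the only change from the original probing setup is how the targets are labelled. A small sketch (with our own integer encoding of the classes, and using the fact that Black moves first) is:

```python
EMPTY, BLACK, WHITE = 0, 1, 2
MINE, YOURS = 1, 2

def relabel(board_states):
    """Convert absolute colours to player-relative labels, per timestep.
    board_states: list over timesteps t = 1, 2, ... of length-64 lists.
    Black moves first, so on odd timesteps Black is 'Mine'."""
    relative = []
    for t, board in enumerate(board_states, start=1):
        if t % 2 == 1:                       # odd timestep: Black to move
            mine = BLACK
        else:                                # even timestep: White to move
            mine = WHITE
        relative.append([EMPTY if c == EMPTY else
                         (MINE if c == mine else YOURS) for c in board])
    return relative
```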
In order to learn the weights of our linear probe, we train on 3,500,000 game sequences. We use a validation set of 512 games, and train until our validation loss converges according to a patience value of 10. In practice, our linear probes converge after around 100,000 training samples. We test our probes on a held out set of 1,000 games.
We train a different probe for each layer \(l\). Hyperparameters are provided in the Appendix.
### Results
Table 1 shows the accuracy for various probes.
We include four baselines. The first is a linear probe trained on a randomly initialized GPT model. We also include a probabilistic baseline, in which we always choose the most likely colour per tile at each timestep, according to a set of 60,000 games from training data. The next two baselines are probe models used in Li et al. (2023a): a linear and non-linear probe trained to classify amongst {Black, White, Empty}.
Our linear probes achieve high accuracy by layer 4. This shows, contrary to what was previously believed, that the emergent board state is linearly encoded.
## 4 Intervening with Linear Directions
In this section we demonstrate how we intervene on OthelloGPT's board state using linear probes.
### Method
An inherent issue with probing is that it is correlational, not causal (Belinkov, 2022b). To validate that our probes have found a true world model, we confirm that the model uses the encoded board state
\begin{table}
\begin{tabular}{c c c c c c c c c} \hline \hline & \(x^{0}\) & \(x^{1}\) & \(x^{2}\) & \(x^{3}\) & \(x^{4}\) & \(x^{5}\) & \(x^{6}\) & \(x^{7}\) \\ \hline Randomized & 37 & 35.1 & 33.9 & 35.5 & 34.8 & 34.7 & 34.4 & 34.5 \\ Probabilistic & & & & 61.8 & & & & \\ Linear {Black, White, Empty} & 62.2 & 74.8 & 74.9 & 75.0 & 75.0 & 74.9 & 74.8 & 74.4 \\ Non-Linear {Black, White, Empty} & 63.4 & 88.6 & 93.3 & 96.3 & 97.5 & 98.3 & 98.7 & 98.3 \\ \hline Linear {Mine, Yours, Empty} & **90.9** & **94.8** & **97.2** & **98.3** & **99** & **99.4** & **99.6** & **99.5** \\ \hline \hline \end{tabular}
\end{table}
Table 1: Probing accuracy for board states. OthelloGPT linearly encodes the board state relative to the current player at each timestep (Mine vs. Yours, as opposed to colours Black or White).
Figure 2: Intervening methodology: we intervene by adding either Empty, Mine, or Yours directions into each layer of the residual stream. Red squares in each board indicate the tiles that have been intervened, teal tiles indicate new legal moves post-intervention that the model predicts.
for its predictions.
To verify this, we conduct the same intervention experiment as Li et al. (2023a). Namely, given an input game sequence (and its corresponding board state \(B\)), we intervene to make the model believe in an altered board state \(B^{\prime}\). We then observe whether the model's prediction reflects the made-believe board state \(B^{\prime}\) or the original board state \(B\).
Our intervention approach is simple: we add our linear vectors to the residual stream of each layer:
\[x^{\prime}\gets x+\alpha p_{d}^{\lambda}(x)\]
where \(d\) indicates a direction amongst {Mine, Yours, Empty} and \(\alpha\) is a scaling factor. In other words, to flip a tile from Yours to Mine, we simply push the residual stream at every layer in the Mine direction, or to "erase" a previously played tile, we push in the Empty direction. 45
Footnote 4: We experiment with intervening on different layers. See Appendix for more details.
Footnote 5: We use the TransformerLens library: [https://github.com/neelnanda-io/TransformerLens](https://github.com/neelnanda-io/TransformerLens).
Note that this intervention is much simpler than that of Li et al. (2023a). Namely, Li et al. (2023a) edits the activation space (\(x\)) of OthelloGPT using several iterations of gradient descent from their non-linear probe. Instead, we perform a single vector addition.
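A minimal sketch of this intervention using TransformerLens-style hooks is shown below; the hook name, the scale \(\alpha\), and the choice to intervene at a single sequence position are illustrative assumptions rather than the exact experimental configuration.

```python
import torch

def make_intervention_hook(direction, alpha, position):
    """Add alpha * direction to the residual stream at one sequence position.
    `direction` is the probe row for a chosen tile and class in
    {Mine, Yours, Empty}."""
    def hook(resid, hook):                # resid: (batch, seq, d_model)
        resid[:, position, :] += alpha * direction
        return resid
    return hook

def intervene(model, tokens, direction, alpha=8.0, position=-1,
              layers=range(8)):
    """Run OthelloGPT with the additive intervention applied at every layer
    (a sketch assuming a TransformerLens-style hooked model)."""
    hooks = [(f"blocks.{l}.hook_resid_post",
              make_intervention_hook(direction, alpha, position))
             for l in layers]
    with torch.no_grad():
        logits = model.run_with_hooks(tokens, fwd_hooks=hooks)
    return logits
```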
### Experiment Setup
For our intervention experiment, we adopt the same setup and metrics as Li et al. (2023a). We use an evaluation benchmark consisting of 1,000 test cases. Each test case consists of a partial game sequence (\(B\)) and a targeted board state \(B^{\prime}\).
We measure the efficacy of our intervention by treating the task as a multi-label classification problem. Namely, we compare the top-\(N\) predictions post-intervention against the groundtruth set of legal moves at state \(B^{\prime}\), where \(N\) is the number of legal moves at \(B^{\prime}\). We then compute error rate, or the number of false positives and false negatives.
Li et al. (2023a) only considers the scenario of flipping the colour of a tile. To also validate our Empty direction, we also experiment with "erasing" a previously played tile by making it empty.
### Results
Table 2 shows the average error rates after our interventions. Our interventions are equally effective as that of gradient-based editing, and confirms that our interpretation of each linear direction matches how the model uses such directions.
## 5 Additional Linear Interpretations
The linear representation hypothesis is of interest to the mechanistic interpretability community because it provides a foothold into understanding a system. The internal state of the transformer, the residual stream, is the sum of the outputs of all previous components (heads, layers, embeddings and neurons) (Elhage et al., 2021), so any linear function of the residual stream can be linearly decomposed into contributions from each component, allowing us to trace back where a computation is coming from.
In this section we leverage our newfound linear representation of board state to provide additional interpretations of OthelloGPT, as proof of concept of how discovering linear representations unlocks downstream interpretability applications.
### Interpreting Empty Tiles
Here we interpret how OthelloGPT derives the status of empty tiles.
**The Empty Circuit.** A key insight for Empty is that input tokens each correspond to a tile on the board (e.g., A4), and once played, the tile can only change colour but remains non-empty.
We view OthelloGPT as using attention heads to "broadcast" which moves have been played: given a move at timestep \(t\), attention heads write this information into other residual streams. This information (Played) can be represented as following. First, each move \(m\) (A4) is embedded: \(Emb[m]\). Then the model writes this information to other residual streams using linear projections \(Att.V\) and \(Att.O\) (Section 2.3):
\begin{table}
\begin{tabular}{c c} \hline \hline Flipping colours & Avg. \# Errors \\ \hline Null Intervention Baseline & 2.723 \\ Non-Linear Intervention & 0.12 \\ Linear Probe Addition & **0.10** \\ \hline Erasing & Avg. \# Errors \\ \hline Null Intervention & 2.73 \\ Non-Linear Intervention & 0.11 \\ Linear Probe Addition & **0.02** \\ \hline \hline \end{tabular}
\end{table}
Table 2: Error rates from interventions.
\(\textsc{Played}_{h}(m)=Emb[m]@Att_{h}.V@Att_{h}.O\)
For each attention head in the first layer,6 we compute the cosine similarity between Played and the \(p^{\lambda}_{\textsc{Empty}}\) direction:
Footnote 6: Knowing which moves were Played (i.e. show up in the input sequence), should not depend on any other computation, and thus we expect this information to be written by the attention heads in the first layer.
\[\max_{h\in H}\textsc{CosSim}(\textsc{Played}_{h}(m),p^{\lambda}_{\textsc{Empty}}(m))\]
Since the two terms encode _opposite_ information, we expect a high negative cosine similarity.
We observe an average similarity score of **-0.862** across all 60 squares,7, confirming that \(p_{\textsc{Empty}}\) is encoding Not Played. This tells us that \(p_{\textsc{Empty}}\) is a linear function of the token embeddings.
Footnote 7: The center 4 squares can never be empty.
This also implies that OthelloGPT knows which tiles are empty by \(x^{0,\mathrm{mid}}\): after the first attention heads but before the MLP layer. On a binary classification task of Empty vs. Not-Empty from 1,000 games in our test split, our probe achieves an accuracy of **76.8%** and **98.9%**, when projecting from \(x^{0,\mathrm{pre}}\) and \(x^{0,\mathrm{mid}}\) respectively.
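The weight-based computation above can be sketched as follows; the attribute names (`W_E`, `W_V`, `W_O`) follow TransformerLens conventions, and the reduction over heads by largest magnitude is our own reading of the \(\max\) in the expression above.

```python
import torch

def played_direction(model, move_token, layer=0, head=0):
    """Played_h(m) = Emb[m] @ Att_h.V @ Att_h.O: what head h of the first
    layer writes into other residual streams once move m has been played."""
    emb = model.W_E[move_token]                      # (d_model,)
    w_v = model.blocks[layer].attn.W_V[head]         # (d_model, d_head)
    w_o = model.blocks[layer].attn.W_O[head]         # (d_head, d_model)
    return emb @ w_v @ w_o                           # (d_model,)

def empty_similarity(model, probe_empty, move_token, n_heads=8):
    """Largest-magnitude cosine similarity (over first-layer heads) between
    Played_h(m) and the probe's Empty direction for the tile of move m.
    A large negative value means Empty effectively encodes 'not played'."""
    cos = torch.nn.functional.cosine_similarity
    sims = [float(cos(played_direction(model, move_token, head=h),
                      probe_empty, dim=0)) for h in range(n_heads)]
    return max(sims, key=abs)
```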
**Logit Attribute for Empty.** The previous analysis is based on the _weights_ of the model. Here we provide an alternative analysis by studying the _activations_ during inference.
First, we select a move \(m\) (A4) that we wish to explain. We then construct a "clean" and "corrupt" set of partial game sequences (N=4,569). Our clean set always includes \(m\), while our corrupt set replaces all timesteps with \(m\) in the clean set with an alternative move. We ensure that all games in our corrupt set remain legal sequences. Finally, we study the _difference in probability_ that \(m\) is empty, according to our probes, in our two sets. Namely, we project the outputs from each attention head onto the Empty direction and apply a softmax:
\[P_{\textsc{Empty}[m]}(\sigma)=Softmax(\sigma*p^{\lambda}_{\textsc{Empty}[m]})\]
where \(\sigma\) is the output from each attention head.
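A rough sketch of this attribution, with the probe weights for the target tile arranged as a \(D\times 3\) matrix whose first column is the Empty direction (an assumed convention):

```python
import torch

def p_empty(head_out, probe_tile):
    """P_Empty[m](sigma): probability, under the linear probe for tile m,
    that tile m is Empty given a single head's output sigma.
    probe_tile: (d_model, 3), columns ordered (Empty, Mine, Yours)."""
    logits = head_out @ probe_tile            # (3,)
    return torch.softmax(logits, dim=-1)[0]   # Empty entry

def empty_attribution(head_outs_clean, head_outs_corrupt, probe_tile):
    """Per-head difference in P_Empty between clean and corrupt inputs,
    averaged over examples. head_outs_*: (n_examples, n_heads, d_model)."""
    diffs = []
    for h in range(head_outs_clean.shape[1]):
        p_clean = torch.stack([p_empty(x, probe_tile)
                               for x in head_outs_clean[:, h]])
        p_corrupt = torch.stack([p_empty(x, probe_tile)
                                 for x in head_outs_corrupt[:, h]])
        diffs.append((p_clean - p_corrupt).mean().item())
    return diffs
```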
Figure 3 shows the difference in probability that A4 is empty, between our clean and corrupt inputs, measured in each attention head of the first layer. The figure decomposes two scenarios: when A4 was originally played by Me or You. This is because some attention heads only attend to My moves (4, 7), while some only attend to Yours (1, 3, 8), which we show below.
### Attending to My & Your Timesteps
We find that some attention heads only attend to either My or Your moves. Figure 4 shows two examples: at each timestep, each head _alternates_ between attending to even or odd timesteps. Such behavior further indicates how the model computes its world model based on Mine and Yours as opposed to Black and White.
### Additional Linear Concepts: Flipped
In addition to linearly representing the board state, we find that OthelloGPT also encodes which tiles are being flipped, or captured, at each timestep. To test this, we modify our probing task to classify between Flipped vs. Not-Flipped, with the same training setup described above. Given the class imbalance, for this experiment we report \(F1\) scores. Table 3 demonstrates high \(F1\) scores by layer 3.
Figure 4: Examples of attention heads attending to Your (left) or My (right) moves.
Figure 3: Difference in probability of A4 being empty, between our clean and corrupt sequences, measured in each attention head.
We also conduct a modified version of our intervention experiment, in which we always randomly select a flipped tile at the current timestep to intervene on. Then, instead of adding either \(p^{\lambda}_{\text{MINE}}\), \(p^{\lambda}_{\text{YOURS}}\), or \(p^{\lambda}_{\text{EMPTY}}\), we _subtract_\(p^{\lambda}_{\text{FLIPPED}}\). This tests whether the Flipped feature is causally relevant for computing the next move, by exploring whether this is sufficient to cause the model to play valid moves in the new board state. We get an average error rate of **0.486**, compared to a null intervention baseline rate of **1.686**.
One can consider Flipped tiles as the difference between the previous and current board state. One might naturally think that a recurrent computation could derive the current board state by iteratively applying such differences. However, transformer models do **not** make recursive computations:8 We view Flipped to be both an unexpected encoding and a hint for the rest of the board circuit.
Footnote 8: Doing so would require our transformer model to have the same number of layers as the maximum game sequence length of 60.
### Multiple Circuits Hypothesis
Although we find a board state circuit and its causality on move predictions, we find that it does not explain the entire model. If our understanding is correct, we expect the model to compute the board state before computing valid moves. However, we find that in end games, this is not the case.
To check for the correct board state, we apply our linear probes on each layer, and check the earliest layer in which all 64 tiles are correctly predicted.9 To check for correct move predictions, we project from each layer using the unembedding layer, and check the earliest layer in which the top-N move predictions are all correct, where N is the number of groundtruth legal moves.
Footnote 9: It might be the case that legal moves could be predicted without 100% accuracy of the board state. We try variants (see Appendix), but observe similar trends.
Figure 5 plots the proportion of times the board state is computed before (or after) valid moves (first y-axis). We also overlay the average earliest layer in which board or moves are correctly computed (second y-axis, aqua and lime curves). To our surprise, we find that in end games, the model often computes legal moves _before_ the board state (black bars). We henceforth refer to this behavior as MoveFirst, and share some thoughts.
**End Game Circuits.** First, MoveFirst starts to occur around move 30, which is the mid-point of the game. Second, MoveFirst occurs more frequently as we near the end of the game (increasing black bars). Interestingly, in Othello, starting from
Figure 5: Proportion of times the board state is computed before/after move predictions are made (First y-axis). **Light Grey:** Boards are computed in an earlier layer than moves. **Dark Grey, Black:** Boards are computed in the same or later layer than moves. **Red:** Model never computes the correct board state. **Aqua, Lime (Curves):** Average earliest layer in which the board or moves are correctly computed (Second y-axis). Starting from the mid-game, we start observing the model compute moves before boards (black bar), and this occurs more frequently as the game progresses.
\begin{table}
\begin{tabular}{l c c c c c c c c} \hline \hline & \(x^{0}\) & \(x^{1}\) & \(x^{2}\) & \(x^{3}\) & \(x^{4}\) & \(x^{5}\) & \(x^{6}\) & \(x^{7}\) \\ \hline Linear \{Flipped, Not-Flipped} & 74.76 & 85.75 & 91.62 & 94.82 & 96.44 & 97.13 & 96.82 & 96.3 \\ \hline \hline \end{tabular}
\end{table}
Table 3: \(F1\) score for probing on Flipped tiles. In addition to the board state, the model also linearly encodes concepts such as flipped tiles per timestep.
the mid-point, there are progressively fewer empty tiles than there are filled tiles as the board fills up. Also note that as the game progresses, it becomes more likely for every empty tile to be a legal move.
One possible explanation for this phenomenon is that in the end game, it may be possible to predict legal moves with simpler circuits that do not require the entire board state. For instance, perhaps it combines Empty with other features such as IsSurrounded-By-Mine or Is-Border and so on.
**Multiple Circuits.** Interestingly, the model still uses the board circuit at end games. To demonstrate this, we run our intervention experiment on 1,000 _end games_,10 and still achieve a low error rate of **0.112**.11 We thus hypothesize that OthelloGPT (and more broadly, sequence models) consist of multiple circuits. Another hypothesis is that residual networks make "iterative inferences" (Section 5.5), and for end games, OthelloGPT uses simpler circuits in the early layers and refines its predictions at late layers using board state.
Footnote 10: We intervene on a timestep > 30
Footnote 11: Non-intervention baseline: 1.988.
**End Game Board Accuracy.** We observe that board state accuracy drops near end games. This can be seen by the growing red bars, but also by measuring per-timestep accuracy of our probes (see Appendix). It is unclear whether 1) the model does not bother to compute the perfect board state, as alternative circuits allow the model to still correctly predict legal moves, or 2) the model learns an alternative circuit because it struggles to compute the correct board state at end games.
**Memorization.** Note that in the first few timesteps, the board and legal moves are sometimes both computed in the same layer (dark grey bars). This may be due to memorization: 1) these predictions both occur at the first layer, and 2) there are only so many openings in an Othello game.
### Iterative Feature Refinements
Figure 6 visualizes OthelloGPT's "iterative inference" (Jastrzebski et al., 2018; Belrose et al., 2023; Veit et al., 2016; nostalgebraist, 2020), or iterative refinement of features. For each layer, we plot the projected board states using our probes, and projected next-move predictions using the unembedding layer. Multiple evidence of iterative refinements are provided in the Appendix.
## 6 Discussions
### On Linear vs. Non-Linear Interpretations
One challenge with probing is knowing which features to look for.12 For instance, classifying {Black, White} versus {Mine, Yours} leads to different takeaways, which illustrates the danger of _projecting our preconceptions_. What might seem "sensible" to a human interpreter (Black, White) may not be for a model.13
Footnote 12: For a longer discussion on probing, see Appendix.
Footnote 13: In hindsight, given the symmetric game-play of Othello, encoding Mine, Yours is perfectly “sensible” for the model.
More broadly, what is "sensible", or alternatively, how we choose to interpret linear or non-linear encodings, can be relative to how we see the world. Suppose we had a perfect world model of our physical world. Further suppose that if and when it computes a gravitational force between two objects (Newton's law), we discover a neuron whose square root was the distance between two objects. Is this a non-linear representation of distance? Or, given the form of Newton's law, is the square of the distance a more natural way for the model to represent the feature, and thus considered a linear representation? As this example shows, what constitutes a natural feature may be in the eye of the beholder.
### On the Emergence of Linear Representations
Linear representations in sequence models have been observed before: iGPT (Chen et al., 2020), which was autoregressively trained to predict next pixels of images, lead to robust linear image representations. The question remains, why do linear feature representations emerge? What linear representations are currently encoded in large language models? One reason might be simply that matrix multiplication can easily extract a different subset of linear features for each neuron. However, we leave a complete explanation to future work.
## 7 Related Work
We discuss three broad related areas: understanding internal representations, interventions, and mechanistic interpretability.
### Understanding Internal Representations
Multiple researchers have studied world representations in sequence models. Li et al. (2021) train sequence models on a synthetic task, and uncover
world models in their activations. Patel and Pavlick (2022) demonstrate that language models can learn to ground concepts (e.g., direction, colour) to real world representations. Burns et al. (2022) find linear vectors that encode "truthfulness".
Many studies also build or study linear representations for language. Word embeddings Mikolov et al. (2013, 2013) build vectoral word representations. Linear probes have also been used to extract linguistic characteristics in sentence embeddings Conneau et al. (2018); Tenney et al. (2019).
Linear representations are found outside of language models as well. Merullo et al. (2022) demonstrate that image representations from vision models can be linearly projected into the input space of language models. McGrath et al. (2022) and Lovering et al. (2022) find interpretable representations of chess or Hex concepts in AlphaZero.
### Intervening On Language Models
A growing body of work has intervened on language models, by which we mean controlling their behavior by altering their activations.
We consider two broad categories. Parametric approaches often use optimizations (i.e. gradient descent) to locate and alter activations Li et al. (2023); Meng et al. (2022, 2022); Hernandez et al. (2023); Hase et al. (2023). Meanwhile, inference-time interventions typically apply linear arithmetics, for instance by using "truthful" vectors Li et al. (2023), "task vectors" Ilharco et al. (2022), or other "steering vectors" Subramani et al. (2022); Turner et al. (2023).
### Mechanistic Interpretability
Mechanistic interpretability (MI) studies neural networks by reverse-engineering their behavior Olah et al. (2020); Elhage et al. (2021). The goal of MI is to understand the underlying computations and representations of a model, with a broader goal of validating that their behavior aligns with what researchers have intended. Such framework has allowed researchers to understand grokking Nanda et al. (2023), superposition Elhage et al. (2022, 2022); Scherlis et al. (2022); Arora et al. (2018), or to study individual neurons Mu and Andreas (2020); Antverg and Belinkov (2021); Gurnee et al. (2023).
## 8 Conclusion
In this work we demonstrated that the emergent world model in Othello-playing sequence models is full of linear representations. Previously unbeknownst, we demonstrated that the board state in OthelloGPT is linearly represented by encoding the colour of each tile _relative_ to the player at each timestep (Mine, Yours, Empty) as opposed to absolute colour (Black, White, Empty). We showed that we can accurately control the model's behaviour with simple vector arithmetic on the internal world model. Lastly, we mechanistically interpreted multiple facets of the sequence model, analysing how empty tiles are detected, and linear representations of which pieces are flipped. We find hints that multiple circuits might exist for predicting legal moves in the end game, as well as further evidence that residual networks iteratively refine their features across layers.
## 9 Acknowledgements
We thank the original authors of Li et al. (2023) for opensourcing their work, making it possible to conduct our research.
We thank Chris Olah for invaluable discussion and encouragement, and drawing our attention to the implication of these results for the linear representation hypothesis.
Figure 6: Iterative refinements: the top row shows each layer projected using our linear probes. The bottom row shows the model’s predictions for legal moves at each layer, by applying the unembedding layer on each layer.
## 10 Author Contributions
Neel Nanda discovered the linear representation in terms of relative board state, and showed that simple vector arithmetic sufficed for causal interventions. He led an initial version of the experiments and write-ups, and advised throughout.
Andrew Lee led this write-up and performed all experiments in this paper. He discovered the flipped linear representation, the empty results, and the multiple circuit hypothesis results.
Martin Wattenberg helped with editing and distilling the paper, and contributed the analogy about a linear vs quadratic representation of distance.
|
2306.15277
|
Upgraded waveform model of eccentric binary black hole based on
effective-one-body-numerical-relativity for spin-aligned binary black holes
|
Effective one body numerical relativity waveform models for spin aligned
binary black holes (SEOBNR) are based on the effective one body theoretical
framework and numerical relativity simulation results. SEOBNR models have
evolved through version 1 to version 4. We recently extended SEOBNRv1 model to
SEOBNRE (Effective One Body Numerical Relativity waveform models for Spin
aligned binary black holes along Eccentric orbit) model which is also valid for
spin aligned binary black hole coalescence along eccentric orbit. In this paper
we update our previous SEOBNRE model to make it consistent to SEOBNRv4 which is
the most widely used SEOBNR waveform model. This upgraded SEOBNRE model
improves accuracy compared to previous SEOBNRE model, especially for highly
spinning black holes. For spin aligned binary black holes with mass ratio
$1\leq q\lesssim10$, dimensionless spin $-0.9\lesssim\chi\lesssim0.995$ and
orbital eccentricity $0\leq e_0\lesssim0.6$ at reference frequency $Mf_0=0.002$
($M$ is the total mass of the binary black hole, $f_0\approx 40\frac{10{\rm
M}_\odot}{M}$Hz), the upgraded SEOBNRE model can always fit numerical
relativity waveform better than 98.2\%. For most cases the fitting factor can
even be better than 99\%.
|
Xiaolin Liu, Zhoujian Cao, Lijing Shao
|
2023-06-27T08:07:11Z
|
http://arxiv.org/abs/2306.15277v1
|
Upgraded waveform model of eccentric binary black hole based on effective-one-body-numerical-relativity for spin-aligned binary black holes
###### Abstract
Effective one body numerical relativity waveform models for spin aligned binary black holes (SEOBNR) are based on the effective one body theoretical framework and numerical relativity simulation results. SEOBNR models have evolved through version 1 to version 4. We recently extended SEOBNRv1 model to SEOBNRE (Effective One Body Numerical Relativity waveform models for Spin aligned binary black holes along Eccentric orbit) model which is also valid for spin aligned binary black hole coalescence along eccentric orbit. In this paper we update our previous SEOBNRE model to make it consistent to SEOBNRv4 which is the most widely used SEOBNR waveform model. This upgraded SEOBNRE model improves accuracy compared to previous SEOBNRE model, especially for highly spinning black holes. For spin aligned binary black holes with mass ratio \(1\leq q\lesssim 10\), dimensionless spin \(-0.9\lesssim\chi\lesssim 0.995\) and orbital eccentricity \(0\leq e_{0}\lesssim 0.6\) at reference frequency \(Mf_{0}=0.002\) (\(M\) is the total mass of the binary black hole, \(f_{0}\approx 40\frac{10\,\mathrm{M}_{\odot}}{M}\,\mathrm{Hz}\)), the upgraded SEOBNRE model can always fit numerical relativity waveform better than 98.2%. For most cases the fitting factor can even be better than 99%.
## I Introduction
Tens of compact binary coalescence (CBC) events have been detected by LIGO and VIRGO [1; 2; 3; 4; 5; 6]. One possible channel for the formation of merging binaries is isolated evolution in the field. Another possible channel is dynamical interaction in dense stellar environments such as globular clusters or galactic nuclei. Currently it is not clear through which channel the detected binaries were formed. Binaries formed in the field can radiate away their eccentricity, whereas dynamically formed binaries may still have significant residual eccentricity when their gravitational waves enter the LIGO-Virgo band. One may therefore infer the formation channel of a binary by measuring its eccentricity. Consequently, more and more attention has recently been paid to eccentricity detection [7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30].
In order to estimate the eccentricity, an accurate waveform model for eccentric binary systems is needed. Most waveform models for eccentric binaries are based on the post-Newtonian approximation and are consequently valid only for the inspiral part [31; 32; 33; 34; 35; 36; 37; 38]. Waveform models covering the whole inspiral-merger-ringdown process for binary black holes (BBHs) have recently appeared [39]. One is the Eccentric, Nonspinning, Inspiral, Gaussian-process Merger Approximated waveform model (ENIGMA) [40; 41; 42], and the other three are based on the effective-one-body (EOB) framework, including the Effective-One-Body Numerical-Relativity waveform model for Spin-aligned binary black holes along Eccentric orbit (SEOBNRE) [43; 44; 45; 46; 47], the extended TEOBiResumS_SM model [48; 49; 26] and SEOBNRv4EHM [50; 51]. Among all of these waveform models for eccentric binaries, only SEOBNRE and the extended TEOBiResumS_SM model can treat spinning black holes. Such a complete waveform model is important for treating parameter degeneracy [28], especially between the black hole spin and the orbital eccentricity.
The SEOBNRE waveform model is based on the Effective-One-Body (EOB) theoretical framework and Numerical-Relativity (NR) simulations. Buonanno and Damour first proposed the idea of the effective one body method for binary black holes in general relativity [52]. Later, Buonanno, Pan and others [53] were the first to combine the effective one body method with numerical relativity results to obtain the effective one body numerical relativity (EOBNR) model for binary black hole coalescence. Aiming at faithful waveform templates for gravitational wave detection, SEOBNRv1 [54], SEOBNRv2 [55], SEOBNRv3 [56] and SEOBNRv4 [57] were consequently constructed. As an alternative to the SEOBNR series, Nagar, Bernuzzi and others developed the TEOBResumS models [58]. Recently, EOBNR models have also been developed to describe the waveforms of binary neutron stars [59; 60; 61]. Even for gravitational wave memory, EOBNR models work quite well [62].
In [43] we extended SEOBNRv1 to the SEOBNRE model, which can also describe eccentric binary black hole coalescence [46]. Previous studies indicate that the SEOBNRE waveform model works quite well [44; 45; 46]. But as noted by the authors of [26], the SEOBNRv1 model is outdated now. Consequently, SEOBNRv1 and SEOBNRE do not accurately cover high spins or mass ratios up to 10. Compared to SEOBNRv1, SEOBNRv4 admits much higher accuracy. SEOBNRv4 satisfies the accuracy requirement of current gravitational wave detectors and has been widely used in LIGO and Virgo data analysis. In the current work we upgrade our previous SEOBNRE model to make it consistent with the waveform model SEOBNRv4. So for clarity, we call our previous model SEOBNREv1 and the current one SEOBNREv4.
Throughout this paper we use units \(c=G=1\). This paper is arranged as follows. We describe the construction of the SEOBNREv4 model in Sec. II. Significantly different points between SEOBNREv1 and SEOBNREv4 are pointed out there. Along with the construction process, special attention is paid to the consistency check between SEOBNRv4 and SEOBNREv4 for quasicircular binary systems. For 2000 binary black hole systems with mass ratio in \([1,10]\) and spin in \((-1,1)\), the fitting factor between SEOBNRv4 and SEOBNREv4 is greater than 99.98%. In Sec. III we then validate SEOBNREv4 against all the spin-aligned binary black hole simulations included in the SXS catalog [63]. Finally we give a discussion and a summary in the last section.
## II Upgrade SeOBNREv1 model to SEOBNREv4 model
In [43] we constructed a full inspiral-merger-ringdown waveform model for binary black hole systems along eccentric orbits based on the EOBNR framework. Our construction method is independent of the adiabatic approximation, so there is no direct eccentricity concept involved in our model. Instead, only the initial eccentricity enters our SEOBNRE model as a parameter. This is the same as in numerical relativity simulations.
Here we upgrade our previous SEOBNRE model (referred to as SEOBNREv1 in the current paper for clarity) to the SEOBNREv4 version, which is consistent with the LIGO template SEOBNRv4 [57], the most widely used EOBNR waveform model so far. The basic idea of constructing SEOBNREv4 is to combine the EOB dynamics with an adjusted waveform formula that describes an eccentric binary.
The SEOBNREv4 model consists of two major parts: one is the waveform formula, which depends on the trajectory of the effective one body; the other is the dynamics, which determines that trajectory. Because the dynamics is divided into a conservative part and a dissipative part, the EOB dynamics also depends on the waveform formula.
Regarding the waveform formula, we append an extra part \(h^{(\text{PNE})}\) (cf. Eq. (52) of [43]) to the existing waveform formula of the SEOBNRv4 model [57]. This extra waveform part describes the adjustment due to the eccentric motion of the binary black hole. Regarding the dynamics part of the SEOBNREv4 model, we adopt the conservative dynamics of SEOBNRv4 directly, while constructing the dissipative part from the adjusted waveform according to the relations (54)-(60) of [43].
The nonquasi-circular correction (NQC) term \(N_{\ell m}\) of the waveform formula is crucial to the quasi-circular waveform part \(h^{(C)}\). The treatment of this term in SEOBNRv4 is different from that in SEOBNRv1. SEOBNRv1 assumes
\[N_{\ell m}=\Big[1+\frac{\tilde{p}_{r}^{2}}{(r\Omega)^{2}}\Big(a_{1}^{h_{\ell m}}+\frac{a_{2}^{h_{\ell m}}}{r}+\frac{a_{3}^{h_{\ell m}}+a_{3S}^{h_{\ell m}}}{r^{3/2}}+\frac{a_{4}^{h_{\ell m}}}{r^{2}}+\frac{a_{5}^{h_{\ell m}}}{r^{5/2}}\Big)\Big]\exp\Big[i\Big(\frac{\tilde{p}_{r}}{r\Omega}b_{1}^{h_{\ell m}}+\frac{\tilde{p}_{r}^{2}}{r\Omega}\big(b_{2}^{h_{\ell m}}+\frac{b_{3}^{h_{\ell m}}}{r^{1/2}}+\frac{b_{4}^{h_{\ell m}}}{r}\big)\Big)\Big]. \tag{1}\]
And the involved parameters \(a_{1}^{h_{\ell m}}\), \(a_{2}^{h_{\ell m}}\), \(a_{3}^{h_{\ell m}}\), \(a_{3S}^{h_{\ell m}}\), \(a_{4}^{h_{\ell m}}\), \(a_{5}^{h_{\ell m}}\), \(b_{1}^{h_{\ell m}}\), \(b_{2}^{h_{\ell m}}\), \(b_{3}^{h_{\ell m}}\) and \(b_{4}^{h_{\ell m}}\) have been determined as functions of binary black holes' mass ratio and spins when SEOBNRv1 was calibrated to numerical relativity. Consequently our SEOBNREv1 inherits these parameters directly.
In contrast, SEOBNRv4 assumes
\[N_{\ell m} =[1+\frac{\tilde{p}_{r}^{2}}{(r\Omega)^{2}}(a_{1}^{h_{\ell m}}+ \frac{a_{2}^{h_{\ell m}}}{r}+\frac{a_{3}^{h_{\ell m}}}{r^{3/2}})]\] \[\times\exp[i\frac{\tilde{p}_{r}}{r\Omega}(b_{1}^{h_{\ell m}}+b_{2 }^{h_{\ell m}}\tilde{p}_{r})]. \tag{2}\]
In the implementation of SEOBNRv4, the quantities \(|h_{\ell m}|\), \(\frac{d|h_{\ell m}|}{dt}\), \(\frac{d^{2}|h_{\ell m}|}{dt^{2}}\), \(\frac{d\arg(h_{\ell m})}{dt}\) and \(\frac{d^{2}\arg(h_{\ell m})}{dt^{2}}\) at \(t_{\text{match}}\) are given as functions of the binary black holes' mass ratio and spins, determined when SEOBNRv4 was calibrated to numerical relativity [57]. Here \(\arg(h_{\ell m})\) means the phase of \(h_{\ell m}\). When SEOBNRv4 generates a waveform for specific binary black hole parameters, the NQC parameters \(a_{1}^{h_{\ell m}}\), \(a_{2}^{h_{\ell m}}\), \(a_{3}^{h_{\ell m}}\), \(b_{1}^{h_{\ell m}}\) and \(b_{2}^{h_{\ell m}}\) are solved for by requiring the resulting \(|h_{\ell m}|\), \(\frac{d|h_{\ell m}|}{dt}\), \(\frac{d^{2}|h_{\ell m}|}{dt^{2}}\), \(\frac{d\arg(h_{\ell m})}{dt}\) and \(\frac{d^{2}\arg(h_{\ell m})}{dt^{2}}\) at \(t_{\text{match}}\) to equal the ones determined by numerical relativity. For an eccentric waveform we should not require these quantities to equal the ones determined through quasi-circular numerical relativity simulations. Consequently our SEOBNREv4 first generates a corresponding quasi-circular waveform to determine the NQC parameters \(a_{1}^{h_{\ell m}}\), \(a_{2}^{h_{\ell m}}\), \(a_{3}^{h_{\ell m}}\), \(b_{1}^{h_{\ell m}}\) and \(b_{2}^{h_{\ell m}}\). Afterwards, we use these NQC parameters to generate the eccentric waveform. We caution that this step is quite important; otherwise the resulting waveform would be completely different.
For strongly eccentric orbits, the 'nonquasi-circular' effect has already been accounted for by our \(h^{(\text{PNE})}\) terms. In contrast, for quasi-circular orbits the \(h^{(\text{PNE})}\) terms are negligible. Considering this fact, we introduce an adjusting factor into the NQC terms. More explicitly, we set the following NQC for SEOBNREv4
\[N_{\ell m}=\left[1+N^{(E)}\frac{\tilde{p}_{r}^{2}}{(r\Omega)^{2}}\left(a_{1}^{h_{\ell m}}+\frac{a_{2}^{h_{\ell m}}}{r}+\frac{a_{3}^{h_{\ell m}}}{r^{3/2}}\right)\right]\times\exp\left[iN^{(E)}\frac{\tilde{p}_{r}}{r\Omega}\left(b_{1}^{h_{\ell m}}+b_{2}^{h_{\ell m}}\tilde{p}_{r}\right)\right], \tag{3}\] \[N^{(E)}=\left[1-\text{erf}\left(\frac{Mf_{0}}{0.002}\frac{e_{0}-0.2}{0.05}\right)\right]/2. \tag{4}\]
Here erf means the error function. When \(e_{0}\to 0\), \(N^{(E)}\) goes to 1 and the original NQC is recovered. When \(e_{0}\) is slightly bigger than 0.2 at the reference frequency \(Mf_{0}=0.002\), the NQC terms disappear.
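As a sanity check of this limiting behaviour, a minimal Python sketch of the suppression factor \(N^{(E)}\) of Eq. (4) might look as follows; the function name is illustrative and does not correspond to the actual SEOBNREv4 code.

```python
import numpy as np
from scipy.special import erf

def nqc_suppression_factor(e0, Mf0=0.002):
    """Eccentricity-dependent factor N^(E) of Eq. (4)."""
    return 0.5 * (1.0 - erf((Mf0 / 0.002) * (e0 - 0.2) / 0.05))

# limiting behaviour: ~1 for e0 -> 0, ~0 once e0 exceeds 0.2 at Mf0 = 0.002
for e0 in (0.0, 0.1, 0.2, 0.3, 0.6):
    print(e0, nqc_suppression_factor(e0))
```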
For the initial state setup of the SEOBNREv4 effective-one-body dynamics, we first specify a reference frequency \(f_{0}\) of the gravitational wave and the related orbital eccentricity \(e_{0}\). Then we calculate the shifted frequency according to the Kepler relation
\[f_{0}^{\prime}=\frac{f_{0}}{(1-e_{0})^{2}}. \tag{5}\]
Plugging the \(f_{0}^{\prime}\) into equations
\[\frac{\partial H}{\partial r}=0, \tag{6}\] \[\frac{\partial H}{\partial p_{\phi}}=\pi f_{0}^{\prime} \tag{7}\]
we solve for \(r_{0}\) and \(p_{\phi_{0}}\) of the corresponding circular orbit. Afterwards we adjust \(r_{0}\) as
\[r_{0}^{\prime}=\frac{r_{0}}{1+e_{0}} \tag{8}\]
to obtain the desired elliptic orbit, and then use \(r_{0}^{\prime}\) in the initial setup of the SEOBNREv4 effective-one-body dynamics. This operation replaces the relation (67) of [43].
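To illustrate the sequence of steps of Eqs. (5)-(8), here is a minimal Python sketch. It replaces the EOB Hamiltonian by a Newtonian point-mass Hamiltonian purely as a stand-in, so the numbers are not those of the actual SEOBNREv4 code; only the procedure (Kepler shift, circular-orbit solve, radial adjustment) is the same.

```python
import numpy as np

def eccentric_initial_conditions(f0, e0, M=1.0):
    """Sketch of the initial-state setup of Eqs. (5)-(8).

    A Newtonian point-mass Hamiltonian H = p_phi^2/(2 r^2) - M/r stands in for
    the EOB Hamiltonian; the dominant quadrupole mode is assumed, so the
    orbital angular frequency is pi times the GW frequency.
    """
    f0p = f0 / (1.0 - e0) ** 2              # Eq. (5): Kepler-shifted frequency
    omega = np.pi * f0p                      # orbital angular frequency
    r0 = (M / omega ** 2) ** (1.0 / 3.0)     # dH/dr = 0  =>  omega^2 r^3 = M
    p_phi0 = np.sqrt(M * r0)                 # circular-orbit angular momentum
    r0p = r0 / (1.0 + e0)                    # Eq. (8): adjusted initial radius
    return r0p, p_phi0

print(eccentric_initial_conditions(f0=0.002, e0=0.3))
```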
In the SEOBNRv4 code, the time evolution stops if the eccentric-orbit-like behavior \(\dot{p}_{r}>0\) happens near the time when the waveform is connected to the quasi-normal modes. Although this condition implies that the test particle has fallen inside the last stable orbit in most cases, genuine orbital eccentricity (in contrast to the correction from the nonquasi-circular effect) may make \(\dot{p}_{r}>0\) happen outside the last stable orbit, which results in a waveform generation failure. Consequently we change the stopping condition of the time evolution to require both that the test particle has fallen inside the last stable orbit and that \(\dot{p}_{r}>0\). This adjustment makes our SEOBNREv4 code more flexible.
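A minimal sketch of this modified stopping criterion might read as follows (variable names are illustrative):

```python
def stop_evolution(r, pr_dot, r_lso):
    """Stop only when the particle is already inside the last stable orbit AND
    dp_r/dt > 0, so that genuine orbital eccentricity outside the LSO does not
    abort the waveform generation."""
    return (r < r_lso) and (pr_dot > 0.0)
```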
As in the construction of SEOBNREv1, the first check is the consistency between our SEOBNREv4 and SEOBNRv4 for quasi-circular binary black hole systems. We have performed a large set of tests for spin-aligned binary black hole systems. If we plot the two waveforms together, we cannot distinguish them by eye. More quantitatively, we use the following fitting factor to describe this consistency
\[\mathrm{FF} \equiv\frac{\langle h_{1}|h_{2}\rangle}{\|h_{1}\|\cdot\|h_{2}\|}, \tag{9}\] \[\langle h_{1}|h_{2}\rangle =2\int\left(\tilde{h}_{1}\tilde{h}_{2}^{*}+\tilde{h}_{1}^{*} \tilde{h}_{2}\right)df,\] (10) \[\|h\| \equiv\sqrt{\langle h|h\rangle}, \tag{11}\]
where the tilde means the Fourier transformation and the "*" means taking the complex conjugate. We have tested 2000 spin-aligned binary black hole systems with randomly chosen symmetric mass ratio \(\eta\) in the range \([0.02,0.25]\) (corresponding to mass ratios roughly \(1<q<50\)) and randomly chosen dimensionless black hole spins \(\chi\) in the range \((-1,1)\). We plot the tested parameters in Fig. 1; roughly the whole parameter space is covered. For all of these tested cases, the fitting factor between SEOBNRv4 and SEOBNREv4 is bigger than 99.98%.
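For concreteness, a minimal Python sketch of the fitting factor of Eqs. (9)-(11) for two real time series could look as follows; maximisation over time and phase shifts, which a full matched-filter fitting factor would include, is omitted here.

```python
import numpy as np

def fitting_factor(h1, h2, dt):
    """White-noise fitting factor of Eqs. (9)-(11) for two real time series."""
    h1f = np.fft.rfft(h1) * dt
    h2f = np.fft.rfft(h2) * dt
    df = 1.0 / (len(h1) * dt)

    def inner(af, bf):                      # Eq. (10)
        return 2.0 * float(np.sum(af * np.conj(bf) + np.conj(af) * bf).real) * df

    return inner(h1f, h2f) / np.sqrt(inner(h1f, h1f) * inner(h2f, h2f))

# toy usage: two slightly detuned chirp-like signals
t = np.linspace(0.0, 1.0, 4096, endpoint=False)
h1 = np.sin(2 * np.pi * (30.0 * t + 20.0 * t ** 2))
h2 = np.sin(2 * np.pi * (30.5 * t + 20.0 * t ** 2))
print(fitting_factor(h1, h2, dt=t[1] - t[0]))
```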
In principle, Schott terms in the radiation reaction force and higher-multipole waveforms should be considered, as in recent works [51; 64] for BBHs along eccentric orbits. Interestingly, the current work indicates that the SEOBNRv4 Hamiltonian improves the waveform accuracy much more than Schott terms and higher multipoles. That is to say, in EOBNR-type waveform models a good Hamiltonian
Figure 1: Quasi-circular waveforms test for SEOBNREv4 against SEOBNRv4. We project the 3D parameter space of spinning, nonprecessing waveforms to the symmetric mass ratio \(\eta=\frac{m_{1}m_{2}}{(m_{1}+m_{2})^{2}}\) and the two dimensionless BH spin \(\chi_{1,2}\). The color represents \(\log_{10}(1-\mathrm{FF})\).
is more important than Schott terms and higher multipoles for waveform accuracy.
## III Validation of the SEOBNREv4 against numerical relativity waveforms
With respect to the advanced LIGO and advanced Virgo detectors, we define the inner product of two given waveforms \(h_{1}(t)\) and \(h_{2}(t)\) as
\[(h_{1}|h_{2})\equiv 4\Re\int_{f_{\rm min}}^{f_{\rm max}}\frac{\tilde{h}_{1}(f) \tilde{h}_{2}^{*}(f)}{S_{n}(f)}df, \tag{12}\]
where \(S_{n}(f)\) is the one-sided power spectral density of the detector's noise and \((f_{\rm min},f_{\rm max})\) corresponds to the detector frequency band. In the current paper we use the designed sensitivity of advanced LIGO [65], with \(f_{\rm min}=10\)Hz and \(f_{\rm max}=8192\)Hz.
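Building on the earlier fitting-factor sketch, the noise-weighted version of Eq. (12) only changes the inner product by the \(1/S_{n}(f)\) weighting and the band limits. A minimal sketch is given below, with a crude toy PSD standing in for the advanced LIGO design curve (it is not the actual design sensitivity).

```python
import numpy as np

def noise_weighted_match(h1, h2, dt, psd, f_min=10.0, f_max=8192.0):
    """Normalised match built from the inner product of Eq. (12)."""
    freqs = np.fft.rfftfreq(len(h1), d=dt)
    h1f, h2f = np.fft.rfft(h1) * dt, np.fft.rfft(h2) * dt
    band = (freqs >= f_min) & (freqs <= f_max)
    df = freqs[1] - freqs[0]

    def inner(af, bf):                      # real part of the overlap, Eq. (12)
        return 4.0 * float(np.real(np.sum(af[band] * np.conj(bf[band]) / psd(freqs[band])))) * df

    return inner(h1f, h2f) / np.sqrt(inner(h1f, h1f) * inner(h2f, h2f))

def toy_psd(f):
    # toy power-law PSD, NOT the advanced LIGO design sensitivity
    return 1e-46 * (1.0 + (30.0 / f) ** 4 + (f / 300.0) ** 2)

t = np.linspace(0.0, 4.0, 4 * 4096, endpoint=False)
h = np.sin(2 * np.pi * 100.0 * t)
print(noise_weighted_match(h, 0.9 * h, dt=t[1] - t[0], psd=toy_psd))
```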
### Small eccentricity cases
There are 300 small-eccentricity cases in the SXS catalog [63]. The eccentricities of these simulations are less than 0.01 when the numerical simulation starts. The parameters, including the mass ratio \(q\) and spins \(\chi\), of these 300 cases are shown in Fig. 2. We set the reference frequency \(f_{0}\) to the frequency at which the numerical simulation starts and the initial eccentricity \(e_{0}\) to the value provided by the numerical relativity estimation.
The SEOBNREv1 model breaks down for some cases with large spin; there are 56 such failed cases. Among the remaining 244 cases, the case with the smallest fitting factor is 0292, with FF 31.8%. There are 65 cases among the 244 for which SEOBNREv1 gives \(\mathrm{FF}<99\%\), and 23 of these 65 cases have \(\mathrm{FF}<90\%\).
In contrast, the SEOBNREv4 model never breaks down and works quite well for all 300 cases. There are only 12 cases with \(\mathrm{FF}<99\%\) among all 300 for SEOBNREv4. Even the smallest fitting factor among these 300 cases reaches 98.3%, corresponding to case 1426, which has eccentricity 0.0003326 when the numerical simulation starts.
Comparing the right panel to the left panel of Fig. 3, we can see that SEOBNREv4 improves considerably over SEOBNREv1. The fitting factors shown in the right panel of Fig. 2 of [57] are always bigger than 99% because the above-mentioned 12 cases are absent there; these 12 cases were not yet available when SEOBNRv4 was constructed [57]. For comparison, we calculate these 12 cases with the SEOBNRv4 waveform model in Fig. 4. We can see that the behavior of SEOBNREv4 is roughly the same as that of SEOBNRv4 when treating them as quasi-circular cases.
The comparison results between SEOBNREv1 and SEOBNREv4 for the 300 cases are listed in Table 1. From these results we can see that SEOBNREv1 fits better than SEOBNREv4 for some slowly spinning binary black holes (\(\chi_{1,2}<0.5\)), but the difference is always less than 1%. In other words, up to 1%, SEOBNREv4 behaves better than SEOBNREv1 for these small-eccentricity cases.
### Large eccentricity cases
Now we move on to the cases with significant eccentricities, for which the eccentricity at the start of the numerical simulation is bigger than 0.02 or even too big to be estimated by the numerical relativity method. There are 35 significantly eccentric cases in the SXS catalog [63]. As we and others have realized before [46; 26; 43], the initial eccentricity estimated by numerical relativity is not very meaningful. Consequently, we follow the same waveform-comparison procedure as in [46; 43] to search for the corresponding parameter \(e_{0}\). The eccentricities determined at the reference frequency \(Mf_{0}=0.002\) by SEOBNREv1 and SEOBNREv4 are consistent with each other. Note that \(M\) is the total mass of the binary black hole, and \(f_{0}\approx 40\frac{10\mathrm{M}_{\odot}}{M}\)Hz. The difference in the estimated eccentricity between SEOBNREv1 and SEOBNREv4 is smaller than 0.05. The determined eccentricities fall in the range \(e_{0}\lesssim 0.6\). The corresponding procedure in [26] is a little different: there, both \(f_{0}\) and \(e_{0}\) are adjusted to align the waveforms between numerical relativity and the extended TEOBiResumS_SM waveform.
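A minimal sketch of this eccentricity search could look as follows; `model_waveform_generator` and `match_func` are placeholders for the SEOBNRE waveform call and for a fitting-factor routine such as the one sketched earlier, and only \(e_{0}\) is varied while \(f_{0}\) is held fixed, as described above.

```python
import numpy as np

def estimate_e0(nr_waveform, model_waveform_generator, match_func, e_grid=None):
    """Grid search over the reference eccentricity maximising the match."""
    if e_grid is None:
        e_grid = np.linspace(0.0, 0.7, 71)
    matches = [match_func(nr_waveform, model_waveform_generator(e0)) for e0 in e_grid]
    best = int(np.argmax(matches))
    return e_grid[best], matches[best]
```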
The validation results are listed in Table 2 and plotted in Fig. 5. For SEOBNREv1 there are 3 failed cases and 7 cases with \(\mathrm{FF}<99\%\); among them, two cases have \(\mathrm{FF}<98\%\) and the worst case has \(\mathrm{FF}<95\%\). SEOBNREv4 works well for all 35 cases: there are 6 cases with \(\mathrm{FF}<99\%\), but only 3 cases with \(\mathrm{FF}<98.8\%\). SXS:BBH:1370 is the worst case, which still has a fitting factor of 98.8%. When the total mass is bigger than \(100\mathrm{M}_{\odot}\), the fitting factor of SXS:BBH:1370 can also be larger than 99%.
Corresponding to the time-domain waveform example shown in [26], we plot the waveform comparison between SEOBNREv4 and the numerical relativity result in Fig. 6 for the significantly eccentric waveform without spin, SXS:BBH:1369. Our fitting factor is 98.8%, while [26] reports 98.6%. Again we note that here we adjust only \(e_{0}\), which differs from the procedure in [26] where both \(e_{0}\) and \(f_{0}\) are adjusted. Regarding the spinning eccentric waveforms, we obtain ([26] obtains) 99.7% (99.0%) for SXS:BBH:89, 99.6% (98.5%) for SXS:BBH:321, 99.6% (98.8%) for SXS:BBH:322, 99.5% (98.4%) for SXS:BBH:323, 98.7% (97.8%) for SXS:BBH:324, 98.9% (99.4%) for SXS:BBH:1136, 99.2% (96.8%) for SXS:BBH:1149, 99.4% (99.8%) for SXS:BBH:1169.
Combining the results of Figs. 3 and 5, we can see that our SEOBNREv4 can always recover the numerical relativity
\begin{table}
\begin{tabular}{c|c|c|c|c|c|c|c|c|c|c|c} ID & \(q\) & \(\chi_{1}\) & \(\chi_{2}\) & v1FF & v4FF & ID & \(q\) & \(\chi_{1}\) & \(\chi_{2}\) & v1FF & v4FF \\ \hline
1444 & 5.94 & -0.0630 & -0.7589 & 0.99459368 & 0.99091162 & 1445 & 4.67 & -0.4971 & 0.8000 & 0.98810982 & 0.99456634 \\
1446 & 3.15 & -0.8000 & 0.7770 & FAIL & 0.99854387 & 1447 & 3.16 & 0.7398 & 0.8000 & FAIL & 0.99869873 \\
1448 & 6.94 & -0.4816 & 0.5248 & 0.99266244 & 0.99732545 & 1449 & 4.19 & -0.8000 & -0.3445 & FAIL & 0.99338723 \\
1450 & 4.07 & -0.2836 & -0.8000 & 0.99344190 & 0.99843940 & 1451 & 4.06 & 0.3133 & -0.8000 & 0.97510875 & 0.99109158 \\
1452 & 3.64 & 0.8000 & -0.4266 & FAIL & 0.99728541 & 1453 & 2.35 & 0.8000 & -0.7845 & 0.87309598 & 0.99757038 \\
1454 & 2.45 & -0.8000 & -0.7340 & FAIL & 0.99283404 & 1455 & 8.00 & -0.3978 & 0.0014 & 0.99535512 & 0.99818015 \\
1456 & 3.00 & 0.7449 & 0.6970 & FAIL & 0.99821791 & & & & & & \\
\end{tabular}
\end{table}
Table 1: Comparing the fitting factors of SEOBNREv1 (v1FF) and SEOBNREv4 (v4FF) against the SXS simulations with small eccentricity. ‘FAIL’ marks cases for which SEOBNREv1 breaks down.
waveforms with FF \(>98.2\%\) for the parameter range \(1\leq q\lesssim 10,\ -0.9\lesssim\chi_{1,2}\lesssim 0.995\) and \(0\leq e_{0}\lesssim 0.6\) at the reference frequency \(Mf_{0}=0.002\) (equivalently \(f_{0}\approx 40\frac{10\mathrm{M}_{\odot}}{M}\mathrm{Hz}\)).
## IV SEOBNREv4 as a waveform model for supermassive binary black holes
In this section we make a preliminary assessment of SEOBNREv4 as a waveform model for supermassive binary black holes. We consider LISA [66; 67], Taiji [68] and Tianqin [69; 70] as example space-based detectors. We do not use realistic response functions as in [71]; instead we use the sky-averaged sensitivity [72] for the estimation. Moreover, the confusion noise is also ignored in the current estimation.
Specifically we use the following approximated sensitivity for space based gravitational wave detectors (Eq. (13) of [72])
\[S_{n}(f) =\frac{10}{3L^{2}}\left(P_{\mathrm{OMS}}+2(1+\cos^{2}(f/f_{*})) \frac{P_{\mathrm{acc}}}{(2\pi f)^{4}}\right)\times\] \[\left(1+\frac{6}{10}\left(\frac{f}{f_{*}}\right)^{2}\right), \tag{13}\] \[f_{*} =c/(2\pi L). \tag{14}\]
\begin{table}
\begin{tabular}{c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c} \hline ID & \(q\) & \(\chi_{1}\) & \(\chi_{2}\) & v1FF & \(e_{\mathrm{v1}}\) & v4FF & \(e_{\mathrm{v4}}\) & ID & \(q\) & \(\chi_{1}\) & \(\chi_{2}\) & v1FF & \(e_{\mathrm{v1}}\) & v4FF & \(e_{\mathrm{v4}}\) \\ \hline
\end{tabular}
\end{table}
Table 2: Validating the SEOBNRE models against the SXS simulations with significant eccentricity. The notation convention is the same as that of Tab. 1. In addition, \(e_{\mathrm{v1}}\) and \(e_{\mathrm{v4}}\) are the eccentricities determined at the reference frequency \(Mf_{0}=0.002\) by SEOBNREv1 and SEOBNREv4, respectively. ‘-’ for \(e_{\mathrm{v1}}\) means not available.
Figure 3: Validating the SEOBNRE waveform models against SXS simulations with small eccentricity. The left panel is for SEOBNREv1 and the right panel is for SEOBNREv4. The legend shows the IDs of the SXS simulations. There are 300 lines in each panel; only the 5 lines with the smallest fitting factors are marked with a legend in each panel.
For LISA [72] we have
\[P_{\rm OMS} =(1.5\times 10^{-11}{\rm m})^{2}{\rm Hz}^{-1}, \tag{15}\] \[P_{\rm acc} =(3\times 10^{-15}{\rm ms}^{-2})^{2}\left(1+\left(\frac{4\times 1 0^{-4}{\rm Hz}}{f}\right)^{2}\right){\rm Hz}^{-1},\] (16) \[L =2.5\times 10^{9}{\rm m}. \tag{17}\]
For Taiji [73] we have
\[P_{\rm OMS} =(8\times 10^{-11}{\rm m})^{2}{\rm Hz}^{-1}, \tag{18}\] \[P_{\rm acc} =(3\times 10^{-15}{\rm ms}^{-2})^{2}\left(1+\left(\frac{4\times 1 0^{-4}{\rm Hz}}{f}\right)^{2}\right){\rm Hz}^{-1},\] (19) \[L =3\times 10^{9}{\rm m}. \tag{20}\]
For Tianqin we have [69]
\[P_{\rm OMS} =(1\times 10^{-12}{\rm m})^{2}{\rm Hz}^{-1}, \tag{21}\] \[P_{\rm acc} =(1\times 10^{-15}{\rm ms}^{-2})^{2}\left(1+\left(\frac{1\times 1 0^{-4}{\rm Hz}}{f}\right)^{2}\right){\rm Hz}^{-1},\] (22) \[L =\sqrt{3}\times 10^{8}{\rm m}. \tag{23}\]
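A minimal Python sketch of the sky-averaged sensitivity of Eqs. (13)-(14), filled in with the parameter sets of Eqs. (15)-(23), might look as follows; it is used here only for the kind of rough estimation described below and is not an official detector curve.

```python
import numpy as np

C_LIGHT = 299792458.0  # speed of light in m/s

def sky_averaged_psd(f, P_oms, P_acc_amp, f_knee, L):
    """Sky-averaged sensitivity of Eqs. (13)-(14)."""
    f = np.asarray(f, dtype=float)
    f_star = C_LIGHT / (2.0 * np.pi * L)                      # Eq. (14)
    P_acc = P_acc_amp * (1.0 + (f_knee / f) ** 2)
    return (10.0 / (3.0 * L ** 2)
            * (P_oms + 2.0 * (1.0 + np.cos(f / f_star) ** 2) * P_acc / (2.0 * np.pi * f) ** 4)
            * (1.0 + 0.6 * (f / f_star) ** 2))

# parameter sets of Eqs. (15)-(23)
DETECTORS = {
    "LISA":    dict(P_oms=(1.5e-11) ** 2, P_acc_amp=(3e-15) ** 2, f_knee=4e-4, L=2.5e9),
    "Taiji":   dict(P_oms=(8.0e-11) ** 2, P_acc_amp=(3e-15) ** 2, f_knee=4e-4, L=3.0e9),
    "Tianqin": dict(P_oms=(1.0e-12) ** 2, P_acc_amp=(1e-15) ** 2, f_knee=1e-4, L=np.sqrt(3) * 1e8),
}

for name, pars in DETECTORS.items():
    print(name, sky_averaged_psd(1e-3, **pars))               # value at 1 mHz
```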
Since the Taiji and Tianqin projects are developing very fast, our approximated sensitivity curves may differ from the ones shown in other publications (for example [74; 75]). But the above approximation is enough for the estimation needed in the current work.
The mass range of supermassive black hole binaries for space-based gravitational wave detectors is \((10^{3},10^{8}){\rm M}_{\odot}\). The fitting factors between SEOBNRv4 and the SXS simulations listed in Table 1, with respect to LISA, Taiji and Tianqin respectively, are plotted in Fig. 8. We can see that the fitting factors are very bad for BBHs with total mass larger than \(10^{7}{\rm M}_{\odot}\). This is because when the total mass is larger than \(10^{7}{\rm M}_{\odot}\) the late ringdown-stage waveform dominates, and for the late ringdown stage both the EOBNR model and the NR results have problems. Regarding EOBNR, due to round-off error during the calculation the very small ringdown waveform is dominated by numerical errors; for NR, more complicated numerical errors ruin the late ringdown-stage waveform. We plot the related behavior in Fig. 9. If one restricts the mass range to \((4\times 10^{4},4\times 10^{6}){\rm M}_{\odot}\), \((5\times 10^{4},9\times 10^{6}){\rm M}_{\odot}\) and \((4\times 10^{3},1\times 10^{6}){\rm M}_{\odot}\) for LISA, Taiji and Tianqin respectively, the fitting factors can be larger than \(98\%\). Correspondingly we also check SEOBNREv4 for the mass range \((4\times 10^{4},4\times 10^{6}){\rm M}_{\odot}\). In Fig. 10 we find that the consistency between SEOBNREv4 waveforms and the NR ones is always better than \(97\%\); for most cases the fitting factor is bigger than \(99\%\).
## V Summary and Conclusion
Gravitational wave detection of compact binary mergers with the advanced LIGO and advanced Virgo detectors is now a common occurrence. Although no orbital eccentricity has been found yet in the observed binary merger events [27; 38; 44; 76], binary black holes (BBHs) may form in dense stellar environments and are then expected to enter the frequency band of ground-based GW detectors with non-negligible eccentricity [77]. This possibility has inspired the study and modeling of eccentric BBH systems in recent years, because an accurate waveform model can enhance the detection of eccentric BBHs.
In [43; 46] we constructed an accurate waveform model, SEOBNRE, for mildly spinning, spin-aligned binary black holes along eccentric orbits. Our SEOBNRE waveform model can accurately recover numerical relativity waveforms when the black hole spin is less than 0.6. In the current paper we upgrade the previous SEOBNRE model to make it consistent with the most widely used EOBNR waveform model, SEOBNRv4. Correspondingly, we call our previous SEOBNRE model SEOBNREv1 and the newly upgraded one SEOBNREv4. Compared to SEOBNREv1, the new model SEOBNREv4 is more robust in generating waveforms and is more accurate; the fitting factor improves from \(94.8\%\) to \(98.2\%\) (c.f. Fig. 5). Most importantly, the newly upgraded SEOBNREv4 can work for highly spinning binary black holes. For example, SEOBNREv1 does not work for a binary black hole with individual spins \(0.7313\) and \(-0.85\), while SEOBNREv4 can reach a fitting factor of \(99.9\%\) against the numerical relativity waveform (c.f. Table 1 and Fig. 3).
In summary, our newly upgraded waveform model
Figure 4: Comparing the SXS simulation waveforms to SEOBNRv4 waveforms for the 12 cases with fitting factor smaller than \(99\%\) shown in the right panel of the Fig. 3. The plot convention is the same to that of the Fig. 3.
Figure 5: Validating the SEOBNRE waveform models against eccentric SXS simulations. The left panel is for SEOBNREv1 and the right panel is for SEOBNREv4. The legend shows the IDs of the SXS simulations. There are 32 lines in the left panel and 35 lines in the right panel; the 3 missing cases in the left panel correspond to the failed cases for SEOBNREv1. Only the 5 lines with the smallest fitting factors are marked with a legend in each panel.
Figure 6: Waveform comparison between NR and the SEOBNREv4 model for SXS:BBH:1369. The corresponding fitting factor is \(\mathrm{FF}=98.8\%\). The initial eccentricity at the reference frequency \(Mf_{0}\approx 0.002\) is \(e_{0}=0.59\) estimated by the SEOBNREv4 model.
SEOBNREv4 is an accurate waveform model for highly spinning, nonprecessing binary black holes along eccentric orbits. The tested parameter range includes \(1\leq q\lesssim 10\), \(-0.9\lesssim\chi_{1,2}\lesssim 0.995\) and \(0\leq e_{0}\lesssim 0.6\) at the reference frequency \(Mf_{0}=0.002\) (equivalently \(f_{0}\approx 40\frac{10\text{M}_{\odot}}{M}\text{Hz}\)).
Hopefully our SEOBNREv4 model can help to find eccentric signals in the existing and future LIGO/VIRGO/KAGRA data. Such detections will not only improve our understanding of the binary black hole formation mechanism [78; 79; 80; 45] but also assist in testing gravity theory [81; 82; 83].
In the future it will be interesting to consider tidal corrections to the SEOBNREv4 waveform model as in [59; 60; 61]. Such an extension may be applied to the eccentricity estimation for GW170817- and GW190425-like events [76]. Relating to space-based gravitational wave detectors [84], it is not only interesting but also important to extend our SEOBNREv4 to general mass ratios. If one can use the same waveform model to cover both nearly equal-mass binary black holes and extreme mass-ratio binaries, which is essentially the starting point of the effective-one-body theory [85; 86; 87; 88; 89], we can believe that such a model is also valid for intermediate mass-ratio binary black hole systems [90; 91; 92; 93; 94; 95; 96; 97; 98]. Consequently, such a waveform model will be robust for space-based gravitational wave detectors.
###### Acknowledgements.
This work was supported in part by the National Key Research and Development Program of China Grant
Figure 8: Fitting factors between SEOBNRv4 waveforms and NR waveforms for cases listed in Table. 1. Three panels correspond to LISA, Taiji and Tianqin respectively.
Figure 7: Approximated sensitivity curves for LISA, Taiji and Tianqin, together with approximated waveforms for two equal-mass BBHs with source-frame total masses \(2\times 10^{8}\text{M}_{\odot}\) and \(2\times 10^{3}\text{M}_{\odot}\) located at \(z=3\).
No. 2021YFC2203001 and in part by the NSFC (No. 11920101003 and No. 12021003). Z. Cao was supported by CAS Project for Young Scientists in Basic Research YSBR-006 and by "the Interdiscipline Research Funds of Beijing Normal University".
## References
* Abbott et al. (2019) B. P. Abbott et al. (LIGO Scientific, Virgo), Phys. Rev. **X9**, 031040 (2019), eprint 1811.12907.
* Nitz et al. (2019) A. H. Nitz, C. Capano, A. B. Nielsen, S. Reyes, R. White, D. A. Brown, and B. Krishnan, The Astrophysical Journal **872**, 195 (2019), URL [https://doi.org/10.3847%2F1538-4357%2Fab0108](https://doi.org/10.3847%2F1538-4357%2Fab0108).
* Nitz et al. (2019) A. H. Nitz, A. B. Nielsen, and C. D. Capano, The Astrophysical Journal **876**, L4 (2019), URL [https://doi.org/10.3847%2F2041-821%2Fab18a1](https://doi.org/10.3847%2F2041-821%2Fab18a1).
* Magee et al. (2019) R. Magee, H. Fong, S. Caudill, C. Messick, K. Cannon, P. Godwin, C. Hanna, S. Kapadia, D. Meacher, S. R. Mohite, et al., The Astrophysical Journal **878**, L17 (2019), URL [https://doi.org/10.3847%2F2041-821%2Fab20cf](https://doi.org/10.3847%2F2041-821%2Fab20cf).
* Zackay et al. (2019) B. Zackay, T. Venumadhav, L. Dai, J. Roulet, and M. Zaldarriaga, Phys. Rev. D **100**, 023007 (2019), URL [https://link.aps.org/doi/10.1103/PhysRevD.100.023007](https://link.aps.org/doi/10.1103/PhysRevD.100.023007).
* Venumadhav et al. (2019) T. Venumadhav, B. Zackay, J. Roulet, L. Dai, and M. Zaldarriaga, Phys. Rev. D **100**, 023011 (2019), URL [https://link.aps.org/doi/10.1103/PhysRevD.100.023011](https://link.aps.org/doi/10.1103/PhysRevD.100.023011).
* Mikoczi et al. (2012) B. Mikoczi, B. Kocsis, P. Forgacs, and M. Vasith, Phys. Rev. D **86**, 104027 (2012), URL [https://link.aps.org/doi/10.1103/PhysRevD.86.104027](https://link.aps.org/doi/10.1103/PhysRevD.86.104027).
* Huerta and Brown (2013) E. A. Huerta and D. A. Brown, Phys. Rev. D **87**, 127501 (2013), URL [https://link.aps.org/doi/10.1103/PhysRevD.87.127501](https://link.aps.org/doi/10.1103/PhysRevD.87.127501).
* Loutrel et al. (2014) N. Loutrel, N. Yunes, and F. Pretorius, Phys. Rev. D **90**, 104010 (2014), URL [https://link.aps.org/doi/10.1103/PhysRevD.90.104010](https://link.aps.org/doi/10.1103/PhysRevD.90.104010).
* Coughlin et al. (2015) M. Coughlin, P. Meyers, E. Thrane, J. Luo, and N. Christensen, Phys. Rev. D **91**, 063004 (2015), URL [https://link.aps.org/doi/10.1103/PhysRevD.91.063004](https://link.aps.org/doi/10.1103/PhysRevD.91.063004).
* Sun et al. (2015) B. Sun, Z. Cao, Y. Wang, and H.-C. Yeh, Phys. Rev. D **92**, 044034 (2015), URL [http://link.aps.org/doi/10.1103/PhysRevD.92.044034](http://link.aps.org/doi/10.1103/PhysRevD.92.044034).
* Ma et al. (2017) S. Ma, Z. Cao, C.-Y. Lin, H.-P. Pan, and H.-J. Yo, Phys. Rev. D **96**, 084046 (2017), URL [https://link.aps.org/doi/10.1103/PhysRevD.96.084046](https://link.aps.org/doi/10.1103/PhysRevD.96.084046).
* Tanay et al. (2019) S. Tanay, A. Klein, E. Berti, and A. Nishizawa, Phys. Rev. D **100**, 064006 (2019), URL [https://link.aps.org/doi/10.1103/PhysRevD.100.064006](https://link.aps.org/doi/10.1103/PhysRevD.100.064006).
* Loutrel et al. (2020) N. Loutrel, Classical and Quantum Gravity **37**, 075008 (2020), URL [https://doi.org/10.1088%2F1361-638%2F2Fab745f](https://doi.org/10.1088%2F1361-638%2F2Fab745f).
* Loutrel and Yunes (2017) N. Loutrel and N. Yunes, Class. Quant. Grav. **34**, 044003 (2017), eprint 1702.01818.
* Loutrel et al. (2019) N. Loutrel, S. Liechersbach, N. Yunes, and N. Cornish, Class. Quant. Grav. **36**, 01 (2019), eprint 1801.09009.
* Moore et al. (2018) B. Moore, T. Robson, N. Loutrel, and N. Yunes, Class. Quant. Grav. **35**, 235006 (2018), eprint 1807.07163.
* Loutrel et al. (2019) N. Loutrel, S. Liebersbach, N. Yunes, and N. Cornish, Class. Quant. Grav. **36**, 025004 (2019), eprint 1810.03521.
* Gondan and Kocsis (2019) L. Gondan and B. Kocsis, Astrophys. J. **871**, 178 (2019), eprint 1809.00672.
* Gondan et al. (2018) L. Gondan, B. Kocsis, P. Raffai, and Z. Frei, Astrophys. J. **860**, 5 (2018), eprint 1711.09989.
* Hoang et al. (2018) B.-M. Hoang, S. Naoz, B. Kocsis, F. A. Rasio, and F. Dosopoulou, Astrophys. J. **856**, 140 (2018), eprint 1706.09896.
* Gondan et al. (2018) L. Gondan, B. Kocsis, P. Raffai, and Z. Frei, Astrophys. J. **855**, 34 (2018), eprint 1705.10781.
* Moore and Yunes (2019) B. Moore and N. Yunes, Classical and Quantum Gravity **36**, 185003 (2019), URL [https://doi.org/10.1088%2F1361-6382%2Fab3778](https://doi.org/10.1088%2F1361-6382%2Fab3778).
* Moore and Yunes (2019) B. Moore and N. Yunes (2019), eprint 1910.01680.
* Chiaramello and Nagar (2020) D. Chiaramello and A. Nagar, Phys. Rev. D **101**, 101501 (2020), URL [https://link.aps.org/doi/10.1103/PhysRevD.101.101501](https://link.aps.org/doi/10.1103/PhysRevD.101.101501).
* Nitz et al. (2020) A. H. Nitz, A. Lenon, and D. A. Brown, The Astrophysical Journal **890**, 1 (2020), URL [https://doi.org/10.3847%2F1538-4357%2Fab6611](https://doi.org/10.3847%2F1538-4357%2Fab6611).
* Lenon et al. (2020) A. K. Lenon, A. H. Nitz, and D. A. Brown, arXiv e-prints arXiv:2005.14146 (2020), eprint 2005.14146.
* Ramos-Buades et al. (2020) A. Ramos-Buades, S. Husa, G. Pratten, H. Estelles, C. Garcia-Quiros, M. Mateu-Lucena, M. Colleoni, and R. Jaume, Phys. Rev. D **101**, 083015 (2020), URL [https://link.aps.org/doi/10.1103/PhysRevD.101.083015](https://link.aps.org/doi/10.1103/PhysRevD.101.083015).
* Ramos-Buades et al. (2019) A. Ramos-Buades, S. Tiwari, M. Haney, and S. Husa,
Figure 9: Ringdown stage quasi-circular waveforms comparison for SEOBNRv4 (marked with ‘EOB’) against numerical relativity (marked with ‘NR’). Here spinless BBH with mass ratio 6 is considered.
|
2304.04503
|
Head-tail Loss: A simple function for Oriented Object Detection and
Anchor-free models
|
This paper presents a new loss function for the prediction of oriented
bounding boxes, named head-tail-loss. The loss function consists in minimizing
the distance between the prediction and the annotation of two key points that
are representing the annotation of the object. The first point is the center
point and the second is the head of the object. However, for the second point,
the minimum distance between the prediction and either the head or tail of the
groundtruth is used. On this way, either prediction is valid (with the head
pointing to the tail or the tail pointing to the head). At the end the
importance is to detect the direction of the object but not its heading. The
new loss function has been evaluated on the DOTA and HRSC2016 datasets and has
shown potential for elongated objects such as ships and also for other types of
objects with different shapes.
|
Pau Gallés, Xi Chen
|
2023-04-10T10:46:12Z
|
http://arxiv.org/abs/2304.04503v1
|
# Head-tail Loss: A simple function for Oriented Object Detection and Anchor-free models
###### Abstract
This paper presents a new loss function for the prediction of oriented bounding boxes, named head-tail-loss. The loss function consists in minimizing the distance between the prediction and the annotation of two key points that represent the annotation of the object. The first point is the center point and the second is the head of the object. However, for the second point, the minimum distance between the prediction and either the head or tail of the groundtruth is used. In this way, either prediction is valid (with the head pointing to the tail or the tail pointing to the head). In the end, what matters is to detect the direction of the object, not its heading. The new loss function has been evaluated on the DOTA and HRSC2016 datasets and has shown potential for elongated objects such as ships and also for other types of objects with different shapes.
oriented bounding boxes object detection loss function head-tail-loss elongated objects ships DOTA dataset HRSC2016 dataset
## 1 Introduction
Object detection on satellite images is a rapidly growing field, with a wide range of applications such as monitoring land use, natural resource management, and disaster response. The ability to automatically detect and locate objects in satellite images can greatly improve the efficiency and accuracy of many tasks that rely on this data.
Object detection can be classified into two variants: detection using horizontal bounding boxes (HBBs) and detection using oriented bounding boxes (OBBs). HBBs are the traditional approach where objects are detected using rectangular boxes that align with the image axes. On the other hand, OBBs are oriented boxes that can align with the true orientation of the objects in the image. OBBs have been shown to be more accurate than HBBs, as they take into account the rotation of the objects, which is an important aspect in satellite images where objects may be oriented in any direction. Additionally, OBBs are less sensitive to small changes in object orientation, which can lead to more robust object detection. In recent years, several works [1, 2, 3] have proposed to use OBBs for object detection in satellite images. These methods have shown promising results and have demonstrated the advantages of using OBBs over HBBs. However, the use of OBBs also brings new challenges, such as the need for more complex models and the increased computational cost. Therefore, the development of efficient and accurate OBB-based object detection methods for satellite images is an important research area that has yet to be fully explored. One example of OBB-based detector is FCOSR which is an object detection model that utilizes a complex loss function to improve performance. Unlike traditional object detection models, such as Faster R-CNN [4], FCOSR uses a multi-task loss function that simultaneously optimizes for both classification and localization. This loss function is designed to handle rotated bounding boxes and is composed of several components, including the centerness-aware loss and the rotated IoU loss [5]. This complex loss function allows FCOSR to better handle rotated objects, leading to improved performance on datasets with a high degree of rotation.
Another important consideration in object detection on satellite images is the difference between anchor-based and anchor-free models. Anchor-based models rely on predefined anchor boxes, whereas anchor-free models do not use
anchor boxes and instead directly predict the object's bounding box coordinates. Anchor-free models have been shown to be more efficient and have fewer parameters than anchor-based models, but they may be less robust to variations in object scale. Furthermore, anchor-based models are more sensitive to the choice of anchor sizes and aspect ratios, while anchor-free models are more robust in this regard.
In addition, object detection models can be classified as single-stage or multistage. Single-stage models, such as YOLO [6] and FCOS [5], directly predict the object's bounding box coordinates and class scores in one pass. Multistage models, such as RetinaNet [7] and FPN [8], use a two-stage process where a region proposal network is used to generate potential object locations, which are then passed to a second stage for detection and classification. Multistage models tend to be more accurate than single-stage models, but they are also more computationally expensive.
In terms of performance evaluation, several metrics are commonly used for object detection on satellite images, such as precision, recall, Intersection over Union (IoU), mean average precision (mAP), and confusion matrix. Precision measures the proportion of true positive detections among all positive detections, recall measures the proportion of true positive detections among all actual objects, IoU measures the overlap between predicted and ground-truth bounding boxes, mAP is the mean of average precision across all classes and confusion matrix provides the count of true positives, true negatives, false positives, and false negatives. These metrics allow to evaluate the performance of the model by giving a comprehensive understanding of how well the model is detecting objects in the image.
Recently, several state-of-the-art loss functions for oriented object detection have been proposed and used in the literature. Some examples include the polar ray loss [9], which is a differentiable function based on the angle between the predicted bounding box and the ground-truth bounding box, and has shown promising results in object detection on satellite images. The Rotate IoU loss [10] is a differentiable function based on the intersection over union (IoU) between the predicted bounding box and the ground-truth bounding box. The Rotation-Invariant and Scale-Invariant Loss (RIS-Loss) [11] is a loss function based on the angle-sensitive IoU and the scale-sensitive IoU. Another example is the Centerness-Aware Scale-Adaptive Loss (CASA) [12], which is a loss function that considers both the centerness and the scale-adaptiveness of the detection results. Other loss functions include the Scale-Aware and Rotation-Aware Loss (SRA-Loss) [13], the Rotation-Invariant and Scale-Invariant Loss (RIS-Loss) [11], and the Rotate IoU-NMS loss [14].
In this paper, we propose a new loss function, named head-tail-loss, for the prediction of oriented bounding boxes in object detection on satellite images, specifically for elongated objects such as ships. Our loss function consists of minimizing the distance between the prediction and the annotation of two key points that represent the annotation of the object: the center point and one of the two extremities of the object, either the head or the tail. By using this method, either the prediction with the head pointing to the tail or the tail pointing to the head is valid, therefore allowing the model to detect the direction of the object, but not its heading. The head-tail-loss is simpler than the existing loss functions and in this paper, we show its feasibility and potential for improving the detection of elongated objects in satellite images.
## 2 Methodology
In this paper, we used two publicly available datasets for the task of object detection on satellite images: DOTA [15] and HRSC2016 [16]. In this section, we will describe the format and content of these datasets in detail.
DOTA, which stands for "Dataset for Object Detection in Aerial Images," is a large-scale dataset for object detection in aerial images. It contains over 2,800 images and 15 object categories, including plane, ship, storage tank, baseball diamond, tennis court, basketball court, ground track field, harbor, bridge, large vehicle, small vehicle, helicopter, roundabout, soccer-ball field, and swimming pool. Each image in the dataset is annotated with oriented bounding boxes and object categories. The images in the DOTA dataset were collected from Google Earth, and they have a resolution range of about 0.3m to 3m per pixel. The objects in the images vary greatly in scale and orientation, making the dataset challenging for object detection.
HRSC2016, which stands for "High-Resolution Ship Detection Challenge 2016," is a dataset specifically designed for ship detection in high-resolution satellite images. It contains over 1,500 images and one object category (ship) and each image in the dataset is annotated with oriented bounding boxes. The images in the HRSC2016 dataset have a resolution range from 2-m to 0.4-m and the size of images ranges from 300x300 to 1500x900 and most of them are larger than 1000x600. The objects in the images vary greatly in scale and orientation, making the dataset challenging for object detection.
Both DOTA and HRSC2016 datasets provide a challenging testbed for object detection in satellite images due to the large-scale variation and dense cluttered background of the objects in images. In addition, DOTA dataset provides a more diverse set of object classes and the HRSC2016 dataset provides lower-resolution images which allow evaluation of the performance of the model in different scenarios.
The new loss function, named head-tail-loss, is defined as follows:
\[L_{head-tail}=\frac{\left\|\boldsymbol{p}_{center}-\boldsymbol{g}_{center} \right\|^{2}+\min\left(\left\|\boldsymbol{p}_{head}-\boldsymbol{g}_{head} \right\|^{2},\left\|\boldsymbol{p}_{head}-\boldsymbol{g}_{tail}\right\|^{2} \right)}{S} \tag{1}\]
Where \(\boldsymbol{p}_{center}\), \(\boldsymbol{p}_{head}\) are the center and head predictions, respectively, \(\boldsymbol{g}_{center}\), \(\boldsymbol{g}_{head}\), \(\boldsymbol{g}_{tail}\) are the center, head and tail groundtruth, respectively, and \(S\) is the image size in pixels. This loss function minimizes the distance between the prediction and the annotation of two key points that represent the annotation of the object: the center point and one of the two extremities of the object, either the head or the tail. The minimum is taken only between the distances of the predicted head to the annotated head and of the predicted head to the annotated tail. The center-point distance is then added to this minimum, and the final result is divided by the image size so that the loss takes small values roughly in the range between 0 and 1. This allows the model to detect the direction of the object, but not its heading, and also makes the loss function independent of the image size.
As depicted in Figure 1, the head-tail loss function allows for predictions that point in either direction, as long as the direction of the object is accurately captured, regardless of the heading of the object. This allows the model to detect the direction of the object, but not its heading, and also makes the loss function independent of the image size.
Equation 1 does not take into account the width of the object. Initially, the idea was to set the width as a function of the height, specifically for the purpose of detecting ships and vessels. However, it was later decided to add an additional component to the equation that considers the absolute difference between the predicted and the actual width, divided by the size of the image. This modification broadens the applicability of the function to a wider range of object shapes.
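A minimal NumPy sketch of the head-tail loss of Eq. (1), including the optional width term described above, might look as follows; the function and argument names are illustrative and do not correspond to the authors' implementation.

```python
import numpy as np

def head_tail_loss(p_center, p_head, g_center, g_head, g_tail, img_size,
                   p_width=None, g_width=None):
    """Head-tail loss of Eq. (1) plus the optional width term."""
    p_center, p_head = np.asarray(p_center, float), np.asarray(p_head, float)
    g_center, g_head, g_tail = map(lambda a: np.asarray(a, float),
                                   (g_center, g_head, g_tail))
    center_term = np.sum((p_center - g_center) ** 2)
    head_term = min(np.sum((p_head - g_head) ** 2),
                    np.sum((p_head - g_tail) ** 2))
    loss = (center_term + head_term) / img_size
    if p_width is not None and g_width is not None:
        loss += abs(p_width - g_width) / img_size
    return loss

# the predicted "head" coincides with the annotated tail: still a valid
# prediction, so only the (zero) center error contributes
print(head_tail_loss(p_center=[50, 50], p_head=[50, 90],
                     g_center=[50, 50], g_head=[50, 10], g_tail=[50, 90],
                     img_size=128))
```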
## 3 Results
The performance of the different models trained on the HRSC2016 dataset is presented and analyzed in Table 1. The first model, \(FCOS+HT\), is the experiment using the FCOS model with the proposed head-tail loss function. The second model, FCOS+HTc, refers to the same model but keeping the centerness component of the FCOS model. The third model, \(FCOS\), is the standard FCOS model. As can be seen in the table, the proposed head-tail loss function with the centerness component removed (\(FCOS+HT\)) achieved similar performance in terms of mAP and AR compared to the standard FCOS model (\(FCOS\)), and a slight improvement in terms of mAP when the centerness component is kept (\(FCOS+HTc\)).
\begin{table}
\begin{tabular}{c c c} \hline \hline
**Model** & **mAP** & **AR** \\ \hline FCOS+HT & 0.659 & 0.778 \\ FCOS & 0.760 & 0.837 \\ FCOS+HTc & **0.778** & **0.857** \\ \hline \hline \end{tabular}
\end{table}
Table 1: Performance results of different models in terms of Mean Average Precision (mAP) and Average Recall (AR).
Figure 1: Conceptual example of the head-tail-loss function. The image shows two satellite photos of a ship side by side. On the left, the head-tail ground truth or annotation is similar to the prediction. In the right photo, the prediction points to the tail instead, but the prediction is equally valid. The head-tail loss function allows for predictions that point in either direction, as long as the direction of the object is accurately captured, regardless of the heading of the object. This allows the model to detect the direction of the object, but not its heading.
The models trained on HRSC2016 varied considerably depending on the defined random seed. Thus, the results are the average of several experiments1.
Footnote 1: The random seeds used were: \(129\), \(305\), \(531\), \(8\), \(98\), \(25\), \(727\)
The same notation is used for the models that were trained with the DOTA10 dataset. Additionally, the two-stage detectors roitrans [17] and fasterRCNN [4], as well as FCOSR [18], which is a single-stage anchor-free detector for rotated objects, were trained. The results for these alternative models are summarized in Table 2, and the results for FCOS with the head-tail loss and with the original loss are shown in Table 3.
These numbers are clearly worse than in the case of HRSC2016. This second dataset is more difficult and contains objects of different shapes. The head-tail loss function was designed for elongated objects, and this is one possible reason for the worse scores.
An interesting finding was that the centerness branch in the FCOS architecture appeared to be unnecessary when using the head-tail loss function, as it already incorporates the center point of the object. However, the results showed that maintaining the centerness branch in the implementation with the head-tail loss function resulted in improved performance in both experiments.
## 4 Conclusions
The new loss function, named head-tail-loss, has been evaluated on the DOTA and HRSC2016 datasets. The results indicated that the head-tail-loss performed better on the HRSC2016 dataset, which contains images of ships, than on the DOTA dataset. This is likely because the head-tail-loss is more suitable for elongated objects such as ships, as observed in the improved performance for the elongated object categories in DOTA. Additionally, an experiment was conducted to evaluate the performance of the head-tail-loss on objects with different shapes, such as storage tanks, by measuring the minimum distance between the predicted head and the annotated head, tail, left side, or right side. The results showed that this approach worked better for squared objects such as storage tanks.
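A minimal sketch of this four-key-point variant (names are illustrative) could be:

```python
import numpy as np

def head_term_four_points(p_head, g_head, g_tail, g_left, g_right):
    """Minimum squared distance from the predicted head to any of the four
    annotated key points, replacing the two-point minimum of Eq. (1)."""
    p = np.asarray(p_head, float)
    return min(np.sum((p - np.asarray(c, float)) ** 2)
               for c in (g_head, g_tail, g_left, g_right))
```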
However, the results in the second experiment appear to be worse than expected. One possible explanation is that the models used in this second experiment are more complex, due to the use of multiple stages and the sophistication of the loss function in the case of FCOSR. Additionally, the DOTA10 dataset is more challenging than HRSC2016, which may have also contributed to the worse performance. Therefore, it is important to keep in mind the characteristics of the dataset and the type of objects that the model will be used to detect when selecting a loss function and model architecture.
## Acknowledgments
The authors would like to acknowledge the valuable guidance provided by Dr. Xi Chen, a researcher in the field of Cartography and Geographical Information Systems. Dr. Chen, who is currently a professor at East China Normal University, is known for his expertise in image processing, machine learning, and remote sensing. His knowledge and insights have been instrumental in the development of this research. Additionally, the authors would like to thank the DOTA and HRSC2016 datasets for providing the data used in this study.
\begin{table}
\begin{tabular}{l c c c c c c c c c c c c c c} \hline \hline
**model** & **BC** & **BD** & **Bridge** & **GTF** & **HC** & **Harbor** & **LV** & **Plane** & **RA** & **SBF** & **SP** & **ST** & **SV** & **Ship** & **mAP** \\ \hline FCOSR & 0.840 & 0.798 & 0.485 & 0.707 & **0.648** & 0.726 & **0.776** & **0.895** & 0.652 & **0.878** & **0.751** & **0.863** & **0.811** & **0.891** & **0.754** \\ roitrans & **0.860** & **0.836** & **0.810** & **0.754** & 0.564 & **0.734** & 0.762 & 0.894 & 0.625 & 0.616 & 0.707 & 0.845 & 0.744 & 0.871 & 0.749 \\ fasterRCNN & 0.849 & 0.762 & 0.402 & 0.671 & 0.596 & 0.458 & 0.588 & 0.892 & **0.677** & 0.552 & 0.680 & 0.832 & 0.735 & 0.759 & 0.689 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Performance results of different object detection models on the DOTA dataset. The scores for each model are expressed as mean average precision (mAP) and average recall (AR) as expressed in the methodology section. The columns represent the different object categories.
\begin{table}
\begin{tabular}{l c c c c c c c c c c c c c c} \hline \hline
**model** & **BC** & **BD** & **Bridge** & **GTF** & **HC** & **Harbor** & **LV** & **Plane** & **RA** & **SBF** & **SP** & **ST** & **SV** & **Ship** & **TC** & **mAP** \\ \hline FCOS+HT & 0.500 & 0.648 & 0.220 & 0.527 & 0.259 & 0.300 & 0.352 & 0.796 & 0.544 & 0.316 & 0.571 & 0.674 & 0.472 & 0.522 & 0.897 & 0.506 \\ FCOS+HTc & 0.740 & 0.779 & 0.362 & 0.581 & 0.494 & 0.492 & 0.691 & 0.880 & 0.593 & **0.529** & 0.664 & 0.564 & 0.733 & 0.366 & 0.908 & 0.666 \\ FCOS & **0.807** & **0.789** & **0.448** & **0.589** & **0.506** & **0.610** & **0.739** & **0.885** & **0.650** & 0.502 & **0.693** & **0.830** & **0.778** & **0.844** & **0.909** & **0.795** \\ \hline \hline \end{tabular}
\end{table}
Table 3: Performance results on the DOTA10 dataset using the FCOS model with different variations. The scores are expressed as mean average precision (mAP) and average recall (AR) as explained in the methodology section.
|
2310.17386
|
A Challenge in Reweighting Data with Bilevel Optimization
|
In many scenarios, one uses a large training set to train a model with the
goal of performing well on a smaller testing set with a different distribution.
Learning a weight for each data point of the training set is an appealing
solution, as it ideally allows one to automatically learn the importance of
each training point for generalization on the testing set. This task is usually
formalized as a bilevel optimization problem. Classical bilevel solvers are
based on a warm-start strategy where both the parameters of the models and the
data weights are learned at the same time. We show that this joint dynamic may
lead to sub-optimal solutions, for which the final data weights are very
sparse. This finding illustrates the difficulty of data reweighting and offers
a clue as to why this method is rarely used in practice.
|
Anastasia Ivanova, Pierre Ablin
|
2023-10-26T13:33:26Z
|
http://arxiv.org/abs/2310.17386v1
|
# A Challenge in Reweighting Data with Bilevel Optimization
###### Abstract
In many scenarios, one uses a large training set to train a model with the goal of performing well on a smaller testing set with a different distribution. Learning a weight for each data point of the training set is an appealing solution, as it ideally allows one to automatically learn the importance of each training point for generalization on the testing set. This task is usually formalized as a bilevel optimization problem. Classical bilevel solvers are based on a warm-start strategy where both the parameters of the models and the data weights are learned at the same time. We show that this joint dynamic may lead to sub-optimal solutions, for which the final data weights are very sparse. This finding illustrates the difficulty of data reweighting and offers a clue as to why this method is rarely used in practice.
## 1 Introduction
In many practical learning scenarios, there is a discrepancy between the training and testing distribution. For instance, when training large language models, we may have access to a training set that contains many low-quality data points from different sources and want to train a model on this dataset to perform well on a testing set that contains a few high-quality points [7, 3, 18]. An appealing way to solve this problem is _data reweighting_[21, 23, 26], where one attributes one weight to each data point in the training set. The weight of a training sample should reflect how much this sample resembles the testing set and helps the model perform well on it. Figure 1 illustrates the general principle.
Learning the optimal weights can be cast as a bilevel optimization problem [9], where the optimal weights are such that training the model with these weights leads to the smallest test loss possible. The weights are usually constrained to sum to one, leading to an optimization problem on the simplex, which is usually solved with mirror descent [19]. Despite its promise of automatically learning
the importance of data points, data reweighting is still seldom used in practice. In this paper, we try to provide a possible explanation for this lack of adoption, showing, in short, that the underlying optimization problem is hard.
In a large-scale setting where fitting the model once is expensive, it is prohibitively costly to iteratively update the weights with a model fit at each iteration [20]. Hence, practitioners often resort to _warm-started_ bilevel optimization, where the parameters of the model and the weights evolve simultaneously [17, 14, 6].
This paper aims to provide a detailed analysis of the corresponding joint dynamics. Our main result indicates that warm starting with mirror descent leads to _sparse weights_: after training, only a few training points have a non-zero weight, which is detrimental to the generalization power of the corresponding parameters. Our results show a weakness of warm-started bilevel approaches for this problem and are a first step toward explaining the hardness of data-reweighting.
**Contributions and paper organization** In Sec. 2, we introduce **data reweighting as a bilevel problem** and explain how warm-started bilevel aims at solving the problem. In Sec. 3, we **study the corresponding dynamics**.
Figure 1: Data reweighting principle: test and train distributions are different. We aim to estimate a weight for each training sample that reflects its contribution to the model’s performance on the test set. For instance, if the test set only contains dog images, we want a large weight on dog images in the training set, a small weight on other animals which might help the model, and zero weight on irrelevant training images. These weights should be learned automatically during training.
We focus on two settings. When the parameters are updated at a much _greater_ pace than the weights, we formalize the intuition that this recovers the standard, non-warm-started, bilevel approach, which leads to satisfying solutions but takes a long time to converge. In the opposite setting, where the parameters are updated at a much _slower_ pace than the weights, where **we show that this leads to extremely sparse weights**, which in turn hinders the generalization of the model since the model is effectively trained with few samples. Finally, Sec. 4 gives numerical results illustrating the theory presented in the paper.
**Notation:** The vector containing all 1's is \(\mathbb{1}_{k}\in\mathbb{R}^{k}\). The simplex is \(\Delta_{n}=\{w\in\mathbb{R}^{n}_{+}|\sum_{i=1}^{n}w_{i}=1\}\). The multiplication of two vectors \(u,v\in\mathbb{R}^{n}\) is \(u\odot v\in\mathbb{R}^{n}\) of entries \((u_{i}v_{i})\). The set \(\{1,n\}\) is the set of integers \(1\) to \(n\). The support \(\operatorname{Supp}(w)\) of a vector \(w\in\mathbb{R}^{n}\) is the set of indices \(i\) in \(\{1,n\}\) such that \(w_{i}\neq 0\). Given a set of indices \(S=(s_{1},\ldots,s_{l})\subset\{1,n\}\) of size \(s\), the restriction to \(S\) of a vector \(w\in\mathbb{R}^{n}\) is the vector \(w\mid_{S}\in\mathbb{R}^{l}\) of entries \(w_{s_{j}}\) for \(j\in\{1,l\}\). The gradient (resp. Hessian) of a loss \(\ell(\theta;x)\) is \(\nabla_{\theta}\ell(\theta;x)\) (resp. \(\nabla^{2}_{\theta\theta}\ell(\theta;x)\)). The range of a matrix is the span of its column.
## 2 Data reweighting as a bilevel problem
We consider a train dataset \(X=[x_{1},\ldots,x_{n}]\), and a testing dataset \(X^{\prime}=[x^{\prime}_{1},\ldots,x^{\prime}_{m}]\). Our goal is to train a machine learning model on the train set that has good performance on the testing set. Letting \(\theta\in\mathbb{R}^{p}\) the parameters of the model, we let \(\ell(\theta;x)\) the loss corresponding to the train set, and \(\ell^{\prime}(\theta;x^{\prime})\) the loss of the test set. The classical _empirical risk minimization_ cost function is \(G(\theta)=\frac{1}{n}\sum_{i=1}^{n}\ell(\theta;x_{i})\) for the train set and \(F(\theta):=\frac{1}{m}\sum_{j=1}^{m}\ell^{\prime}(\theta;x^{\prime}_{j})\) for the test set, where each sample has the same weight.
The goal of data reweighting is to give a different weight from \(1/n\) to each training data in order to get a solution \(\theta\) that leads to a good performance on the test set, i.e., leads to a small \(F(\theta)\). To this end, we introduce
\[G(\theta,w):=\sum_{i=1}^{n}w_{i}\ell(\theta;x_{i}) \tag{1}\]
defined for \(w\) belonging to the _simplex_\(\Delta_{n}=\{w\in\mathbb{R}^{n}_{+}|\ \sum_{i=1}^{n}w_{i}=1\}\). Ideally, we would want \(w_{i}\) to be large when the training sample \(x_{i}\) helps the model's performance on the testing set, and conversely, \(w_{i}\) should be small when \(x_{i}\) does not help the model on the testing set. We make the following blanket assumption which makes the bilevel problem well-defined:
**Assumption 1** (Strong convexity).: _The loss functions \(\ell,\ell^{\prime}\) are differentiable. Additionally, the function \(\theta\mapsto\ell(\theta;x)\) is \(\mu-\) strongly convex with \(\mu>0\) for all \(x\), i.e. for all \(x,\theta\), we have \(\lambda_{\min}(\nabla^{2}_{\theta\theta}\ell(\theta;x))\geq\mu\)._
For instance, such assumption is verified if \(\ell(\theta;x)\) is of the form \(\operatorname{fit}(\theta;x)+\mu\|\theta\|^{2}/2\), where the fit function is a data-fit term that is convex (like a least-squares or logistic loss) and \(\mu\|\theta\|^{2}/2\) is a regularizer.
Changing the weights \(w\) modifies the cost function \(G(\theta,w)\), hence its minimizer is now a function of \(w\), denoted \(\theta^{*}(w)=\arg\min_{\theta}G(\theta,w)\). Note that the strong-convexity assumption 1 implies that the function \(G\) itself is \(\mu-\)strongly convex, guaranteeing the existence and uniqueness of \(\theta^{*}(w)\). Data reweighting is formalized as the _bilevel problem_
\[\min_{w\in\Delta_{n}}h(w):=F(\theta^{*}(w)), \tag{2}\]
where \(\theta^{*}\) depends implicitly on \(w\) through \(G\).
### Importance sampling as the hope behind data reweighting?
One can take a distributional point of view on the problem to gain better insight into the sought-after solutions. We let \(\mu=\sum_{i=1}^{n}\delta_{x_{i}}\) denote the empirical training distribution and \(\nu=\sum_{j=1}^{m}\delta_{x_{j}^{\prime}}\) the empirical test distribution. Instead of having one weight per data sample, we now have a weighting function \(\omega:\mathbb{R}^{d}\to\mathbb{R}_{+}\) such that \(\int\omega(x)d\mu(x)=1\). The bilevel problem becomes
\[\begin{split}&\min_{\omega}\int\ell^{\prime}(\theta^{*}(\omega);x)d \nu(x)\text{ subject to}\\ &\theta^{*}(\omega)\in\arg\min_{\theta}\int\ell(\theta;x)\omega( x)d\mu(x)\end{split} \tag{3}\]
In practice, the distributions \(\mu\) and \(\nu\) are sums of Diracs; we can wonder what happens instead if they are continuous.
**Proposition 1**.: _If \(\nu\) is absolutely continuous w.r.t. \(\mu\), and \(\ell=\ell^{\prime}\), a global solution to the bilevel problem (3) is \(\omega^{*}(x)=\frac{d\nu}{d\mu}(x)\)._
This ratio \(d\nu/d\mu\) is precisely the one that _importance sampling_ techniques [24] try to estimate: in this specific case, bilevel optimization recovers importance sampling. We now turn to the resolution of the bilevel problem.
### Solving the bilevel problem
The bilevel optimization problem corresponding to data-reweighting is the _single-level_ optimization of the non-convex function \(h\) over the simplex \(\Delta_{n}\) (2), for which _mirror descent_[19, 2] is an algorithm of choice. Starting from an initial guess \(w^{0}\in\Delta_{n}\), mirror descent iterates \(\tilde{w}^{k+1}=w^{k}\odot\exp(-\eta\nabla h(w^{k}))\) and \(w^{k+1}=\frac{\tilde{w}^{k+1}}{\|\tilde{w}^{k+1}\|_{1}}\), where \(\odot\) is the element-wise multiplication.
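For concreteness, this update is a two-line operation; the following NumPy sketch is a direct transcription of the formulas above and makes no further assumptions.

```python
import numpy as np

def mirror_step(w, grad, eta):
    """One entropic mirror-descent step on the simplex: multiplicative update, then l1 renormalization."""
    w_tilde = w * np.exp(-eta * grad)
    return w_tilde / w_tilde.sum()
```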
This method involves the gradient of the value function \(h\), which is obtained with the chain rule [22, 8]
\[\nabla h(w)=\left[\frac{\partial\theta^{*}}{\partial w}\right]^{T}\nabla F( \theta^{*}(w)). \tag{4}\]
The implicit function theorem then gives, thanks to the invertibility of \(\nabla^{2}_{\theta\theta}G\) guaranteed by Assumption 1:
\[\frac{\partial\theta^{*}}{\partial w}=-\left[\nabla^{2}_{\theta\theta}G(\theta^{*} (w),w)\right]^{-1}\nabla^{2}_{\theta w}G(\theta^{*}(w),w). \tag{5}\]
In the data reweighting problem, \(G\) has a special structure, which gives \(\nabla^{2}_{\theta\theta}G(\theta,w)=\sum_{i=1}^{n}w_{i}\nabla^{2}_{\theta\theta}\ell(\theta;x_{i})\) and \(\nabla^{2}_{\theta w}G(\theta,w)=\left[\nabla_{\theta}\ell(\theta;x_{1}),\ldots,\nabla_{\theta}\ell(\theta;x_{n})\right]\in\mathbb{R}^{p\times n}\). We finally obtain \(\nabla h(w)=\Psi(\theta^{*}(w),w)\in\mathbb{R}^{n}\), where the \(i\)-th coordinate of \(\Psi\) is given by
\[\Psi(\theta,w)_{i}=-\left\langle\nabla\ell(\theta;x_{i}),\left[\nabla^{2}_{\theta\theta}G(\theta,w)\right]^{-1}\nabla F(\theta)\right\rangle. \tag{6}\]
This hyper-gradient has an intuitive structure: letting \(\langle u,v\rangle_{\theta}\) denote the scalar product defined over \(\mathbb{R}^{p}\) by \(\langle u,v\rangle_{\theta}=\langle u,\left[\nabla^{2}_{\theta\theta}G(\theta,w)\right]^{-1}v\rangle\), the hyper-gradient coordinate for sample \(i\) is simply the opposite of the alignment, measured with this new scalar product, between the gradient of sample \(i\) and the gradient of the outer function \(F\). Therefore, this gradient increases the weights of samples for which \(\nabla\ell(\theta;x_{i})\) aligns with the outer gradient. The mirror descent algorithm to solve the bilevel problem (2) is described in Algorithm 1.
```
Input : Initial point \(w^{0}\in\Delta_{n}\), step-size \(\eta\), number of iterations \(N\).
for \(k=1\) to \(N\) do
    Compute \(\theta^{*}(w^{k})\)
    Compute \(\nabla h(w^{k})=\Psi(\theta^{*}(w^{k}),w^{k})\)
    Update \(w^{k+1}=\dfrac{w^{k}\odot\exp(-\eta\nabla h(w^{k}))}{\|w^{k}\odot\exp(-\eta\nabla h(w^{k}))\|_{1}}\)
end for
Return : \(w^{N}\).
```
**Algorithm 1** Exact Bilevel Algorithm
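As an illustration, the following NumPy sketch evaluates the hyper-gradient map \(\Psi\) of Eq. (6) for the weighted ridge-regression losses used later in the toy experiment of Sec. 2.4; the data arrays `D`, `y`, `D_test`, `y_test` and the regularization strength `mu` are assumptions introduced for the example.

```python
import numpy as np

def hypergradient(theta, w, D, y, D_test, y_test, mu=0.1):
    """Psi(theta, w) of Eq. (6) for l(theta; x) = 0.5*(<d, theta> - y)^2 + 0.5*mu*||theta||^2,
    with w assumed to lie on the simplex."""
    p = D.shape[1]
    residuals = D @ theta - y
    inner_grads = D.T * residuals + mu * theta[:, None]        # column i is grad_theta l(theta; x_i)
    hessian = D.T @ (w[:, None] * D) + mu * np.eye(p)          # sum_i w_i d_i d_i^T + mu * I
    grad_F = D_test.T @ (D_test @ theta - y_test) / len(y_test)
    return -inner_grads.T @ np.linalg.solve(hessian, grad_F)   # Psi_i = -<grad_i, H^{-1} grad F>
```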
Unfortunately, Algorithm 1 is purely theoretical, as it is not implementable: computing \(\theta^{*}(w^{k})\) amounts to finding the exact minimizer of \(G\), which is generally impossible. We can instead approximate \(\theta^{*}(w^{k})\) by the output of many iterations of an optimization algorithm, but the cost then becomes prohibitive: each iteration requires the approximate resolution of an optimization problem.
### The practical solution: warm-started bilevel optimization
A sound idea for a scalable algorithm is to instead use an iterative method to minimize \(G\) and have \(\theta\) and \(w\) evolve simultaneously. In this case, we have two sets of variables, \(\theta^{k}\) and \(w^{k}\). The parameters \(\theta^{k}\) are updated using standard algorithms like (stochastic) gradient descent, and then, to update \(w^{k}\), we approximate the hyper-gradient by \(\nabla h(w^{k})\simeq\Psi(\theta^{k},w^{k})\) in Eq. (6). For simplicity, we use gradient descent with step-size \(\rho\) to update \(\theta^{k}\), and each iteration consists of one
update of \(\theta^{k}\) followed by one update of \(w^{k}\). The full procedure is described in Algorithm 2. As a side note, this algorithm can still be expensive to implement because i) computing \(\Psi\) requires solving a linear system and ii) in a large-scale setting when \(n\) and \(m\) are large, computing the inner and outer gradients and the Hessian scales linearly with the number of samples. Several works try to fix these issues by proposing bilevel algorithms that do not have to form a Hessian or invert a system and that are stochastic, i.e., only use one sample at each iteration to progress [10, 27, 11, 16]. For our theoretical analysis, we do not consider such modifications and focus on the bare-bones case of Algorithm 2, which slightly departs from practice, but is already insightful.
```
Input : Initial points \(\theta^{0}\in\mathbb{R}^{p}\) and \(w^{0}\in\Delta_{n}\), step-sizes \(\eta\) and \(\rho\), number of iterations \(N\).
for \(k=1\) to \(N\) do
    Compute \(\Psi(\theta^{k},w^{k})\)
    Update \(\theta^{k+1}=\theta^{k}-\rho\nabla G(\theta^{k},w^{k})\)
    Update \(w^{k+1}=\dfrac{w^{k}\odot\exp(-\eta\Psi(\theta^{k},w^{k}))}{\|w^{k}\odot\exp(-\eta\Psi(\theta^{k},w^{k}))\|_{1}}\)
end for
Return : \(w^{N}\).
```
**Algorithm 2** Warm Started Bilevel Algorithm
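For reference, a bare-bones NumPy transcription of Algorithm 2 in the same ridge-regression setting, reusing the `hypergradient` sketch above; the step sizes, iteration count, and initialization are arbitrary illustrative choices.

```python
import numpy as np

def warm_started_bilevel(D, y, D_test, y_test, rho=1e-2, eta=1e-1, n_iters=1000, mu=0.1):
    """Joint updates: one inner gradient step on theta, then one mirror step on w."""
    n, p = D.shape
    theta, w = np.zeros(p), np.ones(n) / n
    for _ in range(n_iters):
        psi = hypergradient(theta, w, D, y, D_test, y_test, mu)   # approximate hyper-gradient Psi(theta^k, w^k)
        grad_G = D.T @ (w * (D @ theta - y)) + mu * theta          # nabla_theta G(theta^k, w^k)
        theta = theta - rho * grad_G
        w = w * np.exp(-eta * psi)
        w = w / w.sum()
    return theta, w
```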
### A toy experiment
We describe a toy experiment illustrating the practical issues of Algorithm 2. We consider a 2-d regression setting, where the train data \([x_{1},\ldots,x_{n}]\) comes from a mixture of two Gaussians, and each cluster has a different regression parameter. Formally, given two centroids \(\mu_{1},\mu_{2}\in\mathbb{R}^{2}\) and parameters \(\hat{\theta}_{1},\hat{\theta}_{2}\in\mathbb{R}^{2}\), each training sample \(x_{i}=[d_{i},y_{i}]\) is generated by
\[z_{i}\sim\text{Bern}(1,2),\;\;d_{i}\sim\mathcal{N}(\mu_{z_{i}},I),\;\;y_{i}\sim\mathcal{N}(\langle d_{i},\hat{\theta}_{z_{i}}\rangle,\sigma^{2}),\]
where \(\text{Bern}(1,2)\) is the Bernoulli law with probability \(1/2\) over \(\{1,2\}\). Here, \(z_{i}\) represents the random cluster associated to \(x_{i}\). Meanwhile, the test data \(x_{j}^{\prime}\) is only drawn from the first cluster with parameter \(\hat{\theta}=\hat{\theta}_{1}\). The corresponding points \(d\) are plotted in the two leftmost figures in Figure 2 in a color reflecting the target's value \(y\). The model is linear with a least-squares loss, meaning that the train and test loss are
\[\ell(\theta;x)=\ell^{\prime}(\theta;x)=\frac{1}{2}\left(\langle d,\theta\rangle-y\right)^{2}\text{ with }x=[d,y]. \tag{7}\]
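A minimal NumPy sketch of this data-generating process; the centroids, per-cluster parameters, noise level, and seed below are illustrative values rather than the ones used in the figures.

```python
import numpy as np

def make_mixture_data(n, centroids, thetas, sigma=0.1, seed=0):
    """Sample n pairs x_i = [d_i, y_i] from the two-cluster linear model described above."""
    rng = np.random.default_rng(seed)
    z = rng.integers(0, 2, size=n)                      # cluster label of each sample
    d = centroids[z] + rng.standard_normal((n, 2))      # d_i ~ N(mu_{z_i}, I)
    y = np.einsum("ij,ij->i", d, thetas[z]) + sigma * rng.standard_normal(n)
    return d, y, z

# Example: two well-separated clusters with different regression parameters.
centroids = np.array([[-3.0, 0.0], [3.0, 0.0]])
thetas = np.array([[1.0, 2.0], [-2.0, 1.0]])
D, Y, Z = make_mixture_data(500, centroids, thetas)
```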
The four rightmost figures in Figure 2 display four possible reweighting solutions. Taking uniform weights \(w_{i}=1/n\), the train loss is minimized by finding a compromise between the two clusters' parameters, and the model recovers neither \(\hat{\theta}_{1}\) nor \(\hat{\theta}_{2}\). This leads to a high training and testing loss, and \(\|\hat{\theta}-\theta^{*}\|\) is large. Intuitively, in the data reweighting framework, we want the weights to focus only on the first cluster and to discard points from the second cluster, which is irrelevant to the test problem. Additionally, in a noiseless setting (\(\sigma=0\)), setting \(w_{i}\) proportional to \(\delta_{z_{i}=1}\) leads to a global minimum of the bilevel problem, since \(\theta^{*}(w)=\hat{\theta}\) and hence \(h(w)=F(\hat{\theta})=0\). In this ideal scenario, the model is fit only on the correct cluster, leading to a small error \(\|\hat{\theta}-\theta^{*}\|\), due only to the label noise.
We then apply Algorithm 1 to this problem. Here, since the inner problem is quadratic, we have the closed form \(\theta^{*}(w)=\left(\sum_{i=1}^{n}w_{i}d_{i}d_{i}^{T}\right)^{-1}\sum_{i=1}^{n}w_{i}y_{i}d_{i}\), making Algorithm 1 implementable. It yields a reasonable solution where most of the weights in the second cluster are close to \(0\) and the weights in the first cluster are nearly uniform. Thus, the error \(\|\hat{\theta}-\theta^{*}\|\) is small.
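In code, the exact oracle then amounts to a single linear solve; a minimal sketch, assuming the weighted covariance matrix is invertible:

```python
import numpy as np

def theta_star(w, D, y):
    """Closed-form inner solution of the weighted least-squares problem of Eq. (7)."""
    H = D.T @ (w[:, None] * D)      # sum_i w_i d_i d_i^T
    b = D.T @ (w * y)               # sum_i w_i y_i d_i
    return np.linalg.solve(H, b)
```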
We finish with the warm-started Algorithm 2. We observe that the algorithm outputs sparse weights: only a few weights are non-zero. This, in turn, leads to a higher estimation error \(\|\hat{\theta}-\theta^{*}\|\). Importantly, this effect worsens when taking a higher learning rate \(\eta\) for the update of \(w\), and is mitigated by taking small learning rates \(\eta\) that lead to slow convergence to better solutions. This illustrates a problem with the warm-started method: it follows a trajectory distinct from that of the exact bilevel method and recovers sub-optimal solutions. In
Figure 2: **Data reweighting for a linear regression problem. The first two figures show the problem setup. The train set consists of points from two clusters. Points color corresponds to the value of target \(y=\langle\hat{\theta}_{i},d\rangle+\text{noise}\), where each cluster has its own linear model \(\hat{\theta}_{1}\) and \(\hat{\theta}_{2}\). The test set consists of points only from the first cluster with \(\hat{\theta}=\hat{\theta}_{1}\). The last four figures show four reweighting solutions. The size of the points in the picture corresponds to the weight value \(w_{i}\) for the corresponding sample. We give a small non-zero radius to zero weights for visualization purposes. Uniform weights: same weights for all data points regardless of the cluster. Since the full dataset is not separable with a linear model, the linear model finds a compromise that leads to a poor solution. Optimal weights: equal weights for points from the first cluster, zero weights for the other points. The wrong cluster is fully discarded, leading to optimal estimation of the parameters. Exact bilevel: output of Algorithm 1, gets to a solution close to the optimal weights, and the cluster is correctly identified with almost uniform weights. Warm started bilevel: output of Algorithm 2, gets to a very sparse solution which does not generalize well.**
the following section, we develop a mathematical framework that explains this phenomenon.
## 3 A Dynamical System View on the Warm-Started Bilevel Algorithm
The study of iterative methods like Algorithm 2 is notoriously hard; we focus on the dynamical system obtained by letting the step sizes \(\eta\), \(\rho\) go to \(0\) at the same speed. Algorithm 2 can be seen as the discretization of the Ordinary Differential Equation (ODE)
\[\begin{cases}\dot{\theta}&=-\alpha\nabla G(\theta,w)\\ \dot{w}&=-\beta P(w)\Psi(\theta,w)\end{cases}\, \tag{8}\]
where \(\alpha>0\) controls the speed of convergence of \(\theta\), \(\beta>0\) controls the speed of \(w\), and \(P(w)=\operatorname{diag}(w)-ww^{T}\) is a preconditioning matrix: it is the inverse Hessian metric \(\operatorname{diag}(w)\) associated with the entropy, applied to the projector \(I_{n}-\mathbb{1}_{n}w^{T}\) onto the tangent space, so that we recover a Riemannian gradient flow [12]. This ODE can be recovered from Algorithm 2 in the following sense:
**Proposition 2**.: _Let \((\theta(t),w(t))\) denote the solution of the ODE (8), and \((\theta^{k},w^{k})\) the iterates of Algorithm 2. Assume that the steps are such that \((\rho,\eta)=\tau(\alpha,\beta)\) with \(\tau>0\). Then, for any \(T>0\), we have \(\lim_{\tau\to 0}\sup_{t\in[0,T]}\|(\theta(t),w(t))-(\theta^{\lfloor t/\tau \rfloor},w^{\lfloor t/\tau\rfloor})\|=0\)._
When \(\alpha\gg\beta\), the dynamics in \(\theta\) are much faster, and we recover the classical bilevel approach:
**Theorem 1**.: _Let \(w^{*}(t)\) denote the solution of the ODE \(\dot{w}=-P(w)\nabla h(w)\), and \((\theta^{\alpha,\beta},w^{\alpha,\beta})\) the solution of (8). Then, for all \(\alpha\) and all time horizons \(T\), we have_
\[\lim_{\beta\to 0}\sup_{t\in[0,T]}\|w^{\alpha,\beta}(t/\beta)-w^{*}(t)\|=0\]
\[\lim_{\beta\to 0}\|\theta^{\alpha,\beta}(t)-\theta^{*}(w(t))\|\leq\|\theta_{0}- \theta^{*}(w_{0})\|e^{-\mu\alpha t}\]
The proof of this result uses classical tools from the bilevel literature [10]. This result highlights that, as expected, if \(\alpha\gg\beta\), the variable \(\theta\) tracks \(\theta^{*}(w)\), which in turn means that \(w\) follows the direction of the true gradient of \(h\): we recover the Exact Bilevel dynamics, which correspond to the gradient flow of \(h\). The rest of this section is devoted to understanding what happens in the other regime, where \(\beta\gg\alpha\) and the dynamics in \(w\) are much faster than those in \(\theta\).
### Mirror descent flows on the simplex
Before understanding the joint dynamics, we consider the dynamics of the warm-started bilevel ODE (8) when only \(w\) evolves, i.e., when \(\alpha=0\) and \(\theta(t)=\theta^{0}\).
For a vector field \(\phi:\mathbb{R}^{n}\to\mathbb{R}^{n}\), we consider mirror descent updates given by \(\tilde{w}^{k+1}=w^{k}\odot\exp(-\eta\phi(w^{k}))\) and \(w^{k+1}=\frac{\tilde{w}^{k+1}}{\|\tilde{w}^{k+1}\|_{1}}\). Letting the step size \(\eta\) go to \(0\), we recover the _mirror descent flow_[13], that is, the ODE
\[\dot{w}=-\Phi(w)\text{ with }\Phi(w)=P(w)\phi(w). \tag{9}\]
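Numerically, this flow can be integrated with an off-the-shelf ODE solver; the sketch below forms \(P(w)\phi(w)\) without building the matrix \(P(w)\) explicitly, and the time horizon is an arbitrary choice.

```python
import numpy as np
from scipy.integrate import solve_ivp

def mirror_flow(phi, w0, t_max=50.0):
    """Integrate dw/dt = -P(w) phi(w) with P(w) = diag(w) - w w^T."""
    def rhs(_, w):
        f = phi(w)
        return -(w * f - np.dot(w, f) * w)     # P(w) phi(w), matrix-free
    return solve_ivp(rhs, (0.0, t_max), w0, dense_output=True)
```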
When the vector field \(\phi\) is the gradient of a function \(f:\mathbb{R}^{n}\to\mathbb{R}\), we recover mirror descent to minimize \(f\) on the simplex, with standard guarantees. In our warm-started formulation, the field \(\phi(w)=\Psi(\theta_{0},w)\) does not correspond to a gradient, since its Jacobian may not be symmetric. We analyze the stationary points of this ODE and their stability. We recall that the support \(\operatorname{Supp}(w)\) of a vector \(w\) is the set of indices in \(\{1,n\}\) such that \(w_{i}\neq 0\), and we let \(l\) denote its cardinality.
**Proposition 3**.: _The stationary points of the mirror descent flow (9) are the \(w\) such that \(\phi(w)\mid_{\operatorname{Supp}(w)}\) is proportional to \(\mathbb{1}_{l}\)._
All vectors of the form \(w=(0,\dots,1,\dots,0)\) fulfill this condition, but there might be other solutions with a larger support. To study the stability of these points, we turn to the Jacobian of \(\Phi\). Letting for short \(\phi=\phi(w)\in\mathbb{R}^{n}\) and \(J=D\phi(w)\in\mathbb{R}^{n\times n}\), we find
\[D\Phi(w)=\operatorname{diag}(\phi)+\operatorname{diag}(w)J-\langle w,\phi \rangle I_{n}-w(\phi^{T}+w^{T}J)\]
Because of the simplex constraint, we only care about this Jacobian in directions that lie in the tangent space \(T_{\Delta}=\{\delta\in\mathbb{R}^{n}|\ \sum_{i=1}^{n}\delta_{i}=0\}\). Since \(\Phi\) makes the flow stay in the simplex, we have that \(D\Phi(w)[\delta]\in T_{\Delta}\) for any \(\delta\in T_{\Delta}\). Without loss of generality, we assume that \(w_{1},\dots,w_{l}\neq 0\) and \(w_{l+1}=\dots=w_{n}=0\). The Jacobian simplifies greatly for coordinates \(i\) such that \(w_{i}=0\); indeed, in this case, for any vector \(\delta\) in \(\mathbb{R}^{n}\) we have \(D\Phi(w)[\delta]_{i}=(\phi_{i}-\langle w,\phi\rangle)\delta_{i}\). On the other hand, for a coordinate \(i\) in the support, under the stationarity conditions, taking a displacement of the form \(\delta=(\tilde{\delta},0,\dots,0)\) with \(\tilde{\delta}\in\mathbb{R}^{l}\), we find that \(D\Phi(w)[\delta]_{i}=w_{i}([\tilde{J}\tilde{\delta}]_{i}-\sum_{j=1}^{l}(\phi_ {j}+[\tilde{J}^{T}w]_{j})\tilde{\delta}_{j})\), where \(\tilde{J}\) is the upper-left \(l\times l\) block of \(J\). In other words, \(D\Phi(w)\) has the following structure when \(w\) is a stationary point of the flow:
\[D\Phi(w)=\begin{bmatrix}P(\tilde{w})\tilde{J}&(*)\\ 0&\operatorname{diag}(\phi_{i}-\langle w,\phi\rangle)\end{bmatrix}\]
We readily obtain the stability condition, which is that this matrix has all positive eigenvalues:
**Proposition 4**.: _A stationary point \(w\) of (9) is stable if and only if for all \(i\notin\operatorname{Supp}(w)\) we have \(\phi(w)_{i}>\langle w,\phi(w)\rangle\) and the matrix \(P(\tilde{w})\tilde{J}\), as a linear operator \(T_{\Delta}\to T_{\Delta}\), has eigenvalues with positive real parts._
We can finally quantify the local speed of convergence towards these stationary points:
**Proposition 5**.: _Let \(w^{*}\) be a stable stationary point of (9), and let \(\delta\) be in the tangent cone at \(w^{*}\). Then, letting \(w(t)\) the trajectory of the ODE (9) starting from \(w^{*}+\delta\), we have \(w(t)=w^{*}+\exp(-D\Phi(w^{*})t)\delta+o(\delta)\)._
This result is classical in ODE theory [5]. We now turn to the case of interest for bilevel optimization, where \(\phi\) corresponds to the hyper-gradient field with frozen parameters \(\theta\), i.e., \(\phi(w)=\Psi(\theta_{0},w)\).
### The Bilevel Flow with Frozen Parameters
With fixed parameters \(\theta_{0}\), the field of interest \(\phi(w)=\Psi(\theta_{0},w)\) is given by the simple equation
\[\phi(w)=\Gamma g(w), \tag{10}\]
where \(\Gamma=[\nabla\ell(\theta_{0};x_{1}),\ldots,\nabla\ell(\theta_{0};x_{n})]\in \mathbb{R}^{n\times p}\) is a matrix containing all the inner gradients, and \(g(w)=\left(\sum_{i=1}^{n}w_{i}\nabla_{\theta}^{2}\ell(\theta_{0};x_{i})\right) ^{-1}\nabla F(\theta_{0})\in\mathbb{R}^{p}\) is the outer gradient \(\nabla F(\theta_{0})\) transformed by the metric induced by the Hessian of the inner function \(G\). Figure 3 illustrates the weights' dynamics with few samples on a low-dimensional problem and their "sparsifying" effect.
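A direct transcription of Eq. (10), of the kind used to produce dynamics like those of Figure 3; storing the per-sample Hessians as an \((n,p,p)\) array is an illustrative choice that is only viable for small problems.

```python
import numpy as np

def frozen_field(w, Gamma, hessians, grad_F):
    """phi(w) = Gamma g(w), with Gamma of shape (n, p) stacking the inner gradients as rows."""
    H = np.tensordot(w, hessians, axes=1)    # sum_i w_i * Hessian_i, shape (p, p)
    g = np.linalg.solve(H, grad_F)           # g(w) = H^{-1} grad F(theta_0)
    return Gamma @ g
```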
The field \(\phi\) only depends mildly on \(w\), through the impact of \(w\) on the Hessian of the inner problem. We start our analysis with the simple case where all Hessians \(\nabla_{\theta}^{2}\ell(\theta_{0},x_{i})\) are the same, in which case \(g\) is constant:
**Proposition 6**.: _If \(g(w)\) is constant in Eq. (10), then the field \(\phi(w)\) is constant, equal to \(\phi\in\mathbb{R}^{n}\). The mirror flow (9) converges to \(w^{*}\) such that \(w_{i}^{*}=0\) if \(i\notin\arg\max\phi\), and \(w_{i}^{*}\) is proportional to \(w^{0}_{i}\) otherwise._
Figure 3: Weights dynamics from the mirror-flow ODE (9) with the field \(\phi\) in Eq. (10), with \(n=5\) training samples and \(p=3\) parameters. Matrix \(\Gamma\) is randomly sampled with i.i.d. Gaussian entries, same for \(\nabla F(\theta_{0})\), and the individual Hessians are taken as \(\nabla_{\theta\theta}^{2}\ell(\theta;x_{i})=u_{i}u_{i}^{T}+0.1I_{p}\) where the \(u_{i}\) are drawn i.i.d. from random Gaussians. The weights follow a non-trivial dynamic that eventually converges to a solution with at most \(p\) non-zero weights, as predicted by Prop. 8. Most random initializations lead to only one non-zero weight; we display a rarer dynamic here.
In particular, if \(\phi\) only has a unique maximal coefficient, the mirror flow converges to a \(w^{*}\) with only _one_ non-zero weight - which is as sparse as it gets. We now turn to the general case with non-constant Hessians.
The vector field \(\phi\) has a "low rank" structure, as it is parameterized by \(g(w)\) of dimension \(p\), which generally is much lower than \(n\). It is, therefore, natural that it is hard to satisfy the stationarity conditions of Prop. 3 with non-sparse weights: we have \(n\) conditions to verify with a family of vectors \(g(w)\) that is of dimension \(p\). To formalize this intuition, we define the following set:
**Definition 1**.: _The set \(\mathcal{I}_{l}^{p}\) is the set of \(l\times p\) matrices \(Z\) such that \(\mathbb{1}_{l}\in\operatorname{range}(Z)\) or \(\mathbb{0}_{l}\in\operatorname{range}(Z)\)._
In other words, this is the set of matrices \(Z\) for which the equation \(Zx=\mathbb{1}_{l}\) or the equation \(Zx=\mathbb{0}_{l}\) has a non-zero solution \(x\). This set either contains most matrices if \(l\leq p\) or very few if \(p<l\):
**Proposition 7**.: _Assume that \(Z\in\mathbb{R}^{l\times p}\) has i.i.d. entries drawn from a continuous distribution. If \(l\leq p\), then \(\mathbb{P}(Z\in\mathcal{I}_{l}^{p})=1\), while if \(l>p\) then \(\mathbb{P}(Z\in\mathcal{I}_{l}^{p})=0\)._
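Membership in \(\mathcal{I}_{l}^{p}\) can be checked numerically; a small sketch using a rank test for the homogeneous equation and a least-squares residual for \(\mathbb{1}_{l}\), with an arbitrary tolerance:

```python
import numpy as np

def in_I_lp(Z, tol=1e-10):
    """Return True if Z x = 1_l or Z x = 0_l admits a non-zero solution x."""
    l, p = Z.shape
    x, _, rank, _ = np.linalg.lstsq(Z, np.ones(l), rcond=None)
    ones_reachable = np.linalg.norm(Z @ x - 1.0) < tol
    nontrivial_kernel = rank < p
    return ones_reachable or nontrivial_kernel
```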
This set is linked to the stationary points of the flow (9):
**Proposition 8**.: _If \(w\) is a stationary point of (9) with the hypergradient (10), letting \(l\) the size of the support of \(w\), we have \(\Gamma|_{\operatorname{Supp}(w)}\in\mathcal{I}_{l}^{p}\)._
This proposition immediately shows that, in general, we cannot find a stationary point of (9) with support larger than \(p\), the number of parameters: the mirror descent dynamics applied with the hypergradient, if it converges, will converge to a sparse solution with at most \(p\) non-zero coefficients. Note that we could not show that the corresponding flow always converges; our results only indicate that _if_ the flow converges, then it must be towards a sparse solution. Through numerical simulations, we have identified trajectories of the ODE that oscillate and do not seem to converge.
### Two Variables Dynamics
We conclude our analysis by going back to the two variables ODE (8) and use the previous analysis to show that the sparsity issue also impacts the warm-started bilevel problem in the case where \(\beta\gg\alpha\). To do so, we first assume that the mirror flow ODE with frozen parameters \(\theta\) in Eq (9) converges for all \(\theta\).
**Assumption 2**.: _The ODE \(\dot{w}=-P(w)\Psi(\theta,w)\) starting from \(w_{0}\), with fixed \(\theta\), is such that \(w(t)\) goes to a limit as \(t\) goes to infinity. We denote this limit by \(\Omega(\theta,w_{0})\)._
Note that, following Prop. 8, the limit \(\Omega\) is in general sparse, with a support of size at most \(p\). We now give a result similar to Thm. 1, but for the regime in which the dynamics in \(w\) are much faster than those in \(\theta\):
**Theorem 2**.: _Let \(\theta^{*}(t)\) denote the solution of the ODE \(\dot{\theta}=-\nabla G(\theta,\Omega(\theta,w_{0}))\), and \((\theta^{\alpha,\beta},w^{\alpha,\beta})\) the solution of (8), where \(w\) starts from \(w_{0}\). Under technical assumptions described in the Appendix, for all \(\beta\) and all time horizons \(T\), we have_
\[\lim_{\alpha\to 0}\sup_{t\in[0,T]}\|\theta^{\alpha,\beta}(t/\alpha)-\theta^{*}(t )\|=0.\]
As a consequence, the parameters \(\theta\) track the gradient flow of \(G\) obtained with the sparse weights \(\Omega(\theta,w_{0})\). This leads to sub-optimal parameters, which are estimated using only a few training samples, and explains the behavior observed in Figure 2. Note that our theory only covers the regimes where \(\alpha\gg\beta\) (Thm. 1) or \(\alpha\ll\beta\) (Thm. 2); the behavior of the warm-started bilevel method in practice therefore interpolates between these two regimes. However, in practice, we observe that warm-started bilevel methods are attracted to sparse solutions, hinting that Thm. 2 might describe reality better than Thm. 1.
## 4 Experiments
All experiments are run using the Jax framework [4] on CPUs.
### The role of mirror descent
We place ourselves in the same toy setup with a mixture of two Gaussians described in Sec. 2.4. We take \(n=500\) and \(m=100\) in dimension 2. This experiment aims to understand whether mirror descent is critical to observing the behavior described in the paper. We use another method to enforce the simplex constraint: namely, we introduce the following inner function: \(G(\theta,\lambda)=\sum_{i=1}^{n}\frac{\sigma(\lambda_{i})}{\sum_{j}\sigma( \lambda_{j})}\ell(\theta;x_{i})\), where \(\sigma\) is the sigmoid function. Thus, we take the vector \(\lambda\) as outer parameters, which defines the weights as \(w_{i}=\frac{\sigma(\lambda_{i})}{\sum_{j}\sigma(\lambda_{j})}\), and we use gradient descent on the \(\lambda\) instead of mirror descent on \(w\) to solve the bilevel problem. We use \(F(\theta)=\frac{1}{m}\sum_{j=1}^{m}\ell^{\prime}(\theta;x_{j}^{\prime})\) as the outer function. We use the same algorithm as Algorithm 2, but without the mirror step, since the \(\lambda\)'s are unconstrained. Figure 4 displays the results. In blue, we display the error between the current parameters and the target \(\hat{\theta}\). In orange, we display the error between the target and the parameters obtained by minimizing the inner function with the current weights. Finally, the red curve tracks the entropy of the weights, defined as entropy \(=-\sum_{i=1}^{n}w_{i}\log(w_{i})\). We use entropy as a proxy for sparsity: entropy is maximized when the weights are uniform and minimized when all weights but one are 0. Entropy decreases during training, as suggested by our theory. Between iterations 1000 and 5000, the weight distribution is not yet sparse and correctly identifies the good cluster; hence, minimizing the train loss with those weights leads to good results: the orange curve is low. However, because of the warm-started dynamics, the parameters take some time to catch up, eventually converging only when the weights are already sparse, which leads to a large error. Here, replacing mirror descent with reparameterization leads to the same behavior described in the paper.
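For reference, a sketch of the sigmoid reparameterization and of the entropy proxy used in Figure 4; the function names and the numerical safeguard `eps` are ours.

```python
import numpy as np

def weights_from_lambda(lam):
    """Map unconstrained parameters lambda to simplex weights w_i = sigmoid(lambda_i) / sum_j sigmoid(lambda_j)."""
    s = 1.0 / (1.0 + np.exp(-lam))
    return s / s.sum()

def weight_entropy(w, eps=1e-12):
    """Entropy of the weights, used as a proxy for sparsity (high = uniform, low = sparse)."""
    return -np.sum(w * np.log(w + eps))
```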
### Hyper data-cleaning
We conduct a hyper data-cleaning experiment in the spirit of [9]. It is a classification task on the MNIST dataset [15], where the training and testing sets consist
of pairs of images of size \(28\times 28\) and labels in \(\{0,9\}\). The gist of this experiment is that the training set is _corrupted_: with a probability of corruption \(p_{c}=0.9\), each training sample's label is replaced by a random, _different_ label; corrupted samples always have an incorrect label. The testing set is uncorrupted. We take \(n=8K\) training samples; hence we only have 800 clean training samples hidden in the training set.
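A sketch of the corruption procedure, assuming integer labels; the seed is arbitrary, and the replacement label is always drawn different from the original one, as described above.

```python
import numpy as np

def corrupt_labels(labels, p_c=0.9, n_classes=10, seed=0):
    """With probability p_c, replace each (integer) label by a uniformly random *different* label."""
    rng = np.random.default_rng(seed)
    corrupted = labels.copy()
    mask = rng.random(len(labels)) < p_c
    shift = rng.integers(1, n_classes, size=mask.sum())   # shift in {1, ..., n_classes - 1}, so the label changes
    corrupted[mask] = (corrupted[mask] + shift) % n_classes
    return corrupted, mask
```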
We use a linear model and a cross-entropy loss, with a small \(\ell_{2}\) regularization of \(10^{-2}\) on the training loss. We are, therefore, in the strongly convex setting described in this paper: for any weight \(w\), the inner problem has one and exactly one set of optimal parameters. However, in this case, Algorithm 1 is not implementable since we do not have a closed-form solution to the regularized multinomial logistic regression.
The linear system resolution in Algorithm 2 is also impractical; we, therefore, use the scalable algorithm SOBA [6], which has an additional variable that tracks the solution to the linear system updated using Hessian-vector products. We first run the algorithm with a fixed small inner learning rate \(\rho=10^{-3}\) and several outer learning rates \(\eta\). Figure 5 displays the results. The validation loss is computed on samples that are not part of the test set. When the outer learning rate \(\eta\) is too high (red curves), as predicted by our theory, the weights go to very sparse solutions, leading to sub-optimal performance. When the outer learning rate \(\eta\) is too small, the system converges to a good solution, but slowly (the bluest curve has not yet converged). Overall, the range of learning rates where the algorithm converges quickly to a good solution is very narrow.
Then, we take different choices of inner learning rate \(\rho\) and outer learning
Figure 4: Convergence of Algorithm 2 on a toy mixture problem as described in Sec. 2.4. We replace mirror descent by reparameterizing the weights as the softmax of new parameters. **Top:** Weights at different iterations. **Bottom:** training dynamics.
rate \(\eta\). We cover a large range of learning rate ratios \(r=\eta/\rho\) while ensuring the algorithm's convergence. Hence, for a fixed target ratio \(r\), we pick \(\eta\) and \(\rho\) so that neither goes over a prescribed value \(\eta_{\max}=1\) and \(\rho_{\max}=10^{-2}\), by taking \(\eta=\min(r\rho_{\max},\eta_{\max})\) and \(\rho=\eta/r\). The algorithm always converges in this setup. We perform \(12K\) iterations of the algorithm and then compute two metrics: validation accuracy and entropy of the final weights. As baselines for entropy, we compute the entropy of the _uniform distribution_, equal to \(\log(n)\), and of the _perfect distribution_, which would put a weight of \(0\) on all corrupted data points and uniform weight elsewhere, equal to \((1-p_{c})\log(n)\). Figure 6 displays the results. We see that when the outer learning rate \(\eta\) is too small, nothing happens because the weights have not yet converged. The corresponding accuracy is around \(10\%\): training the model on corrupted data completely fails. Then, when the ratio gets larger, reweighting starts working: for ratios between \(10^{-1}\) and \(10^{2}\), the weights learned are not sparse and correctly identify the clean data. This leads to an accuracy close to that of the model trained on the perfect distribution, i.e., on the \(800\) clean train samples. However, the time it takes to converge is roughly inversely proportional to the ratio: taking a ratio of \(10^{-1}\) leads to training about \(1000\) times slower than with \(10^{2}\). Finally, when the ratio is much higher, we arrive at the region predicted by our theory (Thm. 2): weights become extremely sparse, as highlighted by the entropy curve, and accuracy decreases. We even reach a point where the entropy goes to \(0\), i.e., only one weight is non-zero. The corresponding accuracy is not catastrophic thanks to warm-starting; it is much better than that of a model trained on that single sample: the model has had time to learn something in the period where the weights were non-zero. This observation is reminiscent of [25], which also mentions that the weights' trajectory impacts the final parameters.
Figure 5: Applying SOBA, a scalable warm-start bilevel method, for datacleaning on the MNIST dataset, with a fixed inner learning rate and different outer learning rates. **Top:** training curves. **Bottom:** Weights after training, sorted in descending order, zooming in on the first \(1500\) weights out of \(8000\).
## Discussion & Conclusion
In this work, we have illustrated a challenge for data reweighting with bilevel optimization: bilevel methods must use warm-starting to be practical, but warm-starting induces sparse data weights, leading to sub-optimal solutions. To remedy the situation, a small outer learning rate should therefore be used, which might, in turn, lead to slow convergence. Classical bilevel optimization theory [1] demonstrates the convergence of warm-started bilevel optimization to the solutions of the true bilevel problem. This may seem paradoxical at first and in contradiction with our results. Two explanations lift the paradox: i) [1] requires the ratio \(\frac{\alpha}{\beta}\) to be smaller than some intractable constant of the problem, hence not explaining the dynamics of the system in the setting where \(\alpha\) is not much smaller than \(\beta\), which is the gist of this paper; ii) convergence results in bilevel optimization are always obtained as non-convex results, only proving that the gradient of \(h\) goes to \(0\). In fact, for the data reweighting problem, several stationary points of \(h\) are sparse (see Prop. 8). Hence, our results on the sparsity of the resulting solution can be seen as implicit bias results, where the ODE converges to different solutions on the manifold of stationary points. Finally, our results are orthogonal to works considering the implicit bias of bilevel optimization [1, 25]. These results are based on over-parameterization, implying that the inner problem is not strongly convex and has multiple minimizers, while our results do not require such a structure.
Figure 6: Output of SOBA on the datacleaning task for a wide range of learning rate ratios. Entropy is a proxy for weight’s sparsity.
|
2305.13356
|
Critical phase and spin sharpening in SU(2)-symmetric monitored quantum
circuits
|
Monitored quantum circuits exhibit entanglement transitions at certain
measurement rates. Such a transition separates phases characterized by how much
information an observer can learn from the measurement outcomes. We study
SU(2)-symmetric monitored quantum circuits, using exact numerics and a mapping
onto an effective statistical-mechanics model. Due to the symmetry's
non-Abelian nature, measuring qubit pairs allows for nontrivial entanglement
scaling even in the measurement-only limit. We find a transition between a
volume-law entangled phase and a critical phase whose diffusive purification
dynamics emerge from the non-Abelian symmetry. Additionally, we numerically
identify a "spin-sharpening transition." On one side is a phase in which the
measurements can efficiently identify the system's total spin quantum number;
on the other side is a phase in which measurements cannot.
|
Shayan Majidy, Utkarsh Agrawal, Sarang Gopalakrishnan, Andrew C. Potter, Romain Vasseur, Nicole Yunger Halpern
|
2023-05-22T18:00:01Z
|
http://arxiv.org/abs/2305.13356v3
|
# Critical phase and spin sharpening in SU(2)-symmetric monitored quantum circuits
###### Abstract
Monitored quantum circuits exhibit entanglement transitions at certain measurement rates. Such a transition separates phases characterized by how much information an observer can learn from the measurement outcomes. We study SU(2)-symmetric monitored quantum circuits, using exact numerics and a mapping onto an effective statistical-mechanics model. Due to the symmetry's non-Abelian nature, measuring qubit pairs allows for nontrivial entanglement scaling even in the measurement-only limit. We find a transition between a volume-law entangled phase and a critical phase whose diffusive purification dynamics emerge from the non-Abelian symmetry. Additionally, we numerically identify a "spin-sharpening transition." Across the transition, the rate at which measurements reveal information about the total spin quantum number changes parametrically with system size.
## I Introduction
Traditionally, quantum many-body physicists have been limited to studying closed systems in equilibrium. Thanks to the maturation of quantum simulators [1], researchers can now prepare and control open quantum systems far from equilibrium with high precision. Quantum simulators have helped answer foundational questions about quantum entanglement and thermodynamics [2; 3]. Also, quantum simulators have the potential to solve real-world problems in, e.g., materials science and chemistry [3; 4]. These advances have spurred the development of tools for studying open quantum systems far from equilibrium [5]. _Monitored quantum circuits_ form one toolkit [6; 7]. A typical monitored quantum circuit acts on a chain of \(L\) qubits (spin-1/2 particles). The circuit contains two-qubit unitary gates, after each layer of which every qubit has a probability \(p\) of being measured. Monitored circuits exhibit measurement-induced phase transitions (MIPTs), due to the competition between chaotic dynamics and measurements [8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31].
Initially, an MIPT was cast as a transition between phases characterized by volume-law and area-law entanglement scaling [32; 33]. Equivalently, the transition is a purification transition between a _mixed phase_ and a _pure phase_[8]. When the measurement rate \(p\) is low, the chaotic dynamics scramble information about the initial state. Local measurements cannot extract that information in this mixed phase. An initially mixed state becomes pure, conditionally on measurement outcomes, in a time \(t_{\rm P}\sim\exp(L)\), with \(L\) the number of qubits. In contrast, at large \(p\), the measurements can distinguish different initial states efficiently. In this pure phase, an initially mixed state purifies quickly, often at an \(L\)-independent rate [32].
Few properties restrict the simplest monitored circuits' dynamics: unitarity and locality. Monitored circuits can be enriched, though. Enhancements include charge conservation [19; 20; 21; 22; 23; 24], measurements of particular operators (such as generators of the toric-code stabilizer) [25; 26], and the replacement of qubits with free fermions [27; 28; 29; 30; 34; 35]. U(1)-symmetric monitored circuits exhibit a _charge-sharpening transition_[21] between a _charge-fuzzy phase_ and a _charge-sharp phase_. These phases are distinguished by how quickly measurements collapse superpositions of different amounts of charge--how efficiently an observer can learn from local measurements the amount of global charge in the system.
Noncommuting symmetry charges have spawned a growing subfield of quantum thermodynamics [36; 37; 38; 39]. Noncommutation of charges has been shown to increase average entanglement [40], decrease entropy-production rates [41; 42], and necessitate modifications to the eigenstate thermalization hypothesis (ETH) [43; 44]. Researchers have used trapped ions to bridge this subfield from theory to experimental reality [45; 46; 47]. This subfield's discoveries partially motivate our work, as do
two computational results: first, a model of quantum computation can be defined from SU(2)-symmetric gates and spin fusions [48]. Second, SU(2)-symmetric measurements can achieve universal quantum computation, if performed on certain initial states [49; 50]. We therefore study monitored quantum circuits with three noncommuting charges.
In this work, we explore monitored-random-circuit dynamics of one-dimensional (1D) qubit chains with SU(2) symmetry. Equivalently, the circuits conserve three noncommuting charges: the total spin angular momentum's components. First, we explore the purification dynamics of a spin chain initially entangled with an ancilla spin. We identify a purification transition between a mixed phase, in which the ancilla purifies over an exponential-in-\(L\) time, and a critical phase1 with scale-invariant purification and entanglement growth. Above a critical measurement rate (at \(p>p_{\rm c}\)), we observe an extended-in-\(p\) critical phase in which the purification time scales diffusively: \(t_{\rm P}\sim L^{2}\). Second, we examine the entanglement dynamics undergone by an initially unentangled state. The purification transition doubles as an entanglement transition between volume-law entanglement scaling, at \(p<p_{\rm c}\), and subextensive (logarithmic or small-power-law) scaling, at \(p>p_{\rm c}\). The entanglement dynamics remain critical at \(p>p_{\rm c}\)--even in the measurement-only limit (\(p=1\))--due to the local measurements' noncommuting nature. In fact, a Lieb-Schultz-Mattis-type anomaly precludes a simple area-law entangled regime [35], as would arise when \(p=1\), absent symmetries.
Footnote 1: For the purposes of this discussion, we classify Goldstone phases with long-range order as critical.
Observing the purification/entanglement transition experimentally would require many instances of the same set of measurement outcomes. Such instances occur with vanishing likelihood in the thermodynamic limit. This challenge is the _postselection problem_. To evade this difficulty, we explore a "spin-sharpening/learnability" transition. Denote by \(s\) the total spin quantum number. We examine whether the dynamics collapse an initial superposition of states in different \(s\) sectors. Unlike in the U(1)-symmetric problem, the sectors generally cannot be shared by the (extensive) charges: our system's three charges, failing to commute, share only one sector. We identify a spin-sharpening transition at a measurement rate \(p=p_{\#}\), which is numerically indistinguishable from the entanglement-transition rate: \(p_{\#}\approx p_{\rm c}\). In the "spin-sharp" phase (\(p>p_{\#}\)), an observer can, in principle, determine the system's \(s\) in a time scale \(t\sim L^{2}\), with a probability tending to unity as \(L\to\infty\). In contrast, in the "spin-fuzzy" phase (\(p<p_{\#}\)), the time scale is \(t\sim L^{3}\). This "learning" perspective might be used to probe the transition experimentally.
Finally, we interpret our results within an effective replica statistical-mechanics model. We obtain the model by averaging over the gates and measurement outcomes, building on previous results about asymmetric and symmetric circuits [7]. This model casts dynamical properties of SU(2)-symmetric monitored quantum circuits in terms of some effective Hamiltonian's low-energy properties. We interpret our numerical results in terms of this effective-Hamiltonian model.
The rest of this paper is organized as follows. In Sec. II, we introduce SU(2)-symmetric monitored quantum circuits. We present the purification/entanglement transition in Sec. III and the spin-sharpening transition in Sec. IV. Section V contains our statistical-mechanics mapping. Section VI finishes with opportunities established by this work.
## II Model: SU(2)-symmetric monitored circuits
Consider a brickwork circuit acting on a 1D chain of qubits, as depicted in Fig. 1. The number \(L\) of spins is even for convenience. Denote by \(\sigma_{j}^{(x,y,z)}\) the Pauli matrices acting on qubit \(j\). The total spin components \(S^{(x,y,z)}=\frac{1}{2}\sum_{j=1}^{L}\sigma_{j}^{(x,y,z)}\) generate the algebra associated with a global SU(2) symmetry. We set \(\hbar\) to 1. The spin-squared operator \(\vec{S}^{2}\) has eigenvalues \(s(s+1)\) labelled by the total spin quantum number \(s\). We denote the eigenvalues of \(S^{(z)}\) by \(m\), the two-qubit singlet state by \(|{\rm s}_{0}\rangle\), and the two-qubit eigenvalue-\(m\) triplets by \(|{\rm t}_{m}\rangle\).
Each brick is, with a probability \(1-p\), a gate, or, with a probability \(p\), a projective measurement. The gates are chosen randomly among SU(2)-symmetric gates. The most general such gate, acting on spins \(j\) and \(j+1\), has the form
\[\cos(\phi)\mathbb{I}-i\sin(\phi)\ {\rm Sw}_{j,j+1}, \tag{1}\]
up to an irrelevant overall phase. \({\rm Sw}_{j,k}\) swaps the states of the spins \(j\) and \(k\). We draw each gate's parameter \(\phi\) independently from the uniform distribution on
Figure 1: **SU(2)-symmetric monitored quantum circuits.**\(L\) qubits (circles) are prepared in the state \(\rho_{\rm i}\). Each “brick” in the brickwork circuit is an SU(2)-symmetric unitary gate with a probability \(1-p\) and is a two-qubit projective measurement with a probability \(p\). The circuit acts for some time (some number of layers) before the final state, \(\rho_{\rm f}\), is read out. One brick illustrates which bonds have even (odd) indices.
\([0,2\pi)\). Each measurement projects a two-qubit state onto the singlet (\(s=0\)) or triplet (\(s=1\)) subspace (fusion channel). Crucially, two measurements fail to commute when acting on overlapping spin pairs. Moreover, the SU(2) symmetry precludes nontrivial single-qubit measurements. One time step consists of a brick layer on even-index bonds and a layer on odd-index bonds. In the even-index-bond layers, a brick connects the first and \(L^{\rm th}\) qubits, effecting periodic boundary conditions.
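For concreteness, the two-qubit building blocks can be written explicitly in the computational basis \(\{|00\rangle,|01\rangle,|10\rangle,|11\rangle\}\); the following NumPy sketch only fixes basis-ordering conventions, which are ours.

```python
import numpy as np

def su2_gate(phi):
    """SU(2)-symmetric two-qubit gate cos(phi) * I - i sin(phi) * SWAP of Eq. (1)."""
    swap = np.array([[1, 0, 0, 0],
                     [0, 0, 1, 0],
                     [0, 1, 0, 0],
                     [0, 0, 0, 1]], dtype=complex)
    return np.cos(phi) * np.eye(4, dtype=complex) - 1j * np.sin(phi) * swap

def singlet_triplet_projectors():
    """Projectors onto the two-qubit singlet (s=0) and triplet (s=1) subspaces."""
    singlet = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)   # (|01> - |10>)/sqrt(2)
    p_singlet = np.outer(singlet, singlet.conj())
    return p_singlet, np.eye(4) - p_singlet
```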
## III Purification/entanglement transition
We first explore the model's entanglement and purification dynamics. In Sec. III.1, we examine the purification dynamics of initially mixed states. The spin chain's state begins scrambled and entangled with an ancilla qubit \(A\). The ancilla's entanglement decreases over time. In Sec. III.2, we examine the entanglement dynamics exhibited by initially short-range-entangled pure states.
### Purification time
We determine the purification time as follows [9]. Denote by \(\left|s,m,\lambda_{0}\right\rangle\) and \(\left|s,m,\lambda_{1}\right\rangle\) two orthogonal states from the same \((s,m)\) sector. The last index distinguishes degenerate states. We entangle an ancilla qubit with the system's \(L\) qubits, forming the \((L+1)\)-qubit state
\[\left|\tilde{\psi_{\rm i}}\right\rangle=\tfrac{1}{\sqrt{2}}\left(\left|0 \right\rangle_{A}\left|s,m,\lambda_{0}\right\rangle+\left|1\right\rangle_{A} \left|s,m,\lambda_{1}\right\rangle\right). \tag{2}\]
The subscript \(A\) distinguishes the ancilla from the system qubits. \(A\) does not undergo gates or measurements.
We choose two system states that have \(s=1\) and \(m=0\). In \(\left|s=1,m=0,\lambda_{0}\right\rangle\), qubits \(1\) and \(2\) are in the triplet \(\left|\mathfrak{t}_{0}\right\rangle\); and the remaining pairs of qubits, in singlets \(\left|s_{0}\right\rangle\). In \(\left|s=1,m=0,\lambda_{1}\right\rangle\), qubits \(3\) and \(4\) are in \(\left|\mathfrak{t}_{0}\right\rangle\), instead. These two system states are orthogonal, in the same \(\vec{S}^{2}\) sector, and in the same \(S^{(z)}\) sector. However, one can distinguish the states by measuring qubits \(1\) and \(2\). Such local distinguishability is undesirable. Therefore, after preparing \(\left|\tilde{\psi_{\rm i}}\right\rangle\), we scramble the system: the system undergoes a unitary-only (\(p=0\)) SU(2)-symmetric circuit for \(L^{2}\) time steps. (The \(t_{\rm P}\) identified later in this subsection motivates the \(L^{2}\).) The scrambling encodes quantum information about the ancilla roughly uniformly in many-body entanglement. This process prepares \(\left|\psi_{\rm i}\right\rangle\).
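A sketch of the pre-scrambling product states: placing the \(m=0\) triplet on the first or on the second qubit pair yields \(|s{=}1,m{=}0,\lambda_{0}\rangle\) or \(|s{=}1,m{=}0,\lambda_{1}\rangle\); the qubit-ordering and tensor conventions are ours.

```python
import numpy as np

def pair_product_state(L, triplet_pair=0):
    """Tensor product of |t_0> on one qubit pair and singlets |s_0> on the remaining pairs."""
    singlet = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)    # (|01> - |10>)/sqrt(2)
    triplet0 = np.array([0, 1, 1, 0], dtype=complex) / np.sqrt(2)    # (|01> + |10>)/sqrt(2)
    psi = np.array([1.0 + 0j])
    for k in range(L // 2):
        psi = np.kron(psi, triplet0 if k == triplet_pair else singlet)
    return psi
```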
\(\left|\psi_{\rm i}\right\rangle\) undergoes \(t=L^{2}\) time steps under monitored-random-circuit dynamics with \(p\geq 0\). Denote by \(\rho_{A}\coloneqq\mathrm{Tr}_{\tilde{A}}(\left|\psi_{\rm f}\right\rangle\! \!\left\langle\psi_{\rm f}\right|)\) the final state of \(A\). We calculate the final entanglement entropy between \(A\) and the system:
\[S_{A}\coloneqq S(\rho_{A})\coloneqq-\mathrm{Tr}[\rho_{A}\log(\rho_{A})]. \tag{3}\]
(All logarithms are base-\(e\).) We anticipate that the measurements will purify the system at an exponential-in-\(t\) rate: \(S_{A}\sim e^{-t/t_{\rm P}(L)}\). Therefore, we plot \(\log(S_{A})\) in Fig. 2. Along the \(x\)-axes runs \(t/L^{2}\). At each \(p>p_{\rm c}\approx 0.35\), the different-\(L\) curves collapse. Hence this phase purifies according to \(S_{A}\sim\mathrm{e}^{-t/L^{2}}\) and so has a dynamical critical exponent \(z\)=2. This \(z\)-value characterizes diffusive scaling [51] and suggestively evokes ferromagnetic spin waves' dynamics [52, Ch. 33].
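A sketch of the entropy computation of Eq. (3), assuming the full pure state is stored as a dense vector with the ancilla as the first tensor factor (the exact-numerics regime, \(L\lesssim 20\)):

```python
import numpy as np

def ancilla_entropy(psi, L):
    """Von Neumann entropy S_A of the ancilla for an (L+1)-qubit pure state psi."""
    psi = np.asarray(psi).reshape(2, 2 ** L)      # ancilla index x system index
    rho_A = psi @ psi.conj().T                    # partial trace over the system qubits
    evals = np.linalg.eigvalsh(rho_A)
    evals = evals[evals > 1e-12]
    return float(-np.sum(evals * np.log(evals)))
```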
At lower measurement rates, \(p\ll p_{\rm c}\), we observe a mixed phase. Figure 3 shows the purification time plotted against \(L\), at several \(p\) values. At \(p=0.05\), \(t_{\rm P}\sim e^{L}\). At \(p\) values between \(0.05\) and \(0.35\), the scaling is unclear from the numerics; we cannot conclude to which phase this intermediate regime belongs. Still, the exponential purification time resembles that of asymmetric circuits [8]. We analyze this mixed phase analytically in Sec. V, using a duality between monitored circuits and a statistical-mechanics model.
### Entanglement dynamics
To characterize the critical phase further, we explore an initially pure state's late-time bipartite entanglement entropy, \(S_{\rm f}\). The purification transition manifests as a qualitative change in the \(L\)-dependence of \(S_{\rm f}\) at \(p=p_{\rm c}\).
We initialize the system in a short-range-entangled state \(\left|\psi_{\rm i}\right\rangle\), a tensor product of a triplet \(\left|\mathfrak{t}_{0}\right\rangle\) and \(\frac{L-2}{2}\) singlets \(\left|s_{0}\right\rangle\). This choice's details are unimportant. However, we choose this state so that \(\left|\psi_{\rm i}\right\rangle\) is in the same \(s\) sector at all system sizes \(L\). The state undergoes monitored-circuit dynamics for \(L^{2}\) time steps. This time suffices for the entanglement entropy to reach a steady value, regardless of the measurement rate, \(p\). Figure 7 in Appendix A illustrates this point at the extreme values \(p=0,1\). We measure the bipartite entanglement entropy, \(S_{\rm f}\), between two equal-size halves of the chain.
Figure 4 shows the dependence of \(S_{\rm f}\) on \(L\) at different measurement rates \(p\). At \(p=0\), we observe the volume-law phase common to monitored circuits: \(S_{\rm f}\sim L\). Figure 8 in Appendix A supports this claim more precisely than does Fig. 4. At larger \(p\) values,2 the entanglement scaling is less consistent with a linear fit. Better fits are logarithmic and small-power-law (\(S_{\rm f}\sim\sqrt{L}\)). One cannot definitively distinguish these behaviors at the accessible system sizes, as detailed in Appendix A. The statistical-mechanics model (Sec. V) provides stronger evidence for the absence of a volume-law phase at large \(p\).
Footnote 2: According to [8], the entanglement phase transition is equivalent to the purification transition. Our system’s purification transition happens at \(p_{\rm c}\approx 0.35\), according to the previous subsection. This section’s numerics are consistent with an entanglement transition at \(p\approx 0.35\), but the transition’s exact location is unclear.
Intuitively, the slow entanglement growth at large \(p\), even in the measurement-only (\(p=1\)) limit, arises from the noncommutativity of the charges measured. Similar logarithmic entanglement growth has been observed under measurement-only dynamics previously: Majorana
fermions were subjected to noncommuting measurements in [25].
Appendix B presents numerical results concerning the correlations between local observables at different sites. The limitation on system size makes it difficult to determine the functional form of the correlations' decay with distance. However, we find a qualitative change in how the correlations decay at \(p>p_{c}\) and at \(p<p_{c}\).
## IV Spin-Sharpening transition
Having explored the purification dynamics within an \(s\) sector, we explore the purification of a superposition spread across \(s\) sectors. We again entangle the chain with an ancilla qubit. This time, the ancilla is in \(|0\rangle\), and the chain has a spin quantum number \(s_{0}\), in superposition with the ancilla's being in \(|1\rangle\) and the chain's having \(s_{1}\). The dynamics may purify the ancilla in a given measurement trajectory. In this case, the chain's state has collapsed onto the \(s_{0}\) (or \(s_{1}\)) sector. Consequently, the measurement outcomes' probability of being compatible with the system's having \(s_{1}\) (or \(s_{0}\)) vanishes. An observer can therefore, knowing the circuit, learn the spin quantum number by monitoring the measurement outcomes.
Comparing spin sharpening with U(1)-charge sharpening is illuminating. One can estimate as follows the total charge of qubits undergoing a U(1)-symmetric hybrid circuit: Running the circuit, one obtains \(ptL\) measurement outcomes (0s and 1s), on average. Consider averaging the outcomes, multiplying by \(L\), and rounding to the nearest integer. If \(t\sim L\), this procedure estimates the charge accurately [53]. If the dynamics are SU(2)-symmetric on average (as in Sec. II), sequential measurements fail to commute. Hence later measurements render partially irrelevant the information obtained from earlier measurements. An observer cannot obviously learn \(s\) ever. Nevertheless, we numerically identify a measurement-induced
Figure 4: **The entanglement dynamics evidence no area-law phase.** The bipartite entanglement entropy reaches the long-time value \(S_{\rm f}\). At \(p=0\), \(S_{\rm f}\) is linear in \(L\). As \(p\) increases, \(S_{\rm f}\) gradually becomes logarithmic or power-law with a small exponent. When \(L=10\) to \(16\), we use \(30\,000\) samples; when \(L=18\) and \(20\), we use \(10\,000\).
Figure 3: **The purification time \(t_{\rm P}\) grows exponentially with \(L\) at small \(p\).** This mixed-phase behavior characterizes \(p=0.05\).
Figure 2: **The purification time reveals a \(z{=}2\) phase.** The entropy \(S_{A}\) quantifies the ancilla qubit’s entanglement with the system. We plot \(\log(S_{A})\) for clarity, as \(S_{A}\) decays exponentially. \(t/L^{2}\) runs along the \(x\)-axis to demonstrate the existence of a phase in which the system purifies over a time scale \(t_{\rm P}\sim L^{2}\). The curves’ collapsing at \(p>0.35\) evidences this phase. We used \(30\,000\) samples when \(L=8\) to \(16\); \(10\,000\) samples when \(L=18\); and \(1\,500\) samples when \(L=20\). The \(y\)-axis’s lower limit is \(\log\bigl{(}10^{-3}\bigr{)}\approx-6.91\).
transition at a measurement rate \(p_{\#}\). We call this transition a _spin-sharpening_ transition. It separates regimes in which an observer can (\(p>p_{\#}\)) and cannot (\(p<p_{\#}\)) identify \(s\) from the measurement outcomes, with a probability tending to unity as \(L\to\infty\).
We diagnose the spin-sharpening transition using a similar procedure to the one in Sec. III A. The difference is that, unlike in Eq. (2), we construct \(|\tilde{\psi_{i}}\rangle\) from distinct \(\vec{S}^{2}\) eigenspaces:
\[|\tilde{\psi_{i}}\rangle=\tfrac{1}{\sqrt{2}}\left(|0\rangle_{A}\,|s_{0},m, \lambda_{0}\rangle+|1\rangle_{A}\,|s_{1},m,\lambda_{1}\rangle\right). \tag{4}\]
We choose \(m=0\), \(s_{0}=1\), and \(s_{1}=0\) for convenience: one can construct such a \(|\tilde{\psi_{i}}\rangle\) by tensoring together singlets and an \(m=0\) triplet, regardless of \(L\). After preparing \(|\tilde{\psi_{i}}\rangle\), we scramble the system under a \(p\)=0 circuit for \(L^{2}\) time steps, as in Sec. III A. This procedure prepares a state \(|\psi_{i}\rangle\). Then, we evolve the system under monitored-circuit dynamics with a fixed \(p\). Anticipating \(z\)=2 dynamical scaling in the spin-sharp phase, we evolve the system for \(L^{2}\) time steps. If the ancilla purifies after this short time, we say that the spin has sharpened. We denote the final state by \(|\psi_{\mathrm{f}}\rangle\).
Figure 5a shows the ancilla's final entanglement entropy, \(S_{A}\), plotted against \(p\). Different curves correspond to different system sizes \(L\). The curves cross at \(p_{\#}\approx 0.28\), suggesting that a spin-sharpening transition occurs at \(p_{\#}\). Furthermore, Fig. 5b displays a finite-size collapse. We used the scaling form \(\log(S_{A})=(p-p_{\#})L^{1/\nu}\), the correlation-length exponent \(\nu=3.0\), and \(p_{\#}=0.28\).
Figure 6 reveals the phases' spin-sharpening time scales: \(\sim L^{2}\) in the spin-sharp phase and \(\sim L^{3}\) in the spin-fuzzy phase. A simple argument supports the latter [54]: \(|\psi_{i}\rangle\) corresponds to an eigenvalue \(s(s+1)\in\{1,2\}\) of \(\vec{S}^{2}=\sum_{j,k}\vec{\sigma}_{j}\cdot\vec{\sigma}_{k}\). The system contains \(\sim L^{2}\) pairs \((j,k)\). One might expect all pairs to contribute roughly equally to \(\langle\vec{S}^{2}\rangle\), by ergodicity, in the spin-fuzzy phase. Hence \(\langle\vec{\sigma}_{j}\cdot\vec{\sigma}_{k}\rangle\sim s(s+1)/L^{2}\). To identify \(s(s+1)\), we therefore must measure \(L^{2}\) correlators \(\langle\vec{\sigma}_{j}\cdot\vec{\sigma}_{k}\rangle\). Measuring one correlator with an imprecision \(\sim 1/L\) requires \(\sim L^{2}\) measurements. We hence need \(\sim L^{4}\) measurements total. Since (const.)\(L\) measurements occur per time step, the spin should sharpen in a time \(\sim L^{3}\).
Our tentative identification of a spin-sharpening transition at \(p_{\#}\) is subject to at least two caveats. First, the crossing point drifts to larger \(p\) as \(L\) increases (perhaps coalescing with the purification transition at \(p_{c}\) as \(L\to\infty\)). Second, the scaling ansatz we chose for the data collapse in Fig. 5b may not be valid. The ansatz implies that the time scale for a size-\(L\) system to sharpen increases more quickly than \(L^{2}\) for \(p<p_{\#}\) and more slowly than \(L^{2}\) for \(p>p_{\#}\). However, our data for \(p>p_{\#}\) (see Fig. 9 in Appendix A) is compatible with a sharpening time scale \(\sim L^{2}\) deep in the critical phase. If the sharpening time indeed scales as \(L^{2}\) throughout the critical phase, the crossing in Fig. 5a must be a finite-size artifact. Precisely identifying \(p_{\#}\) and the sharpening transition's nature is outside the scope of this work, due to the paucity of \(L\) values accessible in exact computations. We defer a detailed analysis of the spin-sharpening time scales to future work.
Finally, the spin-sharpening transition suggests a postselection-free means of observing a measurement-induced transition experimentally [55, 53, 56, 57]: identify whether an observer can learn \(s\) from measurement outcomes in a given time interval. This learning would require "decoders" for estimating \(s\) from the outcomes. The decoders' accuracy, as a function of the measurement rate, would need to be tested. In principle, one can learn \(s\) most accurately via brute-force decoding [53]. One would, upon running the circuit and obtaining the measurement outcomes, simulate the circuit, postselected on the observed outcomes and operating on a state in the \(s_{0}\) sector. One would repeat the simulation with an initial
Figure 5: A spin-sharpening transition exists. The entropy \(S_{A}\) quantifies the ancilla qubit’s entanglement with the system. Different curves correspond to different system sizes \(L\). (a) The curves’ crossing at \(p\approx 0.28\) indicates a phase transition. (b) We identify a finite-size collapse using \(\nu=3.0\) and \(p_{\#}=0.28\).
Figure 6: The spin-sharpening time scale is \(\sim L^{3}\) in the fuzzy phase and \(\sim L^{2}\) in the sharp phase. The entropy \(S_{A}\) quantifies the ancilla qubit’s entanglement with the system. Different curves correspond to different system sizes \(L\). (a) \(t/L^{3}\) runs along the \(x\)-axis to demonstrate that the spin can sharpen over a time scale \(\sim L^{3}\). This time scale characterizes the charge-fuzzy phase (\(p<p_{\#}\)). (b) \(t/L^{2}\) runs along the \(x\)-axis to demonstrate that the spin can sharpen over a time scale \(\sim L^{2}\). This time scale characterizes the charge-sharp phase (\(p>p_{\#}\)). We used \(30\,000\) samples when \(L=8\) to \(16\); and \(10\,000\) samples when \(L=18\).
state in the \(s_{1}\) sector. From each simulation, one would infer the probability that \(s_{0}\) (or \(s_{1}\)) had engendered the observed outcomes.
However, this approach generically costs exponential-in-\(L\) time (even if a quantum computer performs the simulation, due to the postselection). Special classes of monitored dynamics [55; 53; 56; 57] may allow for approximate decoders that can be implemented efficiently on classical or quantum computers without postselection. In this case, the transition's nature will depend on both the circuit and the decoder and may differ, in location or universality class, from the spin-sharpening transition observed under optimal decoding. We leave for future work the problem of designing efficient decoders for spin-sharpening transitions.
## V Effective-Hamiltonian description of the monitored dynamics
To complement the numerics, we derive an effective statistical-mechanics model: a description of the monitored evolution as imaginary-time evolution under an effective Hamiltonian acting on copies (replicas) of the system. In the rest of this section, we describe the model. We elucidate its ground and low-lying excited states in Sec. V A. Leveraging these results, we elucidate the monitored circuit's purification transition in Sec. V B. Section V C explains the circuit's lack of an area law.
In the statistical-mechanics model, measurement outcomes act as quenched disorder for a quantum trajectory. A replica trick is needed to average nonlinear quantities, such as entanglement, over trajectories [10; 11]. One must average \(Q\) replicas of the density matrix, \(\rho^{\otimes Q}\). In the replica limit, \(Q\to 1\). More precisely, we want to calculate
\[\overline{\rho^{(Q)}(t)}=\sum_{\vec{m}}\int dU\left(K_{\vec{m},U}\rho_{0}K^{ \dagger}_{\vec{m},U}\right)^{\otimes Q}\,, \tag{5}\]
dependent on the evolution operator \(K_{\vec{m},U}\equiv\prod_{\ell=1}^{2t}P_{\ell,\vec{m}}U_{\ell}\). \(U_{\ell}\) denotes the unitary implemented by circuit layer \(\ell\). \(P_{\ell,\vec{m}}\) denotes the projector onto the subspace associated with the list \(\vec{m}\) of outcomes yielded by the measurements in layer \(\ell\). \(\int dU\) denotes an average over the SU(2)-symmetric gates (with the appropriate probability measure).
An alteration to the circuit model will facilitate the analytics: we deform the discrete-time, strong-measurement circuit dynamics into a continuous-time version. We replace the gates with Hamiltonian evolutions over infinitesimal time steps, and infinitesimally weak measurements replace the projective measurements. We expect the continuous-time deformation to preserve the purification/entanglement and charge-sharpening transitions' universal scaling properties. The reasons are analogous examples [53] and the system's lack of time-translation symmetry.
Here, we summarize the resulting Hamiltonian description. Appendix C contains a detailed derivation. The effective Hamiltonian equals a sum of contributions from the unitary dynamics and the weak measurements: \(H^{\text{eff}}=H^{\text{u}}+H^{\text{m}}\). The terms are
\[H^{\text{u}} =-J\sum_{i}\left[\sum_{a=1}^{Q}\left(\vec{S}^{a}_{i}\cdot\vec{S}^ {a}_{i+1}-\vec{S}^{a*}_{i}\cdot\vec{S}^{a*}_{i+1}\right)\right]^{2}\text{ and } \tag{6}\] \[H^{\text{m}} =\gamma\sum_{i}\sum_{a,b=1}^{2Q}\left(\vec{S}^{a}_{i}\cdot\vec{S} ^{a}_{i+1}\right)\Pi_{a,b}\left(\vec{S}^{b}_{i}\cdot\vec{S}^{b}_{i+1}\right). \tag{7}\]
The coupling constant \(J\) encapsulates the unitary dynamics' scrambling power; and \(\gamma\), the weak measurements' strength. \(P^{a}_{i,j}\) is the projector onto the singlet sector of spins \(i\) and \(j\) in replica copy \(a\). Equations (6)-(7) have two kinds of summations over the replica index, \(a\). When \(a\) runs from \(1\) to \(Q\), the summation is over forward copies of the replicas. \(a^{*}\) represents the corresponding backward copy. When \(a\) runs from \(1\) to \(2Q\), the summation is over both the backward and forward copies. The projector \(\Pi_{a,b}=\delta_{ab}-\frac{1}{2Q}\) is onto inter-replica fluctuation modes. The associated term in Eq. (7) is minimized when \(\vec{S}_{i}\cdot\vec{S}_{i+1}\) yields the same value, operating on any replica, as operating on any other. If the measurements are projective, all the replicas must yield the same measurement outcome. If the measurements are weak, as above, this restriction is softened; a finite energy cost accompanies inter-replica fluctuations in the measured operator \(\vec{S}\cdot\vec{S}\).
The effective Hamiltonian has a left/right \(S_{Q}\times S_{Q}\) symmetry: \(H^{\text{eff}}\) remains invariant under permutations of the \(Q\) forward copies and permutations of the \(Q\) backward copies. The monitored dynamics map to imaginary-time evolution under \(H^{\text{eff}}\) (in the replica limit \(Q\to 1\)). Thus, we must understand this Hamiltonian's low-energy properties to understand the monitored dynamics' late-time properties.
### Ground state and collective excitations at low measurement rates
We begin with a measurement-free model: \(\gamma=0\) in Eq. (7). A ground state is a configuration that, when acted on by \(\sum_{a=1}^{Q}\left(\vec{S}^{a}_{i}\cdot\vec{S}^{a}_{i+1}-\vec{S}^{a*}_{i} \cdot\vec{S}^{a*}_{i+1}\right)\) for any nearest-neighbor pair \((i,i+1)\), vanishes. Such a configuration is achievable if and only if, for some pairing of \((a,b^{*})\), \(\vec{S}^{a}_{i}\cdot\vec{S}^{a}_{i+1}=\vec{S}^{b*}_{i}\cdot\vec{S}^{b*}_{i+1}\) for all \(i\). The ground states thus can be labeled by all such pairings \((a,b^{*})\). Furthermore, the ground states are represented by the elements \(\sigma\) of the permutation group \(S_{Q}\) such that \(\vec{S}^{a}_{i}\cdot\vec{S}^{a}_{i+1}=\vec{S}^{\sigma(a)*}_{i}\cdot\vec{S}^{ \sigma(a)*}_{i+1}\). To satisfy this condition for all \(i\), the interaction must be ferromagnetic, precluding frustration. We show in Appendix C that the ground space of \(H^{\text{u}}\) is that of an SU(4) ferromagnet. The ground
states can thus be labeled as \(|\otimes_{i=1}^{L}\sigma\rangle\!\!\rangle\). The permutation \(\sigma\in S_{Q}\), and the tensor product emphasizes the pairings' uniformity across space.3 Importantly, the ground space breaks the discrete symmetry \(S_{Q}\times S_{Q}\) to \(S_{Q}\).
Footnote 3: As noted, the ground space has a degeneracy labeled by the ground states of the SU(4) ferromagnet. The label depends on the initial state. We drop the label from our notation for simplicity, as the label does not impact the following discussion.
We now briefly sketch the low-lying energy eigenstates.4 If \(\gamma=0\), the excitations over a symmetry-broken state \(|\otimes_{i=1}^{L}\sigma\rangle\!\!\rangle\) are described by \(Q\) decoupled SU(4) ferromagnetic chains, each formed from two SU(2) chains. Let us focus on one SU(4) chain. An SU(4) ferromagnet's Goldstone modes live on a 6-dimensional manifold. They result in three gapless modes with energies vanishing as \(L^{-z}\), wherein \(z=2\). These gapless modes are of two types: Two modes arise from fluctuations within single SU(2) spin chains. The third mode arises from collective fluctuations of the two SU(2) chains. In summary, \(Q\) replicas lead to \(2Q\) diffusive (\(z\)=2) modes associated with fluctuations within single SU(2) chains, plus \(Q\) diffusive modes associated with inter-replica fluctuations. As noted above, measurements affect only the inter-replica fluctuations and thus couple the \(Q\) inter-replica modes [Eq. (7)].
Footnote 4: The gaps between these eigenstates’ energies and the ground-state energy vanish in the thermodynamic limit but remain nonzero at finite \(L\).
Consider increasing the measurement parameter \(\gamma\) from 0. As in U(1)-symmetric circuits [58], measurements gap out some inter-replica degrees of freedom. Furthermore, the inter-replica gapless modes reduce to one diffusive mode (corresponding to the fluctuations in the replicas' average) and \(Q-1\) relativistic ballistic (\(z\)=1) modes, which describe inter-replica fluctuations. These ballistic modes cause the Renyi entropies with indices \(n>1\) to grow ballistically in the presence of measurements [58; 21]. The diffusive inter-replica modes are well-defined only for symmetry-broken states whose forward and backward copies are paired explicitly. Thus, these modes are expected to survive only in the replica-symmetry-broken phase (volume-law phase). However, the \(2Q\) intrachain-fluctuation SU(2) modes do not depend on such pairings. Hence these modes are expected to exist at all measurement strengths and so in the critical phase. As we discuss below, these surviving gapless \(z\)=2 modes likely underlie two circuit behaviors that we observed: the \(L^{2}\) purification time scale and the absence of area-law entanglement.
### Purification
Using the formalism above, we can understand the purification of an initially maximally mixed state, \(|\otimes_{i=1}^{L}e\rangle\!\!\rangle\). The \(e\in S_{Q}\) denotes a permutation that pairs replica \(a\) with \(a^{*}\). At a late time \(t\), the density matrix's trajectory-averaged purity \(\Pi(t)\) is given by
\[\Pi(t)=\lim_{Q\to 1}\frac{\langle\!\langle\otimes_{i=1}^{L}g|e^{-\beta H^{\mathrm{eff}}}|\otimes_{i=1}^{L}e\rangle\!\rangle}{\langle\!\langle\otimes_{i=1}^{L}e|e^{-\beta H^{\mathrm{eff}}}|\otimes_{i=1}^{L}e\rangle\!\rangle}\,. \tag{8}\]
\(g\) denotes the transposition that swaps replica 1 with 2* and 2 with 1* while acting as the identity on the other replicas [10; 11]. The purification time is when \(\Pi(t)\) becomes \(\mathcal{O}(1)\). In the absence of measurements, \(\gamma=0\), an initially maximally mixed state will fail to purify and will have \(\Pi(t)=\frac{1}{2^{L}}\) for all times. Indeed, at \(\gamma=0\), \(|\otimes_{i=1}^{L}e\rangle\!\!\rangle\) is a ground state of \(H^{\mathrm{eff}}\) and has vanishing energy. Thus, \(\Pi(t)|_{\gamma=0}=\lim_{Q\to 1}\langle\!\langle\otimes_{i=1}^{L}g|\otimes_{i=1}^{L}e\rangle\!\rangle=1/2^{L}\). Excitations of the discrete-symmetry-broken phase are gapped domain-wall configurations. Therefore, we expect this phase to be stable under the strengthening of the weak measurements to low rates \(\gamma\). For the system to transition between different replica-symmetry-broken ground states, a domain wall must tunnel across the entire system. Such transitions are thus expected to occur over an exponential-in-system-size time, which we identify as the purification time: \(t_{\mathrm{P}}\sim e^{L}\). The SU(2) symmetry has little bearing on the replica-symmetry breaking; essentially identical arguments apply in the symmetry's absence. This behavior contrasts with that of lattice magnets that have continuous symmetries. There, a symmetry-broken state can be deformed smoothly into another symmetry-broken state over a \(\mathrm{poly}(L)\) time scale. Interestingly, monitored free fermions have an emergent, continuous inter-replica symmetry (as opposed to our discrete \(S_{Q}\) symmetry), resulting in linear purification times [59; 60; 61; 62].
The replica symmetry is restored at sufficiently high measurement strengths; the argument for \(t_{\mathrm{P}}\sim e^{L}\) breaks down. Instead, the purification time depends on the effective Hamiltonian's energy gap. We conjecture that this gap scales as \(1/L^{2}\), due to the gapless modes associated with the previous subsection's \(2Q\)\(z\)=2 modes. This gap scaling results in a purification time \(t_{\mathrm{P}}\sim L^{2}\).
### Absence of area law under strong measurements
We can establish the absence of an area-law phase at any measurement rate by adapting a Lieb-Schultz-Mattis-type anomaly argument to the spin model in the replica trick with \(2Q\) copies, as first argued in [35]. (See also [63; 64], which generalize this result to statistical symmetries.) Each of the \(2Q\) copies has SU(2) symmetry, and the replica symmetry permutes the replicas. Additionally, under averaging over the measurements and circuit elements, the replica model has a \(\mathbb{Z}\) lattice-translation symmetry. Overall, the statistical-mechanics model's symmetry group is \(G=\mathbb{Z}\times\left[(\mathrm{SU}(2)^{\times Q}\rtimes S_{Q})\times(\mathrm{SU}(2)^{\times Q}\rtimes S_{Q})\rtimes\mathbb{Z}_{2}\right]\)[59]. Each site contains one (projective) spin-1/2 representation of each replica factor of SU(2). Therefore, there is a
mixed anomaly between translation symmetry and the SU(2) spin-rotation symmetry. This anomaly rules out the possibility of a featureless (short-range-entangled, symmetry-preserving) ground state. Moreover, naively applying the Mermin-Wagner theorem rules out spontaneous breaking of the SU(2) symmetry (although subtle examples may violate this principle in the replica limit [65]). Furthermore, we observe no tendency towards any spontaneous breaking of the lattice-translation symmetry. These arguments suggest that no area-law phase can arise, even in the measurement-only limit.
We can obtain further insight into the strong-measurement regime through our mapping to an effective-Hamiltonian model. Each replica has an SU(2) symmetry, which leads to gapless modes, as noted above, with ferromagnetic interactions and thus \(z\)=2 dynamics. These conclusions are consistent with the critical phase observed in our numerics.
Using the formalism above, one can calculate Renyi entropies of the reduced density matrix of an interval \(A\). Let \(\rho\) denote any single-copy pure state; and \(|\rho\rangle\!\rangle\), the corresponding \(Q\)-replica state defined on \(2Q\) copies of the Hilbert space. We focus on the Renyi index \(n=2\):
\[e^{-S^{2}(\rho_{A})}=\lim_{Q\to 1}\frac{\langle\!\langle\otimes_{i=1}^{L}g_{i}^{A}|e^{-\beta H^{\text{eff}}}|\rho\rangle\!\rangle}{\langle\!\langle\otimes_{i=1}^{L}e|e^{-\beta H^{\text{eff}}}|\rho\rangle\!\rangle}\,. \tag{9}\]
\(g_{i}^{A}=e\) if \(i\) does not belong to the interval \(A\), \(g_{i}^{A}=(12)(34)...(2k-1\ 2k)\) if \(i\in A\), and \(Q=2k+1\). We define a "twist" permutation \(\tau\) such that \(\tau\left(\otimes_{i=1}^{L}g_{i}^{A}\right)=\otimes_{i=1}^{L}e\). Using \(\tau\), we can rewrite (9). Since the initial state is pure, \(\tau|\rho\rangle\!\rangle=|\rho\rangle\!\rangle\), and
\[e^{-S^{2}(\rho_{A})}=\lim_{Q\to 1}\frac{\langle\!\langle\otimes_{i=1}^{L}e|\tau^{-1}e^{-\beta H^{\text{eff}}}\tau|\rho\rangle\!\rangle}{\langle\!\langle\otimes_{i=1}^{L}e|e^{-\beta H^{\text{eff}}}|\rho\rangle\!\rangle}\,. \tag{10}\]
\(\tau\), operating on \(H^{\text{eff}}\), introduces a twist operator at the interval's boundary, \(\partial A\): \(\tau^{-1}e^{-\beta H^{\text{eff}}}\tau\equiv T_{\partial A}\ e^{-\beta H^{ \text{eff}}}\). Hence a size-\(|A|\) interval's Renyi-2 entropy is related to the two-point correlator of the twist operator acting on sites separated by a distance \(|A|\). In the infrequent-measurement phase with spontaneously broken replica symmetry, this correlator decays exponentially, leading to volume-law Renyi entropies. Intuitively, domain walls in this discrete ferromagnetic phase have finite line tensions. Hence creating a domain wall costs an extensive (volume-law) amount of free energy. Under frequent measurements, in the putative critical phase, the permutation degrees of freedom are gapped. The twist operator should likely, instead, couple to the remaining low-energy SU(2) modes. Analyzing the critical phase's nature, requiring the effective Hamiltonian's replica limit, presents a clear challenge for future work.
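For reference, the quantity that Eq. (9) reproduces in the replica formalism can be computed directly from a pure-state vector. The following sketch evaluates the Renyi-2 entropy of a left interval via the Schmidt decomposition across the cut; the Haar-random test state and system size are only placeholders.

```python
import numpy as np

def renyi2_entropy(psi, L, k):
    """Renyi-2 entropy S2 = -log tr(rho_A^2) of the first k sites of an
    L-qubit pure state psi (length 2**L), via the Schmidt decomposition."""
    M = psi.reshape(2**k, 2**(L - k))
    s = np.linalg.svd(M, compute_uv=False)   # Schmidt coefficients
    probs = s**2                              # eigenvalues of rho_A
    return -np.log(np.sum(probs**2))

# Example: a random 10-qubit state, half-chain Renyi-2 entropy.
rng = np.random.default_rng(1)
L = 10
psi = rng.normal(size=2**L) + 1j * rng.normal(size=2**L)
psi /= np.linalg.norm(psi)
print(renyi2_entropy(psi, L, L // 2))
```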
## VI Outlook
We studied the dynamics of monitored random circuits with SU(2) symmetry, i.e., with three noncommuting charges: the total spin angular momentum's components. First, we numerically discovered a purification transition between a mixed phase (at \(p<p_{\text{c}}\approx 0.35\)) and a critical phase (at \(p>p_{\text{c}}\)). In the critical phase, the purification time scales as \(t_{\text{P}}\sim L^{2}\). The purification transition doubles as an entanglement transition, which separates volume-law (at \(p<p_{\text{c}}\)) and subextensive (logarithmic or small-power-law, at \(p>p_{\text{c}}\)) entanglement scalings. Even in the measurement-only limit (at \(p=1\)), the symmetry's non-Abelian nature enables nontrivial entanglement scaling. Additionally, we observed a spin-sharpening transition across which there is a parametric change in the time at which one can (in principle) learn the system's total spin by monitoring measurements. The time scale is \(t\sim L^{2}\) in the "spin-sharp" phase and \(t\sim L^{3}\) in the "spin-fuzzy" phase.
Finally, we interpreted our results within an effective replica statistical-mechanics model. The model supports the mixed-phase prediction that \(t_{\text{P}}\sim e^{L}\). Also, the model hints at a possible spin-wave mechanism for the \(t_{\text{P}}\sim L^{2}\) dynamics in the critical phase. Furthermore, a Lieb-Schultz-Mattis-type anomaly obstruction implies the absence of an area-law phase. Instead, the entanglement should scale logarithmically with \(L\) in the critical phase, consistently with our numerics.
Our results open several opportunities for future work. One is to understand the purification/entanglement and sharpening transitions analytically. Second, one might leverage spin sharpening to observe an MIPT experimentally, avoiding the postselection problem (Sec. I). The thermodynamics of noncommuting charges have already been observed experimentally with trapped ions [45]. Superconducting qubits, quantum dots, and spinful fermionic atoms are natural candidates, too [46; 47]. Third, our system offers a playground for numerically exploring the recent result that non-Abelian symmetries constrain local unitary circuits more than Abelian symmetries do and so may constrain chaos more [66; 67; 68; 69]. Finally, efficient classical and quantum spin-sharpening decoders merit exploration.
###### Acknowledgements.
We thank Ehud Altman, Fergus Barratt, David Huse, Andreas Ludwig, and Xiaoliang Qi for helpful discussions. We also thank Michael Gullans for galvanizing this collaboration. This work received support from the National Science Foundation under QLCI grant OMA-2120757 (N.Y.H.), the John Templeton Foundation under award No. 62422 (N.Y.H. and S.M.), the Air Force Office of Scientific Research under Grant No. FA9550-21-1-0123 (R.V.), the Alfred P. Sloan Foundation through Sloan Research Fellowships (A.C.P. and R.V.), National Science Foundation under NSF Grants No. DMR-1653271 (S.G.), and the Vanier C.G.S. (S.M.). This work was supported by the Simons Collaboration on Ultra-Quantum Matter, which is a grant from the Simons Foundation (651440, U.A.).
## Appendix A Additional numerics elucidating the entanglement dynamics and spin sharpening
In Sec. III B, we claimed that \(L^{2}\) time steps suffice for the bipartite entanglement entropy, \(S_{\rm f}\), to plateau. Figure 7 justifies this claim, presenting \(S_{\rm f}\) as a function of \(\log(t)\) for \(\leq L^{2}\) time steps at the extreme values \(p=0,1\). At both extrema, \(S_{\rm f}\) stops changing (to within minor fluctuations) by \(L^{2}\) time steps.
Section III B also discussed different fittings for \(S_{\rm f}\) versus \(L\). Figure 8 presents three fittings [\(L\), \(\log(L)\), and \(\sqrt{L}\)] at each of three measurement rates (\(p=0\), \(p=1\), and \(p\approx p_{\rm c}\)). At \(p=0\), the linear fit is the best. This observation is consistent with the existence of a volume-law phase at \(p=0\). At \(p=0.35\approx p_{\rm c}\), it is unclear which fit is most accurate. However, the two nonlinear fits are visibly best. The \(p=1\) fits resemble the \(p=0.35\) ones.
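The fitting comparison can be reproduced with a few lines of SciPy; the sketch below fits the three candidate forms to a synthetic \(S_{\rm f}(L)\) data set (a stand-in for the measured values) and compares their residuals.

```python
import numpy as np
from scipy.optimize import curve_fit

# Candidate scaling forms for the saturated entropy S_f(L).
forms = {
    "linear": lambda L, a, b: a * L + b,
    "log":    lambda L, a, b: a * np.log(L) + b,
    "sqrt":   lambda L, a, b: a * np.sqrt(L) + b,
}

def best_fit(L_vals, S_vals):
    """Fit each form and report the sum of squared residuals (smaller is better)."""
    results = {}
    for name, f in forms.items():
        popt, _ = curve_fit(f, L_vals, S_vals)
        results[name] = np.sum((S_vals - f(L_vals, *popt))**2)
    return results

# Synthetic example standing in for the measured S_f(L) at one value of p.
L_vals = np.array([8, 10, 12, 14, 16, 18], dtype=float)
S_vals = 0.9 * np.log(L_vals) + 0.3          # pretend the data are logarithmic
print(best_fit(L_vals, S_vals))
```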
Section IV claimed that our \(p>p_{\#}\) data are compatible with a sharpening time scale \(\sim L^{2}\) deep in the critical phase. Figure 9 justifies this claim. We plot \(\log(S_{A})\) against \(t/L^{2}\) at various \(p\) values. The initial collapse occurs at \(p>p_{\#}\). The \(L\)=8 numerics deviate from the collapse when \(p\in[0.35,0.45]\cup[0.8,1]\). We suspect that these deviations arise from finite-size effects.
## Appendix B Mutual information
In Sec. III, we studied purification and entanglement dynamics. We complement that numerical analysis by studying mutual information. To introduce the mutual information, we consider a quantum system in a state \(|\psi\rangle\). Let \(\mathcal{A}\) and \(\mathcal{B}\) denote subsystems. The reduced state of \(\mathcal{A}\) is \(\rho_{\mathcal{A}}\coloneqq\operatorname{tr}_{\bar{\mathcal{A}}}(|\psi\rangle\langle\psi|)\), obtained by tracing out the complement \(\bar{\mathcal{A}}\) of \(\mathcal{A}\). The reduced states of \(\mathcal{B}\) and \(\mathcal{AB}\) are defined analogously. The mutual information between \(\mathcal{A}\) and \(\mathcal{B}\) is
\[I(\mathcal{A}:\mathcal{B})\coloneqq S(\rho_{\mathcal{A}})+S(\rho_{\mathcal{B} })-S(\rho_{\mathcal{AB}}). \tag{11}\]
The mutual information upper-bounds equal-time correlators between local operators acting nontrivially on \(A\) alone and on \(B\) alone [70]. We denote by \(I^{(1)}_{j,k}\) the mutual information between sites \(j\) and \(k\). We denote by \(I^{(2)}_{j,k}\) the mutual information between the pair \((j,j+1)\) and the pair \((k,k+1)\).
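A minimal sketch of Eq. (11) for a pure state of \(L\) qubits is given below; the reduced density matrices are obtained by tracing out the complementary sites, and the chosen regions, test state, and system size are illustrative.

```python
import numpy as np

def reduced_density_matrix(psi, L, keep):
    """Reduced density matrix of the qubits listed in `keep`, obtained from an
    L-qubit pure state by tracing out the remaining qubits."""
    psi = psi.reshape([2] * L)
    traced = [i for i in range(L) if i not in keep]
    perm = list(keep) + traced
    psi = np.transpose(psi, perm).reshape(2**len(keep), 2**len(traced))
    return psi @ psi.conj().T

def von_neumann_entropy(rho):
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]
    return -np.sum(evals * np.log(evals))

def mutual_information(psi, L, region_a, region_b):
    """I(A:B) = S(rho_A) + S(rho_B) - S(rho_AB), as in Eq. (11)."""
    S_a = von_neumann_entropy(reduced_density_matrix(psi, L, region_a))
    S_b = von_neumann_entropy(reduced_density_matrix(psi, L, region_b))
    S_ab = von_neumann_entropy(
        reduced_density_matrix(psi, L, tuple(region_a) + tuple(region_b)))
    return S_a + S_b - S_ab

# Example: two-site mutual information I^{(2)} between bonds (0,1) and (4,5).
rng = np.random.default_rng(2)
L = 8
psi = rng.normal(size=2**L) + 1j * rng.normal(size=2**L)
psi /= np.linalg.norm(psi)
print(mutual_information(psi, L, (0, 1), (4, 5)))
```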
Figure 10 presents \(I^{(1)}_{1,j}\) and \(I^{(2)}_{1,j}\), plotted against \(j\), at \(L=20\). \(I^{(1)}_{1,j}\) grows with \(p\) and rapidly decays with \(j\).5 At all \(p\), \(I^{(1)}_{j,k}\) decays rapidly over distances \(|j-k|\) larger than a few sites. This result is intuitive, since \(I^{(1)}_{j,k}\) contains information about correlations between individual spin components, whereas the measurements and unitaries correlate spin-fusion channels (a property of two or more spins).
Footnote 5: Throughout these numerics, the last layer of gates was applied on the odd bonds (sites 1 and 2, sites 3 and 4, etc.), leading to larger \(I^{(1)}_{j,j+1}\) for odd \(j\) than even \(j\).
In comparison, \(I^{(2)}_{j,k}\) decays more gradually with the distance \(|j-k|\) at all \(p>0\). For particular sites \(j\) and \(k\), \(I^{(2)}_{j,k}\) may depend on \(p\) nonmonotonically (Fig. 10). However, the asymptotic decay rate monotonically decreases as \(p\) decreases. To explore this decay rate, we examine the mutual information between antipodal pairs of sites: \(I^{(2)}_{1,L/2}\) (Fig. 11a). Given the limitations on system size, we cannot convincingly determine the asymptotic
Figure 8: Long-time bipartite entanglement entropy vs. system size. At \(p=0\), \(S_{\rm f}\sim L\), signaling a volume law. At \(p=1\), the entropy scales logarithmically or as a small power law: \(S_{\rm f}\sim\log(L)\), or \(S_{\rm f}\sim\sqrt{L}\).
Figure 7: The bipartite entanglement entropy saturates after \(L^{2}\) time steps. At the extreme \(p\) values \(p=0,1\), \(S_{\rm f}\) stops changing (to within minor fluctuations).
decay's functional form. A power-law decay fits the data reasonably well (Fig. 11b). The fitted power \(a\) gradually decreases with \(p\). Furthermore, \(a\) changes qualitatively around \(p_{\rm c}=0.35\)--from changing quickly with \(p\), at \(p<p_{\rm c}\), to drifting slowly near \(-2\), at \(p>p_{\rm c}\). Given the small range of system sizes available, exponential decay fits the data reasonably well, too; we cannot rule out this behavior. Yet, given the other critical scaling behavior at \(p>p_{\rm c}\), we expect power-law decay to be more natural in this regime. The data also prohibit confident distinction between (i) one power at \(p>p_{\rm c}\), with drifts in the fitted exponent, due to finite-size corrections, and (ii) continuously evolving power laws (as would arise in, say, a Luttinger liquid).
## Appendix C Effective Hamiltonian
In this appendix, we map the monitored dynamics onto the imaginary-time evolution of a replica effective Hamiltonian. Since the two-site gates are sampled indepen
Figure 11: **Mutual information at antipodal sites.** We call sites \(1\) and \(L/2\) antipodal. (a) \(\log\Bigl{(}I_{1,L/2}^{(2)}\Bigr{)}\) is plotted against \(\log(L)\) at several \(L\) values. Using the fit function \(\log\Bigl{(}I_{1,L/2}^{(2)}\Bigr{)}=a\log(L)+b\), we identify the critical exponent \(a\) in \(I_{1,L/2}^{(2)}\sim L^{a}\). (b) Plotting \(a\) against \(p\), we find that \(I_{1,L/2}^{(2)}\) decays as a power law in both phases, where \(a\) seems to be drifting.
Figure 10: **Mutual information between sites.** The mutual information between (a) sites \(1\) and \(j\) decays more quickly than between (b) sites \((1,2)\) and \((j,j+1)\). The inset highlights how \(I_{j,k}^{(2)}\) increases and then decreases as \(p\) grows, for some \(j\). The errorbars represent one standard deviation.
Figure 9: **In the critical phase, the numerics are consistent with a \(\sim L^{2}\) sharpening time scale.** The entropy \(S_{A}\) quantifies the ancilla qubit’s entanglement with the system. We plot \(\log(S_{A})\) for clarity, as \(S_{A}\) decays exponentially. \(t/L^{2}\) runs along the \(x\)-axis to demonstrate the numerics are consistent with a \(\sim L^{2}\) sharpening time scale. We used \(30\,000\) samples when \(L=8\) to \(14\); and \(10\,000\) samples when \(L=16\) to \(L=18\). The \(y\)-axis’s lower limit is \(\log\bigl{(}10^{-3}\bigr{)}\approx-6.91\).
dently, the above average over \(U\) factorizes over averages of two-site \(\mathrm{SU}(2)\)-symmetric gates. We parameterize the \(\mathrm{SU}(2)\)-symmetric gates as
\[U_{i,j}=P_{t}+e^{i\theta}P_{s}=e^{i\theta P_{s}}. \tag{10}\]
\(P_{t}\) and \(P_{s}\) denote projectors onto the triplet and singlet sectors of the \(\mathrm{SU}(2)\) symmetry on two qubits. \(i\) and \(j\) denote the qubits being acted on. We will suppress the subscripts \(i\) and \(j\) unless they are needed to avoid confusion. This is similar to Eq. (1), modulo a global phase factor. The variable \(\theta\) is sampled from a random distribution--for example, from a uniform distribution between \(0\) and \(2\pi\). The \(Q\)th moment of the unitary gates is
\[\mathcal{U}\equiv U^{\otimes Q}\otimes\left(U^{\dagger}\right)^{\otimes Q}= \mathrm{e}^{i\sum_{a}\theta\left(P_{s}^{(a)}-P_{s}^{(a^{*})}\right)}\,. \tag{11}\]
\(P_{s}^{(a)}\) denotes the projector onto the singlet sector for replica index \(a\). The \(*\) symbol indicates that the operator acts on the conjugate (backward) copy of the \(a\) replica (with \(a=1,2,\ldots,Q\)). We assume that \(\theta\) is sampled from a Gaussian distribution \(P(\theta)=\frac{1}{\sqrt{2\pi J}}e^{-\theta^{2}/(2J)}\), where \(J\) is a large constant controlling the unitary dynamics' scrambling strength. Performing the average over \(\theta\) yields
\[\int d\theta\,P(\theta)\,\mathcal{U} =\exp\left(-J\left\{\sum_{a=1}^{Q}\left[P_{s}^{(a)}-P_{s}^{(a^{*}) }\right]\right\}^{2}\right) \tag{12}\] \[\equiv\exp(-H_{ij}^{u}).\]
To model measurements in the continuum-time limit, we consider the weak-measurement protocol of [58]. The action of the measurement of a local operator \(O\) on the replica density matrix is described as
\[\sum_{m}P_{m}^{\otimes Q}\rho^{(Q)}P_{m}^{\otimes Q} \tag{13}\] \[\to\int dm\,e^{-\gamma\sum_{a=1}^{Q}\left[(O^{a}-m)^{2}+(O^{a^{* }}-m)^{2}\right]}\rho^{(Q)}\] \[=\exp\left(-\gamma\sum_{a,b=1}^{2Q}O^{a}\Pi_{ab}O^{b}\right)\rho ^{(Q)}\] (14) \[\equiv\exp(-\gamma H^{\mathrm{m}})\rho^{(Q)}.\]
\(m\) denotes the weak measurement outcome, and \(\Pi_{a,b}=\delta_{a,b}-1/(2Q)\). We have identified \(a^{*}\) with the index \(Q+a\), for \(a=1,2,\ldots,Q\). From now on, this identification will be implicit whenever the replica index \(a\) is summed from \(1\) to \(2Q\). As in the main text, \(\gamma\) denotes the weak-measurement strength. For \(\mathrm{SU}(2)\)-symmetric systems, we measure the operators \(O=\vec{S}_{i}\cdot\vec{S}_{j}\).
The expressions above are for averages of single \(2\)-site unitary gates and measurements. We combine these local averages and assume a Trotter decomposition. The monitored evolution of the density matrix's averaged \(Q^{\mathrm{th}}\) moment is given by imaginary-time evolution under an effective Hamiltonian \(H^{\mathrm{eff}}\): \(\overline{\rho^{(Q)}(t)}=e^{-tH^{\mathrm{eff}}}\rho_{0}^{(Q)}\). The effective Hamiltonian decomposes as \(H^{\mathrm{eff}}=\sum_{i}\left(H_{i,i+1}^{u}+H_{i,i+1}^{m}\right)\), with
\[H_{i,j}^{\mathrm{m}}=\gamma\sum_{a,b}\left(\vec{S}_{i}^{a}\cdot\vec{S}_{j}^{a }\right)\Pi_{a,b}\left(\vec{S}_{i}^{b}\cdot\vec{S}_{j}^{b}\right). \tag{15}\]
The long-time properties of \(\overline{\rho^{(Q)}(t)}\) are thus described by low-temperature/ground-state properties of \(H^{\mathrm{eff}}\). The Hamiltonian has a \(S_{Q}\times S_{Q}\) symmetry, corresponding to the global permutation among the \(Q\) forward replicas and \(Q\) backward replicas.
A more illuminating way of understanding the structure of the Hamiltonian's ground states is to combine spin-\(1/2\) particles at replica \(a\) and \(\sigma(a)\), to form a fundamental representation of \(\mathrm{SU}(4)\). The Hamiltonian's unitary part, in this identification, can be written as [the label \(\sigma\) signifies that we have combined replicas \((a,\sigma(a))\) to form an \(\mathrm{SU}(4)\) representation] \(H^{u}=\frac{J}{2}\left(H_{0}[\sigma]+V^{u}[\sigma]\right)\). The \(H_{0}[\sigma]\) are \(Q\) copies of the \(\mathrm{SU}(4)\) ferromagnet, and
\[V^{u}[\sigma]= \sum_{i}\sum_{a<b}\left(\mathrm{Sw}_{i,i+1}^{a}-\mathrm{Sw}_{i,i+1}^{ \sigma(a)*}\right)\left(\mathrm{Sw}_{i,i+1}^{b}-\mathrm{Sw}_{i,i+1}^{\sigma(b )*}\right). \tag{16}\]
\(\mathrm{Sw}_{i,j}^{a}\) denotes the SWAP operator between the spins at sites \(i\) and \(j\) in replica index \(a\): \(\mathrm{Sw}_{i,j}=1/2+2\vec{S}_{i}\cdot\vec{S}_{j}\). In terms of these SWAP operators, the \(\mathrm{SU}(4)\) ferromagnet is
\[H_{0}[\sigma]= \sum_{a=1}^{Q}\sum_{i}\left[1-\left(\mathrm{Sw}_{i,i+1}^{a} \right)\left(\mathrm{Sw}_{i,i+1}^{\sigma(a)*}\right)\right]. \tag{17}\]
The measurement part of the Hamiltonian also decomposes into two terms, one of which is again the \(\mathrm{SU}(4)\) ferromagnet: \(H^{\mathrm{m}}=\frac{\gamma}{Q}H_{0}[\sigma]-\frac{\gamma}{2Q}V^{\mathrm{m}}[\sigma]\), where
\[V^{\mathrm{m}}[\sigma]=\sum_{i}\sum_{a\neq b}\left(\mathrm{Sw}_{i,i+1}^{a}+ \mathrm{Sw}_{i,i+1}^{\sigma(a)*}\right)\left(\mathrm{Sw}_{i,i+1}^{b}+\mathrm{ Sw}_{i,i+1}^{\sigma(b)*}\right). \tag{18}\]
|
2307.12631
|
Fate of localization in coupled free chain and disordered chain
|
It has been widely believed that almost all states in one-dimensional (1d)
disordered systems with short-range hopping and uncorrelated random potential
are localized. Here, we consider the fate of these localized states by coupling
between a disordered chain (with localized states) and a free chain (with
extended states), showing that states in the overlapped and un-overlapped
regimes exhibit totally different localization behaviors, which is not a phase
transition process. In particular, while states in the overlapped regime are
localized by resonant coupling, in the un-overlapped regime of the free chain,
significant suppression of the localization with a prefactor of $\xi^{-1}
\propto t_v^4/\Delta^4$ appeared, where $t_v$ is the inter-chain coupling
strength and $\Delta$ is the energy shift between them. This system may exhibit
localization lengths that are comparable with the system size even when the
potential in the disordered chain is strong. We confirm these results using the
transfer matrix method and sparse matrix method for systems $L \sim 10^6 -
10^9$. These findings extend our understanding of localization in
low-dimensional disordered systems and provide a concrete example, which may
call for much more advanced numerical methods in high-dimensional models.
|
Xiaoshui Lin, Ming Gong
|
2023-07-24T09:11:25Z
|
http://arxiv.org/abs/2307.12631v1
|
# Fate of localization in coupled free chain and disordered chain
###### Abstract
It has been widely believed that almost all states in one-dimensional (1d) disordered systems with short-range hopping and uncorrelated random potential are localized. Here, we consider the fate of these localized states by coupling between a disordered chain (with localized states) and a free chain (with extended states), showing that states in the overlapped and un-overlapped regimes exhibit totally different localization behaviors, which is not a phase transition process. In particular, while states in the overlapped regime are localized by resonant coupling, in the un-overlapped regime of the free chain, significant suppression of the localization with a prefactor of \(\xi^{-1}\propto t_{\rm v}^{4}/\Delta^{4}\) appeared, where \(t_{\rm v}\) is the inter-chain coupling strength and \(\Delta\) is the energy shift between them. This system may exhibit localization lengths that are comparable with the system size even when the potential in the disordered chain is strong. We confirm these results using the transfer matrix method and sparse matrix method for systems \(L\sim 10^{6}-10^{9}\). These findings extend our understanding of localization in low-dimensional disordered systems and provide a concrete example, which may call for much more advanced numerical methods in high-dimensional models.
Anderson localization (AL), which describes the phenomenon that the disorder totally suppresses the diffusion of the system, has attracted a great deal of attention for many decades [1; 2; 3; 4; 5; 6]. It has been found that the spatial dimension plays an essential role in AL [7; 8; 9; 10; 11]. In the one-dimensional (1d) tight-binding model with random potential, the localization length is given by [12; 13]
\[\xi_{0}^{-1}(E)=\frac{v^{2}}{8t^{2}-2E^{2}}=\frac{V^{2}}{96t^{2}-24E^{2}}, \tag{1}\]
where \(v^{2}=\langle v_{i}^{2}\rangle\) is the variance of the potential \(v_{i}\in U[-V/2,V/2]\), with \(V\) being the disorder strength, \(t\) is the hopping strength between neighboring sites, and \(E\) is the eigenvalue (see Eq. 14 in Ref. [12] for more details). When the system size \(L\) is much larger than \(\xi_{0}\), \(L\gg\xi_{0}\), localization of wave functions can be observed and the system is in the localized phase without conductance. In the presence of weak disorder, \(V\ll t\), we have \(|E|<2t\), thus all states should be localized with \(\xi_{0}^{-1}>0\). For example, when \(V\sim 0.1t\), \(\xi_{0}\sim 10^{4}\), which can be easily confirmed by numerical simulation. The fate of the states in 1d systems changes fundamentally with incommensurate potentials [14; 15; 16; 17], long-range correlated disorders [18; 19; 20; 21; 22], and many-body interactions [23; 24; 25; 26; 27].
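As a quick sanity check of Eq. 1, the following sketch evaluates the perturbative prediction and compares it with a direct transfer-matrix estimate of the Lyapunov exponent for a single disordered chain; the chain length, energy, and disorder strength are illustrative, and only approximate agreement is expected at weak disorder.

```python
import numpy as np

def xi0_inverse(E, V, t=1.0):
    """Eq. (1): inverse localization length of the 1d Anderson chain (weak disorder)."""
    return V**2 / (96 * t**2 - 24 * E**2)

def lyapunov_exponent(E, V, L, t=1.0, seed=0):
    """Transfer-matrix estimate of gamma(E) = 1/xi(E) for a single chain with
    on-site disorder v_i uniform in [-V/2, V/2]."""
    rng = np.random.default_rng(seed)
    vec = np.array([1.0, 0.0])
    log_norm = 0.0
    for _ in range(L):
        v = rng.uniform(-V / 2, V / 2)
        T = np.array([[(E - v) / t, -1.0], [1.0, 0.0]])
        vec = T @ vec
        norm = np.linalg.norm(vec)
        log_norm += np.log(norm)
        vec /= norm                                 # renormalize to avoid overflow
    return log_norm / L

E, V = 0.5, 1.0
print(1.0 / xi0_inverse(E, V))                      # xi_0 from Eq. (1)
print(1.0 / lyapunov_exponent(E, V, L=500_000))     # numerical estimate of xi
```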
While the physics of disordered models has been widely discussed [3; 9; 13; 28], the fate of localization under the coupling of extended and localized states is much less investigated. We are interested in this issue because of the dilemma that (I) the random uncorrelated potential induces localization for the extended states in 1d systems [8]; and (II) the hybridization between localized and extended states may lead to delocalization [29]. The interplay between these two mechanisms may lead to different physics. In this work, we propose a coupled disordered model (Fig. 1) to address this problem. Our model is constructed from one free chain (with all states extended) and one disordered chain (with all states localized). Two major conclusions have been established: (1) While all states exhibit localization in the presence of inter-chain coupling, their localization length exhibits a distinct difference in the overlapped and un-overlapped spectra, which is not a phase transition process; (2) the localization length for states in the un-overlapped regime of the free chain is significantly suppressed, given by
\[\xi^{-1}(E)\simeq\frac{t_{\rm v}^{4}V^{2}}{(96t^{2}-24(E-\Delta)^{2})\Delta^{4 }}=\frac{t_{\rm v}^{4}}{\Delta^{4}}\xi_{0}^{-1}(E-\Delta), \tag{2}\]
in the limit when \(\Delta\gg|V|\). Here \(t_{\rm v}\) is the inter-chain coupling, and \(\Delta\) is the energy shift between the free and disordered chains. Thus localization is greatly suppressed when \(t_{\rm v}\ll\Delta\). For instance, when \(t_{\rm v}=0.1t\) and \(\Delta=10t\), the localization length can be suppressed by eight orders of magnitude. We examine the above conclusions using the transfer matrix method and the sparse matrix method with system sizes \(L\sim 10^{6}-10^{9}\). Our results show that the inter-chain coupling, disorder strength, and energy shift are the three major factors influencing the localization length of states in the un-overlapped regime. In the regime when \(\xi\gtrsim L\), we can
Figure 1: The realization of the coupled disordered model. The free chain does not have random potential, thus all states are extended; yet in the disordered chain with random potential, all states are localized. The coupling between them is the major concern of this work.
understand the localization of wave functions with the following general theorem, which is supported by a large body of research; see the review articles [3; 30; 31; 9].
_Theorem_: In 1d disordered systems with short-range hopping and uncorrelated random potential, almost all states are localized in the thermodynamic limit (\(L\to\infty\)).
The exact proof of this theorem is still a great challenge at the present stage; however, it can be understood intuitively from the observation by Mott _et al_[7], the argument by Thouless [32], and the scaling argument by Abrahams _et al_, in which the \(\beta\) function is always negative [8]. This theorem is also addressed by the celebrated Dorokhov-Mello-Pereyra-Kumar equation [30] and the non-linear sigma model [33]. The theorem says almost all, rather than all, states are localized because, in the 1d model with off-diagonal random potential, the state with \(E=0\) is extended while all the other states are localized [34]. Mathematicians have taken great interest in this problem and have proved this theorem for random potentials [35; 36; 37; 38], showing the absence of a continuous spectrum of extended states. This theorem plays a decisive role for localization when \(\xi\gtrsim L\), a regime beyond the capability of numerical simulations.
_Physical model and methods_: We consider the following coupled disordered model (see Fig. 1)
\[H=H_{0}+H_{1}+\sum_{m,\sigma}t_{\rm v}a^{\dagger}_{m,\sigma}a_{m,\bar{\sigma}}, \tag{3}\]
where \(H_{\sigma}=\sum_{m}(t_{\sigma}a^{\dagger}_{m,\sigma}a_{m+1,\sigma}+{\rm h.c.})+\sum_{m}V_{m,\sigma}a^{\dagger}_{m,\sigma}a_{m,\sigma}\), with \(\sigma=0\) labeling the free chain and \(\sigma=1\) the disordered chain, and \(\bar{\sigma}=1-\sigma\in\{0,1\}\). Here \(V_{m,0}=\Delta\) is the energy shift of the free chain \(H_{0}\), and \(V_{m,1}\in U[-V/2,V/2]\) is the random potential in the disordered chain \(H_{1}\). When \(t_{\rm v}=0\), this model reduces to a free chain with all states extended and a disordered chain with all states localized. By changing \(\Delta\), the energy spectra of these two chains can be un-overlapped (Fig. 2 (a)) or overlapped (Fig. 2 (b)). By the theorem, all states should be localized in the presence of inter-chain coupling (\(t_{\rm v}\neq 0\)). The fundamental question is: what are the quantitative differences between the states in the overlapped regime and the un-overlapped regime of the coupled model during localization?
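To make the model concrete, the sketch below assembles the coupled-chain Hamiltonian of Eq. 3 as a sparse matrix, with the first \(L\) basis states belonging to the free chain and the last \(L\) to the disordered chain; the parameter values are illustrative.

```python
import numpy as np
import scipy.sparse as sp

def coupled_hamiltonian(L, V, Delta, t_v, t0=1.0, t1=1.0, seed=0):
    """Coupled-chain Hamiltonian of Eq. (3). Basis ordering: the first L indices
    form the free chain (on-site energy Delta), the last L the disordered chain
    (on-site energies uniform in [-V/2, V/2])."""
    rng = np.random.default_rng(seed)
    rows, cols, vals = [], [], []

    def add(i, j, x):                       # add x to H[i, j] and H[j, i]
        rows.extend([i, j]); cols.extend([j, i]); vals.extend([x, x])

    for m in range(L - 1):                  # intra-chain hoppings
        add(m, m + 1, t0)                   # free chain
        add(L + m, L + m + 1, t1)           # disordered chain
    for m in range(L):
        add(m, L + m, t_v)                  # inter-chain coupling t_v
    diag = np.concatenate([np.full(L, float(Delta)),
                           rng.uniform(-V / 2, V / 2, size=L)])
    H = sp.coo_matrix((vals, (rows, cols)), shape=(2 * L, 2 * L)).tocsr()
    return H + sp.diags(diag)

H = coupled_hamiltonian(L=2000, V=10.0, Delta=-10.0, t_v=0.1)
```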
We apply the transfer matrix and sparse matrix methods, whose available size is \(L\sim 10^{6}-10^{9}\), to understand the localization of wave functions in these two regimes. In the transfer matrix method [39], the Lyapunov exponent \(\gamma(E)=\xi(E)^{-1}\) is defined as the smallest positive eigenvalue of the matrix
\[\Gamma(E)=\lim_{L\to\infty}\frac{1}{2L}\ln\Bigl{(}T^{\dagger}_{1}\ldots T^{ \dagger}_{L}T_{L}\ldots T_{1}\Bigr{)}, \tag{4}\]
where \(T_{i}\) is the transfer matrix at the \(i\)-th site. By the Oseledets ergodic theorem [40; 41], the above product of transfer matrices converges as \(L\to\infty\). When \(\gamma(E)\neq 0\), the state with eigenvalue \(E\) is localized. In the sparse matrix method, we use the shift-invert method [42; 43] to obtain about \(N_{E}=20\) eigenstates \(|\psi_{E_{i}}\rangle\) with eigenvalues \(E_{i}\) around a given \(E\) and define the averaged inverse participation ratio (IPR) as [3]
\[\langle{\rm IPR}\rangle_{E}=\frac{1}{N_{E}}\sum_{i=1}^{N_{E}}\sum_{m=0}^{2L-1} |\psi_{E_{i}}(m)|^{4}. \tag{5}\]
For the extended state, \(\langle{\rm IPR}\rangle_{E}\propto L^{-1}\) and for the localized state, \(\langle{\rm IPR}\rangle_{E}\) is finite. Furthermore, we can define the fractal dimension \(\tau_{2}(E,L)=-\ln(\langle{\rm IPR}\rangle_{E})/\ln(L)\) and its limit \(\tau_{2}(E)=\lim_{L\to\infty}\tau_{2}(E,L)\). We have \(\tau_{2}(E)=0\) for localized states, \(\tau_{2}(E)=1\) for extended states, and \(0<\tau_{2}(E)<1\) for critical states, respectively [44; 45; 3]. We note that the IPR should be proportional to the Lyapunov exponent \(\gamma(E)\) for an exponentially localized state \(\psi_{m}\sim e^{-|m|/\xi}\), with \({\rm IPR}\propto\xi^{-1}=\gamma\), in the limit \(L\gg\xi\).
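The shift-invert/IPR diagnostics can then be sketched as follows, reusing the `coupled_hamiltonian` constructor from the sketch above; the target energies, number of states \(N_{E}\), and system size are illustrative.

```python
import numpy as np
from scipy.sparse.linalg import eigsh

def averaged_ipr(H, E, n_states=20):
    """Average IPR (Eq. 5) of the n_states eigenvectors closest to energy E,
    obtained with shift-invert Lanczos."""
    evals, evecs = eigsh(H, k=n_states, sigma=E, which='LM')
    return np.mean(np.sum(np.abs(evecs)**4, axis=0))

def fractal_dimension(H, E, n_states=20):
    """tau_2(E, L) = -ln<IPR>_E / ln(L), with L half the matrix dimension."""
    L = H.shape[0] // 2
    return -np.log(averaged_ipr(H, E, n_states)) / np.log(L)

H = coupled_hamiltonian(L=4000, V=10.0, Delta=-10.0, t_v=0.1)   # from the sketch above
print(fractal_dimension(H, E=-10.0))   # un-overlapped regime of the free chain
print(fractal_dimension(H, E=0.0))     # un-overlapped regime of the disordered chain
```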
_Physics in the overlapped and un-overlapped regimes_: Although all states are expected to be localized in our model, the effect of inter-chain coupling in the overlapped regime and un-overlapped regime should be different, leading to distinct localization behavior. We consider
Figure 2: (a) (b) The averaged density of states \(\rho(E)\) for the free and disordered chains with \(t_{\rm v}=0\). Other parameters are (a) \(\Delta=-10\), \(V=10\); and (b) \(\Delta=-6\) and \(V=10\). (c) (d) Logarithm of the wave function amplitude with different energy against the lattice index \(m\) for the coupled disordered model (\(t_{\rm v}=0.1\)). The index \(m<L\) (\(m\geq L\)) is for Hilbert space of \(H_{0}\) (\(H_{1}\)). Inset of (c) shows the detailed wave function around its localized center. In panel (a) and (b), \(L=10^{3}\); (c) and (d) \(L=10^{6}\).
two different cases, which are shown in Fig. 2 (a) and (b). In the first case, the spectra in the two chains are un-overlapped to avoid the resonant coupling between the extended and localized states; while in the second case, resonant coupling is induced in their overlapped regime. Furthermore, we present their typical wave functions in these regimes in Fig. 2 (c) and (d) with \(t_{\rm v}=0.1\). The results show that the wave functions in the un-overlapped regime of the disordered chain are exponentially localized with localization length around unity (see the state with \(E=0\) in Fig. 2 (c) and (d)). The wave functions in the overlapped regime are also exponentially localized, however, with localization length \(\xi\sim 0.05L\), which is much larger than that of the localized states in the un-overlapped regime of the disordered chain. Strikingly, the wave functions in the un-overlapped regime of the free chain are extended even when the system size \(L=10^{6}\). Similar features can be found when \(L\) is increased to \(L\sim 10^{9}\) for smaller \(t_{\rm v}\). This seems to contradict the general theorem. This dilemma is the major concern of this work.
Next, we investigate the asymptotic behavior of the wave function using the transfer matrix method. In Fig. 3 (a) and (b), we present the Lyapunov exponent \(\gamma(E)\) against the energy \(E\) for a system with size \(L=10^{9}\). When \(t_{\rm v}=0\), the states in the free chain are extended and the states in the disordered chain are localized. When \(t_{\rm v}=0.1\), all states become localized with \(\gamma(E)>L^{-1}\). However, there are three distinct energy regimes for \(\gamma(E)\), corresponding to the overlapped and un-overlapped regimes. In the overlapped regime, we have \(\gamma(E)\sim 10^{-4}\), while in the un-overlapped regime, we have \(\gamma(E)\sim 10^{0}\) (disordered chain) or \(\gamma(E)\sim 10^{-7}\) (free chain). These distinct behaviors are unique features of the coupled disorder model, which should not be regarded as some kind of phase transition between extended and localized states; see below.
We further characterize the three energy regimes using IPR and fractal dimension \(\tau_{2}(E,L)\), with results presented in Fig. 3 (c) and (d). It is found that \(\tau_{2}(E,L)\to 0\) for states in the overlapped regime and un-overlapped regime of the disordered chain, indicating localization. However, \(\tau_{2}(E,L)\to 1\) for states in the un-overlapped regime of the free chain, contradicting the previous theorem at first sight. This contradiction is actually a finite-size effect since \(L=2^{18}\sim 10^{5}<\xi\sim 10^{7}\). Thus, it is expected that all these states will be localized from the general theorem in the thermodynamic limit (\(L\gg\xi\), or equivalently, \(L\gamma(E)\gg 1\)). This also clarifies the previous disagreement presented in Fig. 2.
_Origin of the suppressed localization length_: The above results raise some fundamental questions that need to be addressed much more carefully. In the overlapped regime, the localized states and extended states are coupled through resonant coupling because their energy is close to each other. From perturbation theory [1], all the higher-order terms will become important, leading to significant modification of the wave functions for localization. In the un-overlapped regime of the free chain, wave function localization is greatly suppressed by a different mechanism.
To this end, we first consider the localization in the following minimal model
\[H=H_{0}+\sum_{m}(t_{\rm v}a_{m,0}^{\dagger}a_{m,1}+{\rm h.c.})+\sum_{m}V_{m,1}a_{m,1}^{\dagger}a_{m,1}. \tag{6}\]
As compared with Eq. 3, here we set \(t_{1}=0\) and \(t_{0}=t=1\) (see Fig. 4 (a)). When \(t_{\rm v}=0\), the eigenstates of component \(\sigma=1\) are fully localized at one site with eigenvalues distributed in the interval \([-V/2,V/2]\). On the other hand, the eigenstates of component \(\sigma=0\) are \(\psi(m)\propto e^{ikm}\) with energy spectra in \([\Delta-2t,\Delta+2t]\). Thus, the energy spectra of the two components are well separated when \(|\Delta|>|V/2|+2t\). We focus on the physics of the suppressed localization in this regime (with \(t_{\rm v}\neq 0\)).
The central problem is to verify the major result of Eq. 2. We employ the sparse matrix method to examine the physics of Eq. 6, and the results for \(E=\Delta\) are presented in Fig. 4. It is found that \(\langle{\rm IPR}\rangle_{E}\propto t_{\rm v}^{4}\) for intermediate \(t_{\rm v}\), where \(\xi<L\). In the small inter-chain coupling limit, the IPR saturates to \(L^{-1}\) when \(\xi\gtrsim L\) due to the finite-size effect. Thus the relation \({\rm IPR}\propto t_{\rm v}^{4}\) always holds in the thermodynamic limit. In Fig. 4 (c), we also
Figure 3: (a) (b) The Lyapunov exponent \(\gamma(E)\) against the energy \(E\) for \(t_{\rm v}=0\) and \(t_{\rm v}=0.1\), with system size \(L=10^{9}\). The shift is (a) \(\Delta=-10\); (b) \(\Delta=-6\). The vertical dashed lines in (a) denote \(E=-8\) and \(E=-6.4\), and in (b) denote \(E=-6.4\) and \(E=-4\). The red dashed lines are estimated by Eq. 2. (c) (d) The fractal dimension \(\tau_{2}(E,L)\) versus energy \(E\) for \(t_{\rm v}=0.1\). The system size is \(L=2^{11}\) (blue); \(L=2^{14}\) (green); \(L=2^{16}\) (yellow); \(L=2^{18}\) (red). The insets present the results \(\tau_{2}(E,L)\) against \(1/\ln(L)\) for different energies \(E\).
examine the dependence of the IPR on the energy shift \(\Delta\), finding that \(\text{IPR}\propto\Delta^{-4}\) in the large \(\Delta\) limit. However, when \(t_{\text{v}}/\Delta\ll 1\) we have \(\xi\gtrsim L\), and saturation of the IPR is found again for the same reason as in Fig. 4 (b). Combining these two power-law dependences yields \(\xi^{-1}\propto(t_{\text{v}}/\Delta)^{4}\) in Eq. 2, using IPR\(\propto 1/\xi\).
To describe the localization length more accurately as a function of the eigenvalue \(E\), we then derive an effective Hamiltonian as
\[H_{\text{eff}}=\sum_{m}\left(ta_{m+1,0}^{\dagger}a_{m,0}+\text{h.c.}\right)+\sum_{m}(\Delta+W_{m})a_{m,0}^{\dagger}a_{m,0}, \tag{7}\]
where \(W_{m}=t_{\text{v}}^{2}/(\Delta-V_{m,1})\) for states with \(E\sim\Delta\). A direct calculation shows that
\[\langle W_{m}\rangle =\frac{1}{V}\int_{-V/2}^{V/2}W_{m}dV_{m,1}=\frac{t_{\text{v}}^{2} }{V}\ln\biggl{(}\frac{2|\Delta|/V+1}{2|\Delta|/V-1}\biggr{)},\] \[\langle W_{m}W_{n}\rangle =\frac{4t_{\text{v}}^{4}}{4\Delta^{2}-V^{2}}\delta_{m,n}, \tag{8}\]
with \(\langle\cdot\rangle\) represents its disorder averaged value. The variance of \(W_{m}\) can be written as
\[v^{2}=\langle W_{m}^{2}\rangle-\langle W_{m}\rangle^{2}=t_{\text{v}}^{4}(\frac {4}{4\Delta^{2}-V^{2}}-f(\Delta,V)), \tag{9}\]
with \(f(\Delta,V)=(\frac{1}{V}\ln\Bigl{(}\frac{2|\Delta|/V+1}{2|\Delta|/V-1}\Bigr{)} )^{2}\). It yields
\[v^{2}=\frac{t_{\text{v}}^{4}V^{2}}{12\Delta^{4}}+\frac{11t_{\text{v}}^{4}V^{4}}{360\Delta^{6}}+\mathcal{O}(\Delta^{-8}), \tag{10}\]
when \(\Delta\gg V\). The leading term yields the localization length in Eq. 2 with a suppressed prefactor of \(t_{\text{v}}^{4}/\Delta^{4}\), which is numerically confirmed in Fig. 4 in a large system. This completes the proof of Eq. 2.
The previous conclusion is based on the minimal model with \(t_{1}=0\). We then move to examine the effect of the hopping \(t_{1}\) in the disordered chain, which is expected to extend the wave functions and hence further suppress the localization in the free chain. Thus it is crucial to ask to what extent \(t_{1}\) can influence the localization length \(\xi\). To this end, we fix \(\Delta\) and \(t_{\text{v}}\) and change the value of \(t_{1}\); the results for the IPR against \(t_{1}\) are presented in Fig. 5. We find that the IPR decreases slightly as \(t_{1}\) increases, indicating that \(t_{1}\) is not the essential term for the localization of the free chain. Therefore, we expect that Eq. 2 serves as a good approximation for the localization length even with finite \(t_{1}\), which accounts for the excellent agreement between the numerical and theoretical results in Fig. 3 (a) and Fig. 6. Finally, we present the localization length as a function of energy \(E\) in a single disordered chain and in the coupled disordered model in Fig. 6, which further confirms the empirical formulas of Eq. 1 and Eq. 2. In Fig. 3 (b), we have used
\[\xi^{-1}(E)=(\frac{4}{4\Delta^{2}-V^{2}}-f(\Delta,V))\frac{t_{\text{v}}^{4}}{8 t^{2}-2(E-\Delta)^{2}}, \tag{11}\]
which yields Eq. 2 in the large \(\Delta\) limit. We point out that for finite \(\Delta\), the higher-order term \(\mathcal{O}(\Delta^{-4})\) in \(f(\Delta,V)\) is important. Therefore, while the localization length in the overlapped and un-overlapped spectra exhibits distinct behaviors with all wave functions being localized, it is not a phase transition process.
_Conclusion and discussion_: In this work, we present a coupled disordered model by coupling a disordered chain with a free chain, where the localization lengths in the overlapped and un-overlapped regimes differ by several orders of magnitude. In the overlapped regime, the states from the free chain are localized by resonant coupling be
Figure 5: The logarithm of IPR versus the intra-chain hopping \(t_{1}\), with \(\Delta=-10\) and \(t_{\text{v}}=1\) for fixed energy \(E=-10\). The black dashed lines are linear fitting with \(\log_{10}((\text{IPR})_{E})\sim\nu\log_{10}(t_{1})\), with \(\nu=-0.0026\) for \(L=10^{4}\), \(\nu=-0.0066\) for \(L=10^{5}\), and \(\nu=-0.0075\) for \(L=10^{6}\).
Figure 4: (a) The schematic of the model in Eq. 6. The free chain \(H_{0}\) (red) is coupled to a disordered chain \(H_{1}\) (blue) with \(t_{1}=0\). (b) \(\log_{10}(\text{IPR})\) against \(\log_{10}(t_{\text{v}})\) for different energy shifts \(\Delta\) and lattice sizes with \(E=\Delta\). The colors represent \(\Delta=-20\) (red); \(\Delta=-10\) (green); \(\Delta=-7\) (blue). The system size is \(L=10^{4}\) (square), \(10^{5}\) (circle), and \(10^{6}\) (cross). The two black dashed lines are linear fitting with \(\log_{10}((\text{IPR})_{E})\sim 4\log_{10}(t_{\text{v}})\). (c) \(\log_{10}(\text{IPR})\) against \(\log_{10}(|\Delta|)\) for different \(t_{\text{v}}\). The meaning of symbols is the same as (b). The colors represent \(t_{\text{v}}=1.5\) (red); \(t_{\text{v}}=1.0\) (green); and \(t_{\text{v}}=0.5\) (blue), and the black dashed lines denote \(\log_{10}((\text{IPR})_{E})\sim-4\log_{10}(|\Delta|)\).
tween the localized and extended states. However, in the un-overlapped regime of the free chain, while the states are still localized by the general theorem, they exhibit suppressed localization with a prefactor of \(t_{\rm v}^{4}/\Delta^{4}\). We find that the inter-chain coupling, disorder strength, and energy shift play a leading role in localization, yet the effect of the intra-chain hopping \(t_{1}\) in the disordered chain is not significant. The results presented in this work are for 1d disordered models, and extending this research to higher-dimensional models [46, 47, 48] and many-body models [49, 50, 51] is also intriguing; we expect the overlapped and un-overlapped regimes to exhibit totally different behaviors there as well [52]. Furthermore, systems with large localization lengths in higher-dimensional models may call for much more advanced numerical methods.
These results can be readily confirmed in state-of-the-art experiments with ultracold atoms [53, 54, 55, 56, 5], in which the two chains can be realized by hyperfine states. The inter-chain coupling can be realized by Raman coupling, and the potential shift between the chains is a natural consequence of detuning and the Zeeman field. In these systems, the wave functions in each chain can be independently realized in the limit \(t_{\rm v}\sim 0\), and their localization can be measured individually using the time-of-flight imaging technique. In recent years, AL in disordered systems has been an important direction in ultracold atoms, and huge progress has already been achieved [57, 58, 59, 60, 61, 62, 5]; we expect that experimental confirmation of these results can provide clear evidence on the dilemma of (I) and (II).
Finally, it is necessary to emphasize that the disordered potential (with short-range correlation) has totally different features from the incommensurate potential. By coupling a free chain to an incommensurate chain, without the guarantee of the general theorem, one can realize a critical phase in the overlapped spectra [45], in which case the overlapped and un-overlapped spectra also exhibit distinct localization behaviors. A similar critical phase, obtained by coupling extended and localized states in a Floquet model with incommensurate potential, has also been presented by Roy _et al_ in Ref. [63]. Here, we present a much-simplified model, which can be solved analytically in limiting cases, in the hope that similarly intriguing results will be found in more complicated coupled many-body models and coupled random matrices [64].
_Acknowledgments_: This work is supported by the National Natural Science Foundation of China (NSFC) with No. 11774328, and Innovation Program for Quantum Science and Technology (No. 2021ZD0301200 and No. 2021ZD0301500).
|
2305.12048
|
Pharmacokinetic parameters quantification in DCE-MRI for prostate cancer
|
Tumor vascularity detection and quantification are of high relevance in the
assessment of cancer lesions not only for disease diagnostics but for therapy
considerations and monitoring. The present work addressed the quantification of
pharmacokinetic parameters derived from the two-compartment Brix model by
analyzing and processing Dynamic Contrast-Enhanced Magnetic Resonance Images
(DCE-MRI) of prostate cancer lesions. The 3D image sets were acquired at
regular time intervals, covering all the phases implied in contrast injection
(wash-in and wash-out phases), and the standardized image intensity is
determined for each voxel, conforming to a 4D data set. Previous voxel
classification was carried out by the three-time-point method proposed by
Degani et al. (1997) and Furman-Haran et al. (1998) to identify regions of
interest. Relevant pharmacokinetic parameters, such as kel, the vascular
elimination rate, and kep, the extravascular transfer rate, are extracted by a
novel interpolation method applicable to compartment models. Parameter
distribution maps were obtained for either pathological or unaffected glandular
regions indicating that a three-compartment model, including fast and slow
exchange compartments, provides a more suitable description of the contrast
kinetics. Results can be applied to prostate cancer diagnostic evaluation and
therapy follow-up.
|
Jhonalbert Aponte, Álvaro Ruiz, Jacksson Sánchez, Miguel Martín-Landrove
|
2023-05-20T00:38:56Z
|
http://arxiv.org/abs/2305.12048v1
|
# Pharmacokinetic parameters quantification in DCE-MRI for prostate cancer
###### Abstract
Tumor vascularity detection and quantification are of high relevance in the assessment of cancer lesions not only for disease diagnostics but for therapy considerations and monitoring. The present work addressed the quantification of pharmacokinetic parameters derived from the two-compartment Brix model by analyzing and processing Dynamic Contrast-Enhanced Magnetic Resonance Images (DCE-MRI) of prostate cancer lesions. The 3D image sets were acquired at regular time intervals, covering all the phases implied in contrast injection (wash-in and wash-out phases), and the standardized image intensity is determined for each voxel, conforming to a 4D data set. Previous voxel classification was carried out by the three-time-point method proposed by Degani et al. (1997) and Furman-Haran et al. (1998) to identify regions of interest. Relevant pharmacokinetic parameters, such as \(k_{el}\), the vascular elimination rate, and \(k_{ep}\), the extravascular transfer rate, are extracted by a novel interpolation method applicable to compartment models. Parameter distribution maps were obtained for either pathological or unaffected glandular regions indicating that a three-compartment model, including fast and slow exchange compartments, provides a more suitable description of the contrast kinetics. Results can be applied to prostate cancer diagnostic evaluation and therapy follow-up.
DCE-MRI, Levenberg-Marquardt, prostate cancer, tumor vascularity, two-compartment pharmacokinetic model
*Miguel Martin-Landrove, [email protected]
## 1 Introduction
Dynamic Contrast-Enhanced MRI, or DCE-MRI, has been applied extensively to diagnose and quantify several pathologies associated with cancer.[1, 2, 3, 4, 5, 6] It essentially consists of the controlled intravenous delivery of a contrast agent, typically a Gadolinium compound, followed by the acquisition of a volumetric \(T_{1}\)-weighted MRI. The resulting image dataset can be analyzed either qualitatively[7, 8, 9, 10, 11, 12] or quantitatively.[4, 5, 6, 13, 14] Several compartmental models have been proposed for the pharmacokinetics of the contrast agent.[15, 16, 17, 18, 14] In the case of prostate DCE-MRI, pharmacokinetic two-compartment models are applicable.[5, 16, 15] The present work addresses the quantification of pharmacokinetic parameters, derived from the two-compartment Brix-Tofts model[15, 16] and
multi-compartmental models [19, 20, 21, 22], through the analysis and processing of DCE-MRI of prostate cancer lesions, using the Levenberg-Marquardt algorithm [23, 24] and modified de Prony method [25, 26].
The work is organized as follows: Section 2 discusses general aspects of the two-compartment Brix-Tofts model, three-compartment models, image acquisition and qualitative analysis of the image data set, and quantitative evaluation of the pharmacokinetic parameters; Section 3 discusses the results and presents some conclusions.
## 2 Materials and methods
### Two-compartment Brix-Tofts model
Essentially, the model establishes that the contrast intake occurs in two connected environments or compartments: one, called the central compartment, related to the capillary and vascular space, and the second, the peripheral compartment, generally related to the extravascular and extracellular space. The model can be depicted schematically as shown in Figure 1, after [15].
The parameters of the model \(k_{in}\), \(k^{trans}\), \(k_{ep}\), \(k_{el}\), \(v_{p}\) and \(v_{e}\) describe the pharmacokinetics of the system, i.e., the concentration exchange rates and the volumes of the compartments, respectively.
Figure 1: Schematic representation of the two-compartment Brix-Tofts model
The differential equations that describe the pharmacokinetics are,
\[\frac{dM_{1}}{dt}= k_{in}-(k^{trans}+k_{el})M_{1}+k_{ep}M_{2}\] \[\frac{dM_{2}}{dt}= k^{trans}M_{1}-k_{ep}M_{2} \tag{1}\]
where \(M_{1}\) and \(M_{2}\) are the total contrast agent mass in compartments 1 and 2, respectively. Assuming that the transfer rates between the two compartments are equal,
\[k^{trans}v_{p}=k_{ep}v_{e} \tag{2}\]
and \(v_{e}\ll v_{p}\), equation 1 can be written,
\[\frac{dC_{1}}{dt}=\frac{k_{in}}{v_{p}}-k_{el}C_{1},\frac{dC_{2}}{dt}=\frac{v_{ p}}{v_{e}}k^{trans}C_{1}-k_{ep}C_{2} \tag{3}\]
where \(C_{1}\) and \(C_{2}\) are the contrast concentrations in the compartments. Because the contrast agent is delivered as a bolus, the set defined in equation 3 has a solution,
\[C_{1}(t)= \frac{k_{in}}{v_{p}k_{el}}(e^{k_{el}t^{\prime}}-1)e^{-k_{el}t}\] \[C_{2}(t)= \frac{k_{in}k^{trans}}{v_{e}}\times \tag{4}\] \[\times\left[v(e^{k_{el}t^{\prime}}-1)e^{-k_{el}t}-u(e^{k_{ep}t^{ \prime}}-1)e^{-k_{ep}t}\right]\]
where coefficients \(u\) and \(v\) are given by,
\[u= [k_{ep}(k_{ep}-k_{el})]^{-1}\] \[v= [k_{el}(k_{ep}-k_{el})]^{-1} \tag{5}\]
In the solution set, equation 4, the time parameter \(t^{\prime}\) defines the different phases for the progression of the contrast agent. During the wash-in phase, \(0\leq t\leq\tau\), where \(\tau\) is the contrast infusion time, \(t^{\prime}\) is taken equal to \(t\). In the wash-out phase, \(t>\tau\), \(t^{\prime}\) is set equal to \(\tau\).
### Fast and Slow Exchange Compartments
The Brix-Tofts model represents a simple approach to a real system. Several authors[19, 20, 21, 22] have proposed multi-compartmental models to describe the kinetics of the contrast in the tissue. Among these models, the simplest one corresponds to the assumption that the system can be described by a three-compartment model[20, 22] with the following compartmental equations,
\[\frac{dC_{p}}{dt}= \frac{k_{in}}{v_{p}}-(k_{s}^{trans}+k_{f}^{trans}+k_{el})C_{p}+\] \[+\frac{k_{eps}v_{es}}{v_{p}}C_{s}+\frac{k_{epf}v_{ef}}{v_{p}}C_{f}\] \[\frac{dC_{s}}{dt}= \frac{k_{s}^{trans}v_{p}}{v_{es}}C_{p}-k_{eps}C_{s}\] \[\frac{dC_{f}}{dt}= \frac{k_{f}^{trans}v_{p}}{v_{ef}}C_{p}-k_{epf}C_{f} \tag{6}\]
Under the same assumptions made for the Brix-Tofts model, equations (6) have the solution,
\[C_{p}(t) =\frac{k_{in}}{v_{p}k_{el}}(e^{k_{el}t^{\prime}}-1)e^{-k_{el}t}\] \[C_{s}(t) =\frac{k_{in}k_{s}^{trans}}{v_{es}}\times\] \[\times\left[v_{s}(e^{k_{el}t^{\prime}}-1)e^{-k_{el}t}-u_{s}(e^{k_{eps}t^{\prime}}-1)e^{-k_{eps}t}\right]\] \[C_{f}(t) =\frac{k_{in}k_{f}^{trans}}{v_{ef}}\times\] \[\times\left[v_{f}(e^{k_{el}t^{\prime}}-1)e^{-k_{el}t}-u_{f}(e^{k_{epf}t^{\prime}}-1)e^{-k_{epf}t}\right] \tag{7}\]
with,
\[u_{s,f} = \left[k_{ep;s,f}(k_{ep;s,f}-k_{el})\right]^{-1}\] \[v_{s,f} = \left[k_{el}(k_{ep;s,f}-k_{el})\right]^{-1} \tag{8}\]
and similarly, in equations (7) during the wash-in phase, \(0\leq t\leq\tau\), where \(\tau\) is the contrast infusion time, \(t^{\prime}\) is taken equal to \(t\), and in the wash-out phase, \(t>\tau\), \(t^{\prime}\) is set equal to \(\tau\). The contrast concentration in the tissue is the sum of \(C_{s}\) and \(C_{f}\).
### Image acquisition and quantitative analysis
\(T_{1}\)-weighted MRI intensity is related to the extravascular and extracellular contrast concentration \(C_{2}\) through the following relation [5],
\[S_{t}\approx S_{0}(1+FC_{2}(t)) \tag{9}\]
which holds if certain conditions apply for the MRI sequence parameters, i.e., \(TR\,\alpha C_{2}\ll 1\) and \(TE\,\beta C_{2}\ll 1\), where \(\alpha\) and \(\beta\) determine the enhancement of the longitudinal and transversal relaxation rates, respectively, due to the contrast agent, and \(S_{0}\) is the image intensity without contrast. In general, DCE-MRI protocols are fine-tuned to fulfill the necessary conditions for equation 9 to apply, and in such a case, an expression for the concentration \(C_{2}\) can be obtained from the image data set as,
\[C_{2} =A\left[v(1-e^{-k_{el}t})-u(1-e^{-k_{ep}t})\right],t\leq\tau\] \[C_{2} =A\left[v(e^{k_{el}\tau}-1)e^{-k_{el}\tau}-u(e^{k_{ep}\tau}-1)e^{ -k_{ep}t}\right],t>\tau \tag{10}\]
Taking into account that \(u\) and \(v\) are given by equation 5, only four parameters are needed to fit the model to experimental data: \(k_{el}\), \(k_{ep}\), \(\tau\) and \(A\). Prostate DCE-MRI was obtained from the Collection Prostate-Diagnosis at The Cancer Imaging Archive (TCIA), National Cancer Institute [27].
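As a concrete illustration, equation 10 can be evaluated directly once these four parameters are specified. The following Python sketch (the analysis in this work was carried out in MATLAB) is purely illustrative; the function name and its arguments are not taken from the paper, and it assumes \(k_{ep}\neq k_{el}\).

```python
import numpy as np

def brix_tofts_c2(t, k_el, k_ep, tau, A):
    """Extravascular contrast concentration C2(t) of the Brix-Tofts model,
    following equations 5 and 10; requires k_ep != k_el."""
    t = np.asarray(t, dtype=float)
    u = 1.0 / (k_ep * (k_ep - k_el))
    v = 1.0 / (k_el * (k_ep - k_el))
    wash_in = A * (v * (1 - np.exp(-k_el * t)) - u * (1 - np.exp(-k_ep * t)))
    wash_out = A * (v * (np.exp(k_el * tau) - 1) * np.exp(-k_el * t)
                    - u * (np.exp(k_ep * tau) - 1) * np.exp(-k_ep * t))
    return np.where(t <= tau, wash_in, wash_out)
```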
The image data set was previously registered to the image for the initial condition, i.e., \(T_{1}\)-weighted MRI with no contrast, to diminish physiological and involuntary patient movements. 3D rigid image registration was performed using MATLAB Image Processing Toolbox, with Mutual Information [28] as a similarity metric for the Regular Step Gradient Descent [29, 30] optimization algorithm.
#### 2.3.1 Levenberg-Marquardt fit.
Quantification of pharmacokinetic parameters was also performed with MATLAB, using least-squares fitting by the Levenberg-Marquardt algorithm [23, 24]. As previously mentioned, only four parameters are needed to fit the model to experimental data; if only the ratio between \(u\) and \(v\) is used instead of their actual expressions, equation 5, and the proportionality coefficient \(A\) is absorbed into either \(u\) or \(v\), then the number of parameters is maintained, and equation 10 can be rewritten as,
\[C_{2} =A^{\prime}\left[\frac{k_{ep}}{k_{el}}(1-e^{-k_{el}t})-(1-e^{-k_{ep}t})\right],t\leq\tau\] \[C_{2} =A^{\prime}\left[\frac{k_{ep}}{k_{el}}(e^{k_{el}\tau}-1)e^{-k_{el}t}-(e^{k_{ep}\tau}-1)e^{-k_{ep}t}\right],t>\tau \tag{11}\]
On the other hand, if the two-exponential dependence of the model is preserved, i.e., it is a two-compartment model, some freedom can be gained for the fitting of experimental data and only five parameters are needed,
\[C_{2} =A_{1}(1-e^{-k_{el}t})-A_{2}(1-e^{-k_{ep}t}),t\leq\tau\] \[C_{2} =A_{1}(e^{k_{el}\tau}-1)e^{-k_{el}t}-A_{2}(e^{k_{ep}\tau}-1)e^{-k _{ep}t},t>\tau \tag{12}\]
An example of the application of the model with a different number of fitting parameters is shown in Figure 2. As expected, the model with five parameters yields the best fit, while the four-parameter model imposes too many restrictions among the parameters, preventing a good fit to the data. Nevertheless, the four-parameter model could be used to confirm the exact validity of the Brix-Tofts model within the data noise limits.
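A minimal sketch of such a voxel-wise fit of the five-parameter model, equation 12, is given below. It uses SciPy's Levenberg-Marquardt implementation rather than the MATLAB routines used in this work, and the function names and the initial guess are illustrative assumptions, not values from the paper.

```python
import numpy as np
from scipy.optimize import least_squares

def c2_five_param(t, A1, A2, k_el, k_ep, tau):
    """Five-parameter two-exponential model of equation 12."""
    wash_in = A1 * (1 - np.exp(-k_el * t)) - A2 * (1 - np.exp(-k_ep * t))
    wash_out = (A1 * (np.exp(k_el * tau) - 1) * np.exp(-k_el * t)
                - A2 * (np.exp(k_ep * tau) - 1) * np.exp(-k_ep * t))
    return np.where(t <= tau, wash_in, wash_out)

def fit_voxel(t, signal, p0=(1.0, 1.0, 0.01, 0.1, 60.0)):
    """Levenberg-Marquardt fit of equation 12 to one voxel's intensity
    curve; the initial guess p0 is arbitrary and would need tuning."""
    residuals = lambda p: c2_five_param(t, *p) - signal
    return least_squares(residuals, p0, method="lm").x  # (A1, A2, k_el, k_ep, tau)
```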
#### 2.3.2 Modified de Prony method fit. Two Compartments Model
A modified version of the de Prony method[25] was implemented for multi-echo \(T_{2}\)-weighted MRI[26] to quantify transverse relaxation rate distributions in solid brain tumors. Essentially, the method assumes that the signal or image intensity is described by a superposition of a finite number of decaying exponential functions and that it is sampled at regular, fixed time intervals, as is the case for DCE-MRI. If the Brix-Tofts model is assumed and following equation 12, voxel intensity is given by,
\[p_{i}=(A_{1}-A_{2})-A_{1}X_{1}^{i}+A_{2}X_{2}^{i},1\leq i\leq n,i \delta t\leq\tau\] \[p_{n+j}=B_{1}X_{1}^{n+j}-B_{2}X_{2}^{n+j},1\leq j\leq m,(n+j) \delta t>\tau \tag{13}\]
Figure 2: Different data fitting results for the same pharmacokinetic model, fitted curve is indicated in red. (a) Four parameters, (b) Five parameters. In both cases, the Levenberg Marquardt algorithm was used
where the following definitions apply,
\[X_{1} =e^{-k_{el}\delta t}\] \[X_{2} =e^{-k_{ep}\delta t}\] \[A_{1} =F\frac{k_{in}k^{trans}}{v_{e}}\left[k_{el}(k_{ep}-k_{el})\right]^{-1}\] \[A_{2} =F\frac{k_{in}k^{trans}}{v_{e}}\left[k_{ep}(k_{ep}-k_{el})\right]^{-1}\] \[B_{1} =(e^{k_{el}\tau}-1)A_{1}\] \[B_{2} =(e^{k_{ep}\tau}-1)A_{2} \tag{14}\]
As can be seen from the definitions in equation 14, the parameters are not independent if the Brix-Tofts model strictly applies, and only a set of four independent parameters remains, as discussed in the previous section. In particular, the parameters satisfy the relationship,
\[\frac{A_{1}}{A_{2}}=\frac{k_{ep}}{k_{el}} \tag{15}\]
Because the shortest sampling interval in DCE-MRI depends on the volume acquisition time, the number of points and the precise definition of \(\tau\), both of which are required for the application of this method, are limited. Nevertheless, as a first approach, each section of the voxel intensity evolution, i.e., wash-in or wash-out, can be analyzed separately and the resulting exponents compared. Let us consider the case of the wash-in section which, according to equation 13, contains a time-independent term that can be eliminated by taking differences between consecutive points,
\(q_{i}=p_{i}-p_{i+1}\), for \(1\leq i\leq n-1\). The original system, equation 13, is transformed to[26]
\[\left[\begin{array}{c}q_{3}\\ q_{4}\end{array}\right]=\left[\begin{array}{cc}q_{2}&-q_{1}\\ q_{3}&-q_{2}\end{array}\right]\left[\begin{array}{c}X_{1}+X_{2}\\ X_{1}X_{2}\end{array}\right] \tag{16}\]
with solutions \(Z_{1}^{*}\equiv X_{1}+X_{2}\), \(Z_{2}^{*}\equiv X_{1}X_{2}\). The solutions for \(X_{1}\) and \(X_{2}\) are obtained by finding the roots of a second-degree polynomial,
\[X^{2}-Z_{1}^{*}X+Z_{2}^{*}=0 \tag{17}\]
with the condition that the roots must be real and \(0<X<1\). Exponents and coefficients are determined straightforwardly by substitution in equations 13 and 14. An analogous procedure could be performed for the wash-out section and the resulting exponents and coefficients compared to those determined for the wash-in section. In principle, the parameters extracted from each trend, i.e., wash-in and wash-out, should be equal if a strict Brix-Tofts model applies. This procedure is somewhat cumbersome and could be affected by the SNR of the image intensity and by patient movement, so a match of the parameters seems unlikely. The method is then limited to the analysis of either the wash-in or the wash-out data alone.[31]
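The wash-in analysis of equations 16 and 17 can be sketched as follows. This Python fragment is illustrative only; it assumes at least five regularly sampled wash-in points, and the function and variable names are hypothetical.

```python
import numpy as np

def prony_wash_in(p):
    """Recover X1 = exp(-k_el*dt) and X2 = exp(-k_ep*dt) from at least five
    regularly sampled wash-in points p_1..p_n, following Eqs. 16-17."""
    p = np.asarray(p, dtype=float)
    q = p[:-1] - p[1:]                         # removes the constant term
    M = np.array([[q[1], -q[0]],
                  [q[2], -q[1]]])
    rhs = np.array([q[2], q[3]])
    z1, z2 = np.linalg.solve(M, rhs)           # z1 = X1 + X2, z2 = X1 * X2
    roots = np.roots([1.0, -z1, z2])           # X^2 - z1*X + z2 = 0
    if np.any(np.abs(roots.imag) > 1e-10):
        raise ValueError("complex roots: two-exponential model does not apply")
    roots = np.sort(roots.real)
    if np.any((roots <= 0.0) | (roots >= 1.0)):
        raise ValueError("roots must satisfy 0 < X < 1")
    return roots[1], roots[0]                  # X1 > X2
```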
A possible remedy is to use the whole set of data points, assuming a common \(\tau\), which can be modeled as the following discretized set of equations,
\[p_{i}=A_{1}(1-X_{1}^{i})-A_{2}(1-X_{2}^{i}),1\leq i\leq n,i\delta t\leq\tau\] \[p_{n+j}=A_{1}(X_{1}^{j}-X_{1}^{n+j})-\] \[\qquad\qquad-A_{2}(X_{2}^{j}-X_{2}^{n+j}),1\leq j\leq m,(n+j)\delta t>\tau \tag{18}\]
which can be written in matrix form as
\[\left(\begin{array}{c}p_{1}\\ \vdots\\ p_{i}\\ \vdots\\ p_{n}\\ p_{n+1}\\ \vdots\\ p_{n+j}\\ \vdots\\ p_{n+m}\end{array}\right)=\left(\begin{array}{ccc}(1-X_{1}^{1})&-(1-X_{2}^{ 1})\\ \vdots&\vdots\\ (1-X_{1}^{i})&-(1-X_{2}^{i})\\ \vdots&\vdots\\ (1-X_{1}^{n})&-(1-X_{2}^{n})\\ (X_{1}^{1}-X_{1}^{n+1})&-(X_{2}^{1}-X_{2}^{n+1})\\ \vdots&\vdots\\ (X_{1}^{j}-X_{1}^{n+j})&-(X_{2}^{j}-X_{2}^{n+j})\\ \vdots&\vdots\\ (X_{1}^{m}-X_{1}^{n+m})&-(X_{2}^{m}-X_{2}^{n+m})\end{array}\right)\left( \begin{array}{c}A_{1}\\ A_{2}\end{array}\right) \tag{19}\]
with \(X_{1}>X_{2}\).
Equation 19 resembles a modified version of the Vandermonde matrix in terms of geometric progressions and can be solved by a nonnegative least-squares method for the coefficients \(A_{1}\) and \(A_{2}\), if a pair \(X_{1}\), \(X_{2}\) is given. It is necessary to search the space \((X_{1},X_{2})\) for an optimal solution of equation 19. To do so, these values are initially selected at random within the unit square, subject to the condition \(X_{1}>X_{2}\), as shown in Figure 3. The best solution for \(A_{1}\), \(A_{2}\), \(X_{1}\), and \(X_{2}\) is obtained by following the steepest-descent optimization method to minimize the residuals of equation 19. A possible path is shown in Figure 3.
One of the advantages of the Vandermonde matrix formulation is that it allows for an irregular sampling of the kinetics data as compared to the modified de Prony formulation which requires regular sampling. This fact allows for the implementation of DCE-MRI protocols suited to the particular requirements of pharmacokinetics.
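A minimal sketch of this procedure is given below. It builds the matrix of equation 19 and solves the nonnegative least-squares problem for \(A_{1}\) and \(A_{2}\); for brevity the search over \((X_{1},X_{2})\) is done by random sampling of the unit square rather than by the steepest-descent refinement shown in Figure 3. All names and default values are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import nnls

def design_matrix(X1, X2, n, m):
    """Modified Vandermonde matrix of equation 19 (n wash-in, m wash-out rows)."""
    i = np.arange(1, n + 1)
    j = np.arange(1, m + 1)
    top = np.column_stack([1 - X1 ** i, -(1 - X2 ** i)])
    bot = np.column_stack([X1 ** j - X1 ** (n + j), -(X2 ** j - X2 ** (n + j))])
    return np.vstack([top, bot])

def fit_exponents(p, n, m, trials=2000, seed=0):
    """Random search over 0 < X2 < X1 < 1; each candidate pair is scored by
    the residual of the nonnegative least-squares solution of equation 19.
    A steepest-descent refinement around the best pair, as in Figure 3,
    could be added on top of this."""
    rng = np.random.default_rng(seed)
    best_resid, best = np.inf, None
    for _ in range(trials):
        X2, X1 = np.sort(rng.uniform(0.0, 1.0, size=2))
        coeffs, resid = nnls(design_matrix(X1, X2, n, m), p)
        if resid < best_resid:
            best_resid, best = resid, (X1, X2, coeffs[0], coeffs[1])
    return best   # (X1, X2, A1, A2)
```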
#### 2.3.3 Modified de Prony method fit. Three Compartments Model
In the case of a three-compartment model, the total concentration in the tissue is
\[C\equiv C_{s}+C_{f} \tag{20}\]
Figure 3: Location of exponential parameters for the set given by equation 19. Dashed areas represent possible solutions. In red, the iterative path followed by a steepest-descent method to obtain an optimal solution for \(A_{1}\), \(A_{2}\) of equation 19.
so equations (12) have to be modified to
\[C= A_{1}(1-e^{-k_{el}t})-\] \[-A_{2}(1-e^{-k_{eps}t})-A_{3}(1-e^{-k_{epf}t}),t\leq\tau\] \[C= A_{1}(e^{k_{el}\tau}-1)e^{-k_{el}t}-\] \[-A_{2}(e^{k_{eps}\tau}-1)e^{-k_{eps}t}-\] \[-A_{3}(e^{k_{epf}\tau}-1)e^{-k_{epf}t},t>\tau \tag{21}\]
together with the straightforward modifications of equations (18) and (19), now with the condition \(X_{1}>X_{2}>X_{3}\), which in matrix form reads,
\[\left(\begin{array}{c}p_{1}\\ \vdots\\ p_{i}\\ \vdots\\ p_{n}\\ p_{n+1}\\ \vdots\\ p_{n+j}\\ \vdots\\ p_{n+m}\end{array}\right)=\left(\begin{array}{ccc}(1-X_{1}^{1})&-(1-X_{2}^{1})&-(1-X_{3}^{1})\\ \vdots&\vdots&\vdots\\ (1-X_{1}^{i})&-(1-X_{2}^{i})&-(1-X_{3}^{i})\\ \vdots&\vdots&\vdots\\ (1-X_{1}^{n})&-(1-X_{2}^{n})&-(1-X_{3}^{n})\\ (X_{1}^{1}-X_{1}^{n+1})&-(X_{2}^{1}-X_{2}^{n+1})&-(X_{3}^{1}-X_{3}^{n+1})\\ \vdots&\vdots&\vdots\\ (X_{1}^{j}-X_{1}^{n+j})&-(X_{2}^{j}-X_{2}^{n+j})&-(X_{3}^{j}-X_{3}^{n+j})\\ \vdots&\vdots&\vdots\\ (X_{1}^{m}-X_{1}^{n+m})&-(X_{2}^{m}-X_{2}^{n+m})&-(X_{3}^{m}-X_{3}^{n+m})\end{array}\right)\left(\begin{array}{c}A_{1}\\ A_{2}\\ A_{3}\end{array}\right) \tag{22}\]
### Tissue classification by semi-qualitative methods
Tissue classification has been accomplished in DCE-MRI data in different ways [7, 8, 9, 10, 11, 12], exploiting properties of the time evolution of the data, and has been used extensively in the clinic. Among these approaches, the so-called three-time-point method [7, 8, 10, 12, 32] is widely used. The method consists of selecting three temporal points at which the time evolution of the contrast agent uptake is represented appropriately. These points are commonly selected as the time of the initial image set, corresponding to an absence of contrast, an intermediate time point, associated with the infusion time \(\tau\), and a third point at the end of the contrast wash-out region. Image intensity differences are calculated for each voxel and are used for tissue classification according to the scheme shown in Figure 4.
Color intensity is determined by the image difference \(C_{1}\), also shown in Figure 4. The color scheme is chosen to correlate with histological measurements [7], assuming that 'red' means pathological tissue with Type III kinetics, 'blue' means unaffected tissue with Type I kinetics, and 'green' applies to tissues with an uncertain condition, with Type II kinetics. If this color code is applied to the Brix-Tofts pharmacokinetic model, it is possible to obtain a color map for the pharmacokinetic parameters. In the present work, semi-qualitative analysis is
Figure 4: Three-time-point method for tissue classification in DCE-MRI [7].
used to classify quantitative results.
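The following sketch illustrates one possible voxel classification along the lines of Figure 4, assigning Type I/II/III ('blue'/'green'/'red') labels from the differences \(C_{1}\) and \(C_{2}\). The plateau tolerance and the exact decision rule are illustrative assumptions, not the thresholds used in this work.

```python
import numpy as np

def three_time_point(S0, S_tau, S_late, plateau_tol=0.1):
    """Classify voxels from three time points: no contrast (S0), end of
    wash-in (S_tau), end of wash-out (S_late).  C1 measures the initial
    uptake (drives color intensity) and C2 the late change (drives hue).
    The plateau tolerance is an assumed value, not one from the paper."""
    S0, S_tau, S_late = (np.asarray(a, dtype=float) for a in (S0, S_tau, S_late))
    C1 = S_tau - S0
    C2 = S_late - S_tau
    rel = np.divide(C2, C1, out=np.zeros_like(C1), where=C1 != 0)
    label = np.full(C1.shape, "blue", dtype=object)   # Type I: persistent uptake
    label[np.abs(rel) <= plateau_tol] = "green"       # Type II: plateau
    label[rel < -plateau_tol] = "red"                 # Type III: wash-out
    return label, C1, C2
```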
## 3 Results and Discussion
### Semi-qualitative results
Results obtained with the application of the three-time-point method are partially shown in Figure 5. The prostate was manually segmented and only points within the ROI were evaluated. There was a gradual color transition in all the cases, i.e., red to green to blue, from the tumor lesion's interior to its periphery, as shown in some examples in the bottom rows of Figure 5. Spurious points were neglected based on their intensity, i.e., their \(C_{i}\) value, and on their local neighborhood.
Some semi-quantitative information can be extracted from the qualitative analysis. The semi-quantitative parameters \(C_{1}\) and \(C_{2}\), defined in Figure 4, can be used to derive semi-quantitative measures that help to characterize the pathology or its changes during a therapy follow-up.[33, 34, 35, 36] An example of this approach is shown on the right-hand side of Figure 5, where a distribution is obtained for the \(C_{2}/C_{1}\) ratio.
### Quantitative results
#### 3.2.1 Levenberg-Marquardt results.
The quantitative analysis of the image data set was performed in a previous work[37, 31] using only the five-parameter fit of the Brix-Tofts model to the measured data since, as can be seen in Figure 2, it was substantially better than the four-parameter implementation. Figure 6 shows an example of the results for different points within the prostate, demonstrating the capacity of the method to establish significant differences between the pharmacokinetic parameters \(k_{el}\) and \(k_{ep}\) on a voxel-by-voxel basis.
The procedure was extended to determine the distribution of these parameters over a region of interest like the one represented in Figure 5, as shown in Figure 7.
Figure 6: Results of the Levenberg-Marquardt five parameters fitting at two different points: top, a point located within the cancerous lesion, a red region in Figure 5 with fitted parameters \(k_{ep}=0.1321s^{-1}\) and \(k_{el}=0.0034s^{-1}\); bottom, located in a region outside the cancerous lesion, blue-green (Figure 5) with parameters, \(k_{ep}=0.0123s^{-1}\), and \(k_{el}=0.00009s^{-1}\)
Figure 7: Parameter distributions obtained from Levenberg-Marquardt fit
Figure 7 reflects an interesting result that possibly requires further analysis: the apparent difference in point clustering, particularly for those points associated with tumor lesions (red) and with undefined and possibly infiltrated tissue (green). This difference can be used to further classify the tumor lesions or their evolution during therapy follow-up. Even though the Levenberg-Marquardt fit is a very reliable method, it depends strongly on the actual parameter structure of the pharmacokinetic model, as previously discussed and shown in Figure 2, making it unsuitable for models that include more than two compartments. In the following, we will limit our discussion to a simpler approach based on the modified de Prony method.
#### 3.2.2 Modified de Prony method results. Two Compartments Model
The analysis of the quantitative results based on the Levenberg-Marquardt fit indicates the validity of the Brix-Tofts two-compartment model in the sense that it applies to a specific voxel. When the whole set of voxels is analyzed, as shown in Figure 7, it is evident that there are two distinct regions for the kinetic parameters, which can be understood as the existence of fast exchange compartments (\(k_{ep}\) high) and slow exchange compartments (\(k_{ep}\) low), allowing for the proposition of a three-compartment model as previously discussed. The application of the two-compartment model using the modified de Prony algorithm proposed in this work reveals the same behavior, as shown by the results in Figure 8 and Figure 9.
In particular, in Figure 8, the distribution for the \(k_{ep}\) parameter is bi-modal with a high exchange region mostly populated by voxels associated with the presence of tumor (red and green colors) while the slow exchange region is predominantly composed of voxels that belong to benign tissue (blue color). This feature can be greatly emphasized if the logarithm of the quotient \(\frac{A_{1}}{A_{2}}\) is plotted against the logarithm of \(\frac{k_{ep}}{k_{el}}\), as shown in Figure 10.
Figure 8: Two compartment model distributions \((k_{ep},k_{el})\). Left, color map; Middle, \(k_{ep}\) distribution; Right, \(k_{el}\) distribution. Colors are given according to the 3-time-point classification
Figure 9: Distributions \((k_{ep},k_{el})\). (a) 2D histograms; (b) \((k_{ep},k_{el})\) plot. Colors are given according to the 3-time-point classification
If only one type of compartment is available in the tissue, corresponding to a pure Brix-Tofts model, the plot should behave as a straight line derived from equation 15 and shown as a red line in Figure 10. The actual trend of the data supports what is observed in the \(k_{ep}\) distribution shown in Figure 8, with evident bi-modal behavior. All these observations suggest that the description of the system should be addressed with a model of three or more compartments. The spatial distribution of the fast and slow exchange compartments is shown at the bottom of Figure 10. Specific values of the parameters are shown in Figure 11.
Figure 10: Top: Data trends for \(\frac{C_{el}}{C_{ep}}\) (or \(\frac{A_{1}}{A_{2}}\), according to equation 14) and \(\frac{k_{ep}}{k_{el}}\) showing how these ratios are correlated. Red straight lines represent the condition \(\frac{C_{el}}{C_{ep}}=\frac{k_{ep}}{k_{el}}\) which corresponds to a pure two-compartment model. Fast exchange compartments (red dots) are differentiated from slow exchange compartments (blue dots). Black lines identify trends in either the fast or slow compartments. Bottom rows: Examples of tissue classification according to the type of exchange compartments. **(F)** Fast exchange compartments (\(k_{ep}\) high). **(S)** Slow exchange compartments (\(k_{ep}\) low). Colors are given according to the 3-time-point classification. Notice that the majority of the fast exchange component is associated with tumor activity, while the slow exchange component is associated with benign tissue
The kinetic parameters for the different kinetic types, as classified by the three-time-point method, are summarized in Table 1 and Figure 11.
It is important to remark that fast exchange compartments are mostly associated with Type II and Type III kinetics while Type I kinetics corresponds to slow exchange compartments. This allows for an effective \(\langle k_{ep}\rangle\) defined as a weighted average between both compartment types. The result is summarized in Table 2.
\begin{table}
\begin{tabular}{|l|l|l|l|} \hline Classification & \(k_{epf}(min^{-1})\) & \(k_{eps}(min^{-1})\) & \(k_{el}(min^{-1})\) \\ \hline Type I & \(5.72\pm 0.32\) & \(0.63\pm 0.17\) & \(0.04\pm 0.05\) \\ Type II & \(5.15\pm 0.25\) & \(0.96\pm 0.31\) & \(0.05\pm 0.05\) \\ Type III & \(5.12\pm 0.24\) & \(0.78\pm 0.22\) & \(0.12\pm 0.08\) \\ \hline \end{tabular} Average values for the kinetic parameters in the regions classified by the three-time point method. Type I corresponds to benign tissue and Type III to definitely malignant tissue. Type II corresponds to an intermediate condition, possibly related to tumor invasion.
\end{table}
Table 1: Fast and Slow Exchange Kinetic Parameters
Figure 11: Data points in \((k_{epf},k_{eps})\) space. Big circles represent average values from Table 1. Color-shaded areas represent points within a standard deviation. Colors are given according to the 3-time-point classification
#### 3.2.3 Modified de Prony method results. Three Compartments Model
The previous section discussed the analysis with a pure Brix-Tofts model (a two-compartment system), with a clear indication of the presence of fast and slow exchange compartments. Under the assumption of a three-compartment model, as stated in section 2.3.3, a more detailed picture of the kinetic parameters is obtained, as shown in Figure 13, with some additional structure for the fast exchange parameter, \(k_{epf}\). Figure 14 also shows the distribution of kinetic parameters in the \((k_{el},k_{eps})\) and \((k_{el},k_{epf})\) spaces. Nevertheless, comparing the \(k_{epf}\) distribution in Figure 13b with the \(k_{ep}\) distribution in Figure 8, both are bi-modal and indeed very similar, so it can be concluded that the addition of an extra set of parameters to the model does not improve
\begin{table}
\begin{tabular}{|l|l|l|} \hline Classification & \(f_{fast}\) & \(\langle k_{ep}\rangle(min^{-1})\) \\ \hline Type I & \(0.20\pm 0.10\) & \(1.75\pm 0.60\) \\ Type II & \(0.70\pm 0.22\) & \(4.00\pm 0.99\) \\ Type III & \(0.77\pm 0.20\) & \(4.21\pm 0.97\) \\ \hline \end{tabular}
\end{table}
Table 2: Proportion of fast exchange compartments and average \(\langle k_{ep}\rangle\)
Figure 12: Average value \(k_{ep}\) for different kinetic types. Colors are given according to the 3-time-point classification
the overall picture, while notably increasing computational times.
## Conclusions
In the present work, a general methodology to classify and quantify tumor lesions in the prostate has been presented. The classification of the tissues was performed successfully by the implementation of a suitable three-time-point algorithm. This classification was used as a reference for the analysis of quantitative results. The quantitative analysis was successfully performed by a novel point-wise method based on the two-compartment Brix-Tofts model and validated against the Levenberg-Marquardt optimization method. The results confirmed the existence of fast exchange compartments associated with tumor activity and slow exchange compartments associated with benign tissue. In any case, the combined qualitative and quantitative analysis can be used to establish differences for a patient undergoing therapy or during pathology progression. In particular, some results suggest the future possibility of combining the qualitative and quantitative analyses into a single representation for prostate cancer evaluation and therapy follow-up.
## Acknowledgment
The authors would like to thank the National Institute for Bioengineering, INABIO, at Universidad Central de Venezuela, Venezuela, Universidad Nacional Pedro Henriquez Urena, Dominican Republic, and Fundacion Arturo Lopez Perez, Chile, for providing the environment for the realization of this work. Also, we would like to express our gratitude to the scientific community of the Physics and Mathematics in Biomedicine Consortium, which provided helpful discussions during the realization of this work.
|
2310.16098
|
Bounding the approach to oligarchy in a variant of the yard-sale model
|
We present analytical results for the Gini coefficient of economic inequality
under the dynamics of a modified Yard-Sale Model of kinetic asset exchange. A
variant of the Yard-Sale Model is introduced by modifying the underlying binary
transaction of the classical system. It is shown that the Gini coefficient is
monotone under the resulting dynamics but the approach to oligarchy, as
measured by the Gini index, can be bounded by a first-order differential
inequality used in conjunction with the differential Gronwall inequality. This
result is in the spirit of entropy -- entropy production inequalities for
diffusive PDE. The asymptotics of the modified system, with a redistributive
tax, are derived and shown to agree with the original, taxed Yard-Sale Model,
which implies the modified system is as suitable for matching real wealth
distributions. The Gini -- Gini production inequality is shown to hold for a
broader class of models.
|
David W. Cohen, Bruce M. Boghosian
|
2023-10-24T18:02:48Z
|
http://arxiv.org/abs/2310.16098v2
|
# Bounding the approach to oligarchy in a variant of the Yard-Sale model
###### Abstract
We present analytical results for the Gini coefficient of economic inequality under the dynamics of a modified Yard-Sale Model of kinetic asset exchange. A variant of the Yard-Sale Model is introduced by modifying the underlying binary transaction of the classical system. It is shown that the Gini coefficient is monotone under the resulting dynamics but the approach to oligarchy, as measured by the Gini index, can be bounded by a first-order differential inequality used in conjunction with the differential Gronwall inequality. The asymptotics of the modified system, with a redistributive tax, are derived and shown to agree with the original, taxed Yard-Sale Model, which implies the modified system is as suitable for matching real wealth distributions.
yard-sale model, econophysics, non-linear Fokker-Planck equation, mean-field theory, McKean-Vlasov equations
91B80, 82C22, 82C31
## 1 Introduction
The Yard-Sale Model is a well-studied model of kinetic asset exchange introduced by A. Chakraborti and named by B. Hayes [13, 19]. At its core, the Yard-Sale Model is a specification of how a wealth transaction is to be conducted between two agents randomly selected from a population. From this point, the system may be altered and then studied as a stochastic finite-agent system or, through the use of thermodynamic limits and other techniques from mathematical physics, as a deterministic, continuum equation of motion for the distribution of wealth.
The classical Yard-Sale Model transaction is stochastic: At each integer time \(t\), two agents from an \(N\) agent population are selected at random without replacement and their wealths are updated according to the rule
\[\begin{pmatrix}w^{i}_{t+1}\\ w^{j}_{t+1}\end{pmatrix}=\begin{pmatrix}w^{i}_{t}\\ w^{j}_{t}\end{pmatrix}+\sqrt{\gamma}\begin{pmatrix}w^{i}_{t}\wedge w^{j}_{t} \end{pmatrix}\begin{pmatrix}1\\ -1\end{pmatrix}\eta, \tag{1}\]
where \(\gamma\in(0,1)\) is a transaction intensity parameter, \(\eta\) is a random variable with outcomes \(-1\) and \(+1\) with equal probability, and \(\wedge\) is the min operator.
Under this rule the expected change of wealth for each agent is zero, yet it is well known that wealth condenses and an oligarchy forms as time progresses. The definition of the classical transaction rule is premised on the belief that the poorer agent's wealth should determine the magnitude of a binary transaction and that the changes in wealth are associated with a lack of perfect information in a given transaction.1 Adhering to this principle, the Yard-Sale Model is the simplest, non-trivial stochastic transaction that cannot send an agent to negative wealth when all agents are initialized with positive wealth.
Footnote 1: Informally, in a transaction with a very wealthy agent, a poorer agent is willing to stake a small fraction of their wealth but cannot possibly stake the same fraction of the other’s wealth as losing would horrifically bankrupt the poorer agent.
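For intuition, the finite-agent version of transaction (1) is easy to simulate. The following Python sketch is illustrative only; the parameter values are arbitrary and are not taken from the paper.

```python
import numpy as np

def classical_ysm(N=500, steps=100_000, gamma=0.25, seed=1):
    """Monte Carlo simulation of the classical Yard-Sale transaction (1):
    two distinct agents are drawn at each step and sqrt(gamma) times the
    poorer agent's wealth changes hands with a random sign."""
    rng = np.random.default_rng(seed)
    w = np.ones(N)                                   # egalitarian start
    for _ in range(steps):
        i, j = rng.choice(N, size=2, replace=False)
        dw = np.sqrt(gamma) * min(w[i], w[j]) * rng.choice([-1.0, 1.0])
        w[i] += dw
        w[j] -= dw
    return w                                         # wealth condenses over time
```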
In [3] a time step is introduced into the transaction (1) to give an infinitesimal characterization of the process by a limiting procedure involving the number of agents and the time step, combined with an analogue of the _Stosszahlansatz_, termed the random-agent approximation. This procedure produces a deterministic continuum equation. The equation of motion for the probability distribution of agents in wealth-space under the classical Yard-Sale Model dynamics of (1.1) is
\[\frac{\partial\rho(w,t)}{\partial t}=\frac{\partial^{2}}{\partial w^{2}}\left[ \left(\frac{\gamma}{2}\int_{0}^{\infty}\,dx\,\left(w\wedge x\right)^{2}\rho(x, t)\right)\rho(w,t)\right]. \tag{1.2}\]
A frequently studied summary statistic of distributions of wealth is the Gini coefficient of economic inequality, which was introduced by Corrado Gini in 1912 [18]. The Gini coefficient maps a wealth distribution to a value in \([0,1]\). Values near zero correspond to more egalitarian distributions, whereas values of the Gini approaching unity indicate inequality and oligarchy. If \(\mu\) is the mean wealth of a distribution of wealth with density \(\rho\) and \(X,Y\) are i.i.d. random variables with density \(\rho\) then
\[G[\rho]:=\frac{\mathbb{E}\left[|X-Y|\right]}{2\mu}.\]
Boghosian et al. in [6] established that the Gini coefficient is monotone under the dynamics of the continuum model (1.2) of the classical Yard-Sale Model. This result was shown both for the master equation and the resulting non-linear partial integro-differential Fokker-Planck equation first derived in [3]. Thus the Gini coefficient is a Lyapunov functional [17, 20] for the Yard-Sale Model. Chorro showed via a martingale convergence theorem argument in [14] that the finite-agent system likewise approaches oligarchy (in a probabilistic sense). Borgers and Greengard in [7] produced yet simpler proofs of wealth condensation for the classical finite-agent system. Borgers and Greengard [7] and Boghosian [5] have similar results for transactions that are biased in favor of the wealthier agent. Beyond the results for the Yard-Sale Model and its variants, Cardoso et al. in [9, 10] have shown that wealth condensation is more likely the rule than the exception for more general unbiased binary exchanges.
Despite the many methods of showing that wealth condensation occurs, we are not aware of any explicit bounds on the rate of increase of the Gini coefficient under models for which the Gini coefficient increases monotonically.
Cao in [8] investigated the rate of change of the Gini coefficient under the dynamics of the repeated averaging model (sometimes called the divorce model) where the binary transaction sends each agent to the mean wealth of the two transacting agents. Cao mentions that in "econophysics literature, analytical results on [the] Gini index are comparatively rare."
Here we produce an analytic result on the Gini coefficient for a modified, yet reasonable, Yard-Sale Model that steps beyond the standard propositions of monotonicity. In particular, the result is a bound on the rate of production of inequality, as measured by the Gini coefficient, under the modified dynamics. From a physical perspective, this is akin to finding a bound on the rate of entropy production while still having a thermodynamic second law.
The paper is organized as follows. The variant of the Yard-Sale Model on which the present paper focuses is motivated and defined in section 2. The Gini coefficient is briefly reviewed in section 3 with particular focus on its invariance under a normalization of the equations of motion. In section 4 it is proven both that the Gini coefficient increases monotonically in time under the induced dynamics and that its rate of increase may be bounded. Finally the asymptotics of the modified system
when a redistributive tax is incorporated are derived in section 5 and shown to match the classical Yard-Sale Model with taxation.
## 2 The modified Yard-Sale Model
Henceforth we sometimes use the abbreviation YSM for the yard-sale model.
Let there be \(N>1\) agents each with a dimensionally-meaningful wealth \(\theta_{s}^{i}\) indexed by \(i=1,\ldots,N\) and a time \(s\geq 0\). Time subscripts \(s\) are occasionally omitted for clarity when the time is unimportant for the expression. Let \(W_{\theta}:=\sum\theta^{i}\) be the total wealth and \(\mu_{\theta}:=W_{\theta}/N\) be the average wealth per agent. To each of \(N\) agents, associate a dimensionless quantity \(w^{i}:=\theta^{i}/\mu_{\theta}\) obtained by dividing wealth by mean wealth.2
Footnote 2: In what follows, we still call \(w\) wealth despite its dimensionless nature.
Let \(k=0,1,2,\ldots\) and \(\Delta t\in(0,1)\). The modified version of the Yard-Sale Model has a binary transaction between agents indexed by \(i\) and \(j\) given by
\[\begin{pmatrix}w^{i}_{(k+1)\Delta t}\\ w^{j}_{(k+1)\Delta t}\end{pmatrix}=\begin{pmatrix}w^{i}_{k\Delta t}\\ w^{j}_{k\Delta t}\end{pmatrix}+\sqrt{\gamma\Delta t}\phi\left(w^{i}_{k\Delta t },w^{j}_{k\Delta t}\right)\begin{pmatrix}1\\ -1\end{pmatrix}\eta, \tag{1}\]
with
\[\phi\left(w^{i},w^{j}\right)=\begin{cases}(w^{i}\wedge w^{j})&\quad\text{if }w^{i} \wedge w^{j}<1;\\ \sqrt{(w^{i}\wedge w^{j})}&\quad\text{if }w^{i}\wedge w^{j}\geq 1,\end{cases} \tag{2}\]
where \(\gamma\in(0,1)\), \(\wedge\) is the min operator, and \(\eta\) is a random variable that takes values \(-1\) and \(+1\) with equal probability. The standard YSM has \(w^{i}\wedge w^{j}\) outside of the square root function regardless of the size of \(w^{i}\wedge w^{j}\). The importance of the square root in (2) will become clearer below in light of the more general statement of Corollary 4 and the relation of the diffusion coefficient kernel to that of the Gini coefficient.
This modification maintains the important features that the poorer agent's wealth is the determining quantity in the exchange and no exchange can send an agent to negative wealth. Note that the comparison of dimensionless wealth in the piecewise micro-transaction is equivalent to checking if the minimum of the two dimension-full wealths is above or below mean wealth \(\mu_{\theta}\).
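The kernel \(\phi\) of (2) is straightforward to implement; the sketch below also checks numerically, on an arbitrary grid, the two pointwise inequalities that are used later in Corollary 4.4. The code and the grid are illustrative assumptions, not part of the paper.

```python
import numpy as np

def phi(wi, wj):
    """Transaction kernel of Eq. (2): the minimum wealth when it is below
    the (dimensionless) mean wealth of 1, its square root otherwise."""
    m = np.minimum(wi, wj)
    return np.where(m < 1.0, m, np.sqrt(m))

# Numerical check of the two conditions used in Corollary 4.4 on an
# arbitrary grid: phi <= min and phi**2 <= min (illustrative only).
grid = np.linspace(1e-3, 10.0, 400)
W, X = np.meshgrid(grid, grid)
m = np.minimum(W, X)
assert np.all(phi(W, X) <= m + 1e-12)
assert np.all(phi(W, X) ** 2 <= m + 1e-12)
```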
Under the same assumptions common in mathematical physics that are laid out in [2] and [3] - namely assuming independence of the laws of each agent as the number of agents goes to infinity - the limit as \(\Delta t\to 0\) and \(N\to\infty\) leads to a continuum equation for the law of a single prototypical agent in the population. The nonlinear evolution equation is found via the Kramers-Moyal expansion of the Chapman-Kolmogorov equation after making the random-agent approximation [3] and applying Pawula's Theorem. A more formal derivation from the stochastic process would use methods from the propagation of chaos literature on interacting particle systems of the Boltzmann type, see [11, 12, 21] for more details. The yard-sale model and its variants result in McKean-Vlasov stochastic differential equations [1] for which the drift and/or diffusion coefficients are functionals of the dependent variable.
Let \(\rho(w,t)\) be the probability density of agents in the dimensionless wealth variable. Since the dimensionless quantity \(w\) was obtained by dividing wealth \(\theta\) by mean wealth \(\mu_{\theta}\), the first moment of \(\rho(w,t)\) is also \(1\), that is \(\int_{\mathbb{R}^{+}}w\rho=1\). Therefore, by construction, the mean (dimensionless) wealth is \(1\).
This leads to a Fokker-Planck equation for the normalized agent distribution
\[\frac{\partial\rho(w,t)}{\partial t}=\frac{\partial^{2}}{\partial w^{2}}\left[ \underbrace{\frac{\gamma}{2}\left(\int_{0}^{\infty}\,dx\,\kappa(w,x)\rho(x,t) \right)}_{:=D[w,\rho(\cdot,t)]}\rho(w,t)\right], \tag{3}\]
where
\[\kappa(w,x):=\begin{cases}(w\wedge x)^{2}&\quad\text{if }w\wedge x<1;\\ (w\wedge x)&\quad\text{if }w\wedge x\geq 1.\end{cases} \tag{4}\]
Both the zeroth and first moments of \(\rho\) are conserved quantities; the former being the probability mass (implying conservation of number of agents) and the latter corresponding to conservation of total wealth, which we take canonically to be \(1\).
Throughout this paper we assume that each agent has positive wealth and that the support of all distributions is \(\mathbb{R}^{+}\).
## 3 The Gini coefficient under normalization
Here for the reader's convenience we define the Gini coefficient and show its invariance under a particular transformation of a distribution of wealth.
Let \(Q:[0,\infty)\to[0,\infty)\) have finite zeroth and first moment denoted \(N_{Q}\) and \(W_{Q}\), respectively. Define \(\mu_{Q}=W_{Q}/N_{Q}\). The transformation to a distribution with unit zeroth and first moment is given by
\[q(w):=\frac{\mu_{Q}}{N_{Q}}Q\left(\mu_{Q}w\right).\]
\(Q\) can be viewed as a wealth distribution of a population of \(N_{Q}\) agents with total population wealth \(W_{Q}\). The Gini coefficient of economic inequality can be expressed as
\[G[Q] =1-\frac{1}{N_{Q}W_{Q}}\int_{0}^{\infty}\,dw\,\int_{0}^{\infty}\, dx\,\left(w\wedge x\right)Q(w)Q(x) \tag{5}\] \[=1-\int_{0}^{\infty}\,dw\,\int_{0}^{\infty}\,dx\,\left(w\wedge x \right)q(w)q(x),\]
which we can equivalently call \(G[q]\) under the normalizing transformation \(Q\mapsto q\). Thus \(G\) is invariant under the transformation between \(q\) and \(Q\).
The Gini coefficient is \(0\) for a density concentrated on mean wealth (that is, for a wealth-egalitarian society) whereas it approaches its upper limit of \(1\) as the wealth is concentrated into an ever-vanishing proportion of the population. See [4, 15] for a discussion of the nonstandard properties of wealth distributions that extremize the Gini coefficient under the dynamics of (2).
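For empirical data or Monte Carlo samples, the Gini coefficient can be estimated with the standard sorted-sample formula, which is equivalent to \(\mathbb{E}|X-Y|/(2\mu)\). The sketch below is a generic estimator, not code from the paper.

```python
import numpy as np

def gini(w):
    """Sample Gini coefficient G = E|X - Y| / (2 * mean), via the standard
    sorted-sample formula; w holds nonnegative wealths."""
    w = np.sort(np.asarray(w, dtype=float))
    n = w.size
    i = np.arange(1, n + 1)
    return 2.0 * np.sum(i * w) / (n * np.sum(w)) - (n + 1) / n

print(gini(np.ones(100)))   # egalitarian population -> 0.0
```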
We will work in the space of normalized wealth distributions in which both the zeroth and first moments are unity.
**Lemma 3**: _Taking two derivatives of the Frechet derivative of the Gini coefficient under normalization yields twice the density, that is_
\[\frac{d^{2}}{dw^{2}}\,\frac{\delta G}{\delta\rho}=2\rho(w). \tag{6}\]
This is a routine calculation and follows from the symmetry of the kernel in the double integral that defines \(G[\rho]\). We state the proof, despite its simplicity, since the result is repeatedly used.
Since the integral kernel of \(G\) is symmetric in its arguments, the Frechet derivative of \(G\) is
\[\frac{\delta G}{\delta\rho}=-2\int_{\mathbb{R}^{+}}\,dx\,\left(w\wedge x\right) \rho(x).\]
The first \(w\)-derivative of the Frechet derivative is
\[\frac{d}{dw}\frac{\delta G}{\delta\rho}=-2\int_{w}^{\infty}\,dx\,\rho(x)\]
and the next \(w\)-derivative is
\[\frac{d^{2}}{dw^{2}}\frac{\delta G}{\delta\rho}=2\rho(w).\]
## 4 Bounding the rate of increase of the Gini coefficient
We now turn to the main results of the paper: That the Gini coefficient, despite being monotonically increasing under the modified Yard-Sale Model dynamics, can have a non-trivial bound on its rate of change in time. This bound may be carried over into a bound on the value of the Gini coefficient at a future time.
By \(G(t)\), we mean \(G[\rho(\cdot,t)]\) where \(\rho\) is a solution to (3).
[Increasing inequality] The Gini coefficient (11) is monotone increasing under the dynamics of (3).
Proof: \[\frac{dG}{dt} =\int_{0}^{\infty}\,dw\,\frac{\delta G}{\delta\rho}\frac{\partial\rho}{\partial t}\] \[=\int_{0}^{\infty}\,dw\,\frac{\delta G}{\delta\rho}\frac{\partial^{2}}{\partial w^{2}}\left[D\left[w,\rho(\cdot,t)\right]\rho(w,t)\right]\qquad\text{(by (3))}\] \[=\int_{0}^{\infty}\,dw\,\left(\frac{\partial^{2}}{\partial w^{2}}\frac{\delta G}{\delta\rho}\right)D\left[w,\rho(\cdot,t)\right]\rho(w,t)\qquad\text{(via two integrations by parts)}\] \[=2\int_{0}^{\infty}\,dw\,D\left[w,\rho(\cdot,t)\right]\rho(w,t)^{2}\qquad\text{(by Lemma 3)}\] \[\geq 0\qquad\text{(since }D\left[w,\rho\right]\geq 0\text{)}\]
The above result can be strengthened when there is mass that is not concentrated at the origin.
[Strictly increasing inequality] Let \(s>0\) and \(\epsilon\in(0,1).\) If there exists \(a>0\) such that
\[\int_{a}^{\infty}\,dw\,\rho(w,s)>\epsilon\]
then there exists \(\delta>0\) such that
\[\left.\frac{dG}{dt}\right|_{t=s}>\delta.\]
Proof: Let \(C=(a^{2}\wedge a)\), which is positive. Making use of the calculation in Theorem 4, we have
\[\left.\frac{dG}{dt}\right|_{t=s} =\int_{\mathbb{R}^{+}}\,dw\,D[w,\rho(s,\cdot)]\left(\rho(s,w) \right)^{2}\] \[\geq\int_{a}^{\infty}\,dw\,\int_{a}^{\infty}\,dx\,\kappa(w,x)\rho( s,x)\left(\rho(s,w)\right)^{2}\] \[\geq C\epsilon\int_{a}^{\infty}\,dw\,\left(\rho(s,w)\right)^{2}\] \[\geq C\epsilon\frac{\epsilon^{2}}{4(b-a)}\] (Jensen's inequality) \[>0.\]
The step prior to the application of Jensen's inequality uses that there must exist \(b\) such that \(0<a<b<\infty\) for which
\[\int_{a}^{b}\,dx\,\rho(s,x)>\frac{\epsilon}{2}.\]
Letting
\[\delta=\frac{C\epsilon^{3}}{4(b-a)}>0\]
completes the proof.
The corollary also holds for the classical Yard-Sale Model.
[Bounding inequality] If the initial datum \(\rho_{0}(w)=\rho(w,0)\) is in \(L^{\infty}(\mathbb{R}^{+})\) and the dynamics keep \(\rho(w,t)\) in \(L^{\infty}(\mathbb{R}^{+})\) up to time \(s>0\) then the rate of change of the Gini coefficient (1) is bounded by
\[\frac{dG}{dt}\leq\gamma||\rho(\cdot,t)||_{\infty}\left(1-G(t)\right) \tag{1}\]
for \(0<t<s.\)
Proof: Let \(t\in(0,s).\) Starting from the penultimate line in the main computation of the proof of Theorem 4, we have
\[\left.\frac{dG}{dt}\right. =2\int_{0}^{\infty}\,dw\,D[w,\rho]\left(\rho(w)\right)^{2}\] \[\leq 2||\rho||_{\infty}\int_{0}^{\infty}\,dw\,D[w,\rho]\rho(w) \text{(H\"{o}lder's inequality)}\] \[=2||\rho||_{\infty}\int_{0}^{\infty}\,dw\,\rho(w)\frac{\gamma}{2} \int_{0}^{\infty}\,dx\,\kappa(w,x)\rho(x) \text{(definition of }D[w,\rho])\] \[\leq\gamma||\rho||_{\infty}\int_{0}^{\infty}\,dw\,\rho(w)\int_{0 }^{\infty}\,dx\,(w\wedge x)\rho(x) \text{($\kappa(w,x)\leq(w\wedge x)$ )}\] \[=\gamma||\rho||_{\infty}(1-G) \text{(by (\ref{eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq: eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq: eq
**Corollary 4.4**: _For a system with transaction kernel \(\widetilde{\phi}:\mathbb{R}^{+}\times\mathbb{R}^{+}\to\mathbb{R}^{+}\) such that_
\[\widetilde{\phi}\left(w^{i},w^{j}\right)\leq\left(w^{i}\wedge w^{j}\right)\]
_and_
\[\widetilde{\phi}\left(w^{i},w^{j}\right)^{2}\leq\left(w^{i}\wedge w^{j}\right)\]
_on \(\mathbb{R}^{+}\times\mathbb{R}^{+}\), we have that_
\[0\leq\frac{dG}{dt}\leq\gamma||\rho||_{\infty}(1-G).\]
The two conditions on the kernel correspond to not sending agents to negative wealth in the finite model for \(\Delta t\in(0,1]\) and the fourth step in the proof of Theorem 4.3, respectively. However, the class of kernels satisfying these conditions need not have the stake of a transaction be independent of the wealthier agent.
The lower bound implies that the Gini coefficient increases under the dynamics of the modified YSM, whereas the upper bound on \(\dot{G}\) bounds \(G(t)\).
**Remark 4.5**: If the upper bound of Theorem 4.3 were in fact attained as \(\dot{G}=\gamma||\rho||_{\infty}(1-G)\), it could be viewed as asserting that at a lower inequality the accessible rearrangements of wealth lead quickly to increasing inequality. This speculative remark seems to agree with numerical simulations of the dynamics in which the random initial condition quickly changes in conjunction with a rapid increase of the Gini coefficient followed by a slowing of the change in the distribution and the Gini coefficient.
To produce the bound on \(G(t)\) we need a slightly modified version of the differential Gronwall's inequality as presented in [16].
**Lemma 4.6**: (A version of Gronwall's differential inequality)_Let \(\eta,\phi,\) and \(\psi\) be functions of \(t\in[0,T]\). Further let \(\eta\) be absolutely continuous and \(\phi\) and \(\psi\) be integrable on \([0,T]\). If for almost every \(t\)_
\[\dot{\eta}\leq\phi\eta+\psi,\]
_then_
\[\eta(t)\leq\exp\left(\int_{0}^{t}\,ds\,\phi(s)\right)\left[\eta(0)+\int_{0}^{t }\,ds\,\psi(s)\exp\left(-\int_{0}^{s}\,dr\,\phi(r)\right)\right].\]
By direct calculation first and then applying the assumed inequality, we have that
\[\frac{d}{ds}\left[\eta(s)\exp\left(-\int_{0}^{s}\,dr\,\phi(r) \right)\right] =\exp\left(-\int_{0}^{s}\,dr\,\phi(r)\right)\left[\dot{\eta}(s)- \phi(s)\eta(s)\right]\] \[\leq\exp\left(-\int_{0}^{s}\,dr\,\phi(r)\right)\psi(s).\]
Integrating from \(0\) to \(t\) in \(s\) yields
\[\eta(t)\exp\left(-\int_{0}^{t}\,dr\,\phi(r)\right)-\eta(0)\leq\int_{0}^{t}\, ds\,\psi(s)\exp\left(-\int_{0}^{s}\,dr\,\phi(r)\right),\]
which upon re-arranging proves the lemma.
If \(\phi\) were assumed to be nonnegative then \(\exp\left(-\int_{0}^{s}\,dr\,\phi(r)\right)\) is at most unity for all \(s\), in which case the upper bound on that term by unity gives the standard inequality.
**Claim 4.7**: _Let_
\[M_{T}:=\sup_{t\in[0,T]}||\rho(\cdot,t)||_{\infty}.\]
_For \(t\in(0,T]\),_
\[G(t)\leq G(0)\exp\left(-\gamma M_{T}t\right)+\frac{\exp\left(\gamma M_{T}t \right)-1}{\exp\left(\gamma M_{T}t\right)}.\]
_\({}_{\Box}\)_
Proof: Apply Lemma 4.6 with \(\eta=G\), \(\phi=-\gamma M_{T}\), and \(\psi=\gamma M_{T}\). \({}_{\blacksquare}\)
Thus the magnitude of the exponential rate of convergence to oligarchy under the modified YSM is at most \(\gamma M_{T}\) in a finite time horizon \([0,T]\).
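For illustration, the Claim 4.7 bound can be evaluated numerically as follows; the inputs \(G(0)\), \(\gamma\) and \(M_{T}\) below are placeholders, not values derived in the paper.

```python
import numpy as np

def gini_upper_bound(t, G0, gamma, M_T):
    """Upper bound of Claim 4.7:
    G(t) <= G(0)*exp(-gamma*M_T*t) + 1 - exp(-gamma*M_T*t),
    which equals G(0)*exp(-g*M*t) + (exp(g*M*t) - 1)/exp(g*M*t)."""
    decay = np.exp(-gamma * M_T * np.asarray(t, dtype=float))
    return G0 * decay + (1.0 - decay)

# Illustrative values only (not from the paper):
print(gini_upper_bound(t=[0.0, 1.0, 5.0], G0=0.3, gamma=0.25, M_T=2.0))
```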
**Claim 4.8**: _Let \(M(t):=||\rho(\cdot,t)||_{\infty}\). For \(t>0\),_
\[G(t)\leq G(0)\exp\left(-\gamma\int_{0}^{t}\,ds\,M(s)\right)+\frac{\exp\left( \gamma\int_{0}^{t}\,ds\,M(s)\right)-1}{\exp\left(\gamma\int_{0}^{t}\,ds\,M(s) \right)}.\]
_\({}_{\Box}\)_
Proof: Apply Lemma 4.6 with \(\eta=G\), \(\phi=-\gamma M(s)\), and \(\psi=\gamma M(s)\). Note that the term
\[\int_{0}^{t}\,ds\,\psi(s)\exp\left(-\int_{0}^{s}\,dr\,\phi(r)\right)\]
simplifies to
\[\exp\left(\gamma\int_{0}^{t}\,ds\,M(s)\right)-1.\]
\({}_{\blacksquare}\)
## 5 Asymptotic analysis
Theorem 4 asserts that inequality monotonically increases under the dynamics of (3), which leads to total wealth condensation. The present state of affairs in the world is not one of total wealth condensation, so in order for this model to have explanatory or predictive power, it is necessary to introduce a regularization term and ask whether the resulting wealth distributions are close to those of the world.
In [5], real distributions of wealth were compared to those arising asymptotically from the YSM with differing combinations of: (1) taxation and redistribution, (2) a bias in the stochastic transaction in favor of the wealthier agent, and (3) negative wealth. It was found that the model's asymptotics match very well with real wealth distributions. Thus, rather than carrying out such comparisons from scratch, we instead show that the asymptotics of the modified YSM match those of the classical model.
To regularize the modified YSM, we introduce a deterministic taxation and redistribution term that reverts the population to the mean wealth. This scheme can be viewed in two equivalent ways:
* As an exogenous force that collects the same fraction of each agent's wealth, pools the total, and redistributes the total equally amongst the agents; or
* As a partial divorce model so that in addition to each stochastic transaction between agents there is also a deterministic exchange towards their pairwise mean wealth.
These two perspectives yield the same continuum equation of motion.
The binary transaction for the modified YSM with redistribution is
\[\begin{pmatrix}w^{i}_{(k+1)\Delta t}\\ w^{j}_{(k+1)\Delta t}\end{pmatrix}=\begin{pmatrix}w^{i}_{k\Delta t}\\ w^{j}_{k\Delta t}\end{pmatrix}+\left[\chi\Delta t\left(w^{j}_{k\Delta t}-w^{i}_ {k\Delta t}\right)+\sqrt{\gamma\Delta t}\phi\left(w^{i}_{k\Delta t},w^{j}_{k \Delta t}\right)\eta\right]\begin{pmatrix}1\\ -1\end{pmatrix}, \tag{5}\]
where \(\chi\in[0,\frac{1}{2})\) and the other definitions and parameters from (1) and (2) carry over.
The associated equation of motion for the agent density in wealth-space is
\[\frac{\partial\rho(w,t)}{\partial t}=-\frac{\partial}{\partial w}\left[\chi(1-w )\rho(w,t)\right]+\frac{\partial^{2}}{\partial w^{2}}\left[\frac{\gamma}{2} \left(\int_{0}^{\infty}\,dx\,\kappa(w,x)\rho(x,t)\right)\rho(w,t)\right], \tag{22}\]
where \(\kappa\) is as before.3
Footnote 3: Starting from (22), both \(\gamma\) and \(\chi\) lose the initial restrictions imposed by the binary transactions and we demand only their non-negativity.
In this section, we closely follow the procedure of asymptotic analysis laid out in [5]. In what follows, \(\rho_{\infty}(w)\) is the asymptotic state of (22) for which the time derivative is set to zero and the ODE is studied.
### Analysis for equilibrium at \(w\ll 1\)
For \(w\ll 1\), \(\kappa(w,x)=\left(w\wedge x\right)^{2}\) thus
\[D[w,\rho]=\frac{\gamma}{2}\left(\int_{0}^{w}\,dx\,x^{2}\rho(x,t)+w^{2}\int_{w }^{\infty}\,dx\,\rho(x,t)\right).\]
We approximate this by \(D[w,\rho]\approx\frac{\gamma}{2}w^{2}\). This assumption demands the same _a posteriori_ justification as noted in [5]. At equilibrium
\[\chi(1-w)\rho_{\infty}(w)=\frac{d}{dw}\left[\frac{\gamma}{2}w^{2}\rho_{\infty} (w)\right].\]
This is solved by
\[\rho_{\infty}(w)=\frac{c_{0}}{w^{2+2\chi/\gamma}}\exp\left(-\frac{2\chi}{ \gamma w}\right),\]
where \(c_{0}\) is a positive constant. This agrees with the results in [5] where this analysis was carried out for the classic Yard-Sale Model with redistribution. Thus the justification for the earlier assumption about the behavior of \(D[w,\rho]\) at \(w\ll 1\) is equally valid as in the original paper.
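As a numerical sanity check, the small-\(w\) asymptotic density can be evaluated and verified against the approximate equilibrium ODE by finite differences. The parameter values below are arbitrary illustrations, not values from the paper.

```python
import numpy as np

def rho_small_w(w, chi, gamma, c0=1.0):
    """Small-w asymptotic density rho(w) = c0 * w**(-2 - 2*chi/gamma) * exp(-2*chi/(gamma*w)).
    The constant c0 is left free; it would be fixed by normalization."""
    w = np.asarray(w, dtype=float)
    return c0 * w ** (-2.0 - 2.0 * chi / gamma) * np.exp(-2.0 * chi / (gamma * w))

# Finite-difference check that chi*(1-w)*rho = d/dw[(gamma/2) w^2 rho] for small w.
chi, gamma, h = 0.1, 0.3, 1e-6          # arbitrary illustrative parameters
w = np.linspace(0.05, 0.2, 7)
flux = lambda x: 0.5 * gamma * x ** 2 * rho_small_w(x, chi, gamma)
lhs = chi * (1.0 - w) * rho_small_w(w, chi, gamma)
rhs = (flux(w + h) - flux(w - h)) / (2.0 * h)
assert np.allclose(lhs, rhs, rtol=1e-4)
```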
### Analysis for equilibrium at \(w\gg 1\)
Upon integrating once and rearranging, the large-\(w\) equilibrium condition is
\[\frac{d\log\rho_{\infty}(w)}{dw}=\frac{\chi(1-w)-\frac{dD[w,\rho_{\infty}]}{ dw}}{D[w,\rho_{\infty}]}. \tag{23}\]
We first investigate the behavior of \(D[w,\rho]\) for \(w\gg 1\). The piecewise conditions of \(\kappa(w,x)\) partially lose their dependence on \(w\) so that
\[D[w,\rho] = \frac{\gamma}{2}\int_{0}^{\infty}\,dx\,\rho(x)\begin{cases}x^{2}& \text{ if }x<1;\\ (w\wedge x)&\text{ if }x\geq 1,\end{cases}\] \[= \frac{\gamma}{2}\left(\int_{0}^{1}\,dx\,x^{2}\rho(x)+\int_{1}^{w }\,dx\,x\rho(x)+w\int_{w}^{\infty}\,dx\,\rho(x)\right).\]
Let
\[c_{1}:=\int_{0}^{1}\,dx\,x^{2}\rho(x)\]
and
\[c_{2}:=\int_{0}^{\infty}\,dx\,x\rho(x)-\int_{0}^{1}\,dx\,x\rho(x),\]
noting that \(c_{2}>0\). Thus
\[D[w,\rho]=\frac{\gamma}{2}\left(c_{1}+c_{2}-\int_{w}^{\infty}\,dx\,x\rho(x)+w \int_{w}^{\infty}\,dx\,\rho(x)\right). \tag{5.4}\]
Therefore we also have
\[\frac{dD[w,\rho]}{dw}=\frac{\gamma}{2}\int_{w}^{\infty}\,dx\,\rho(x). \tag{5.5}\]
We make the ansatz that
\[\rho_{\infty}(w)\approx c_{\infty}\exp(-aw^{2}-bw) \tag{5.6}\]
for \(w\gg 1\), \(a>0\), \(b\in\mathbb{R}\), and \(c_{\infty}>0\). In this case,
\[\frac{d\log\rho_{\infty}(w)}{dw}=-2aw-b. \tag{5.7}\]
We simplify the right hand side of (5.3) term-by-term under the ansatz (5.6) to check whether it is an affine function of \(w\), and if so identify the appropriate constants.
In doing so, we make repeated use of the asymptotic approximation
\[\operatorname{erfc}(z)\approx\frac{1}{z\sqrt{\pi}}\exp(-z^{2}).\]
Starting first with (5.5),
\[\int_{w}^{\infty}\,dx\,\rho_{\infty}(x) =\frac{c_{\infty}}{2}\sqrt{\frac{\pi}{a}}\exp\left(\frac{b^{2}}{4 a}\right)\operatorname{erfc}\left(\frac{2aw+b}{2\sqrt{a}}\right)\] \[\approx\frac{c_{\infty}}{2}\sqrt{\frac{\pi}{a}}\exp\left(\frac{b ^{2}}{4a}\right)\frac{2\sqrt{a}}{2aw+b}\frac{1}{\sqrt{\pi}}\exp\left(-\left( \frac{2aw+b}{2\sqrt{a}}\right)^{2}\right)\] \[=\frac{c_{\infty}}{2aw+b}\exp(-aw^{2}-bw). \tag{5.8}\]
Integrating by parts and using (5.8), the corresponding tail of the first moment satisfies
\[\int_{w}^{\infty}\,dx\,x\rho_{\infty}(x)\approx\frac{c_{\infty}w}{2aw+b}\exp(-aw^{2}-bw). \tag{5.9}\]
Combining (5.8) and (5.9), we have that
\[-\int_{w}^{\infty}\,dx\,x\rho_{\infty}(x)+w\int_{w}^{\infty}\,dx\,\rho_{ \infty}(x)\approx 0,\]
which implies \(D[w,\rho]\approx\frac{\gamma}{2}(c_{1}+c_{2})\).
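The near-cancellation of the two tail terms can also be checked numerically for illustrative values of \(a\), \(b\) and \(c_{\infty}\); a minimal sketch using SciPy is given below.

```python
import numpy as np
from scipy.integrate import quad

# Numerical check of the large-w tail cancellation used above: for
# rho_inf(w) = c_inf*exp(-a*w**2 - b*w) the two tail integrals nearly cancel,
# so D[w, rho] reduces to (gamma/2)*(c1 + c2) for w >> 1.
# The values of a, b, c_inf below are arbitrary illustrative choices.
a, b, c_inf = 0.5, 0.2, 1.0
rho = lambda x: c_inf * np.exp(-a * x**2 - b * x)

for w in (3.0, 5.0, 8.0):
    tail_mass = quad(rho, w, np.inf)[0]
    tail_first_moment = quad(lambda x: x * rho(x), w, np.inf)[0]
    residual = -tail_first_moment + w * tail_mass
    print(f"w={w}: residual={residual:.3e}, "
          f"relative size {residual / (w * tail_mass):.3e}")
```

The relative size of the residual decays as \(w\) grows, consistent with treating these terms as subdominant corrections.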
Taking these results together, we see that upon using (5.6) and approximating asymptotically, (5.3) simplifies to
\[\frac{d\log\rho_{\infty}(w)}{dw}=\frac{2\chi}{\gamma(c_{1}+c_{2})}(1-w)-\underbrace {\frac{c_{\infty}}{c_{1}+c_{2}}\frac{\exp\left(-aw^{2}-bw\right)}{2aw+b}}_{ \text{subdominant corrections}}.\]
In particular, we note from (5.7) that \(a=\frac{\chi}{\gamma(c_{1}+c_{2})}>0\).
## 6 Conclusions
We introduced a variant of the Yard-Sale Model for which the Gini coefficient of economic inequality monotonically increases under the resulting continuum dynamics, yet the time rate of change of the Gini coefficient admits an upper bound.
The method of bounding \(\dot{G}\) used techniques from the analysis of deterministic equations of motion. It is not clear if or how this bound could be applied to the stochastic, finite-agent models that precede the diffusion approximation. Nor is it clear how a similar bound, phrased in a probabilistic way, could be derived directly from the stochastic, finite-agent system.
Since the bound in Theorem 4 includes \(||\rho||_{\infty}\), the \(L^{\infty}\) norm of the density, it is natural to ask how the bound could be tested with empirical data since the transition from empirical data to a density function involves user choices (e.g., width in kernel density estimation) that affect the essential supremum of the resulting density.
By showing that the asymptotics of this modified model with redistribution match that of the original Yard-Sale Model with redistribution, we have put forward an argument that the macroscopic asymptotics are robust to changes of the microscopic transactions. Thus many microscopic transactions (and not _just_ a practitioner's favorite) may give rise to simple yet accurate descriptions of the evolution of wealth that depend on dramatically fewer parameters than most economic theories.
Finally, the upper bound on \(\dot{G}\) was shown to hold not only for the particular modification studied but also for a broader class of models described in Corollary 4. In doing so, we can consider classes of kinetic asset exchange models with "entropy production bounds" where the phrase should instead be considered as "inequality production bounds."
## Acknowledgments
We thank M. Johnson and D. Gentile of Tufts University for productive conversations.
|
2307.04053
|
How is Fatherhood Framed Online in Singapore?
|
The proliferation of discussion about fatherhood in Singapore attests to its
significance, indicating the need for an exploration of how fatherhood is
framed, aiding policy-making around fatherhood in Singapore. Sound and holistic
policy around fatherhood in Singapore may reduce stigma and apprehension around
being a parent, critical to improving the nation's flagging birth rate. We
analyzed 15,705 articles and 56,221 posts to study how fatherhood is framed in
Singapore across a range of online platforms (news outlets, parenting forums,
Twitter). We used NLP techniques to understand these differences. While
fatherhood was framed in a range of ways on the Singaporean online environment,
it did not seem that fathers were framed as central to the Singaporean family
unit. A strength of our work is how the different techniques we have applied
validate each other.
|
Tran Hien Van, Abhay Goyal, Muhammad Siddique, Lam Yin Cheung, Nimay Parekh, Jonathan Y Huang, Keri McCrickerd, Edson C Tandoc Jr., Gerard Chung, Navin Kumar
|
2023-07-08T22:03:00Z
|
http://arxiv.org/abs/2307.04053v1
|
# How is Fatherhood Framed Online in Singapore?
###### Abstract
The proliferation of discussion about fatherhood in Singapore attests to its significance, indicating the need for an exploration of how fatherhood is framed, aiding policy-making around fatherhood in Singapore. Sound and holistic policy around fatherhood in Singapore may reduce stigma and apprehension around being a parent, critical to improving the nation's flagging birth rate. We analyzed 15,705 articles and 56,221 posts to study how fatherhood is framed in Singapore across a range of online platforms (news outlets, parenting forums, Twitter). We used NLP techniques to understand these differences. While fatherhood was framed in a range of ways on the Singaporean online environment, it did not seem that fathers were framed as central to the Singaporean family unit. A strength of our work is how the different techniques we have applied validate each other.
Keywords: fatherhood, singapore, social media
## 1 Introduction
Fatherhood is now an unprecedentedly visible cultural phenomenon in Singapore. This increased attention is related to the inaugural nationwide fatherhood movement, Dads for Life, the continual development of parenting magazines and the recent emergence of fatherhood blogs within the Singapore internet sphere. In recent times, various fatherhood-related initiatives in Singapore have collaborated with government agencies, business corporations, and community organizations on initiatives to create awareness of the importance of the father's
role, develop commitment to good fathering, and encourage fathers to spend time with their children. In Singapore, the introduction of paternity leave and encouragement for fathers to play a bigger role in childcare and child-raising suggest that the government is sympathetic to the pursuit of gender equality. However, there is a gap between the perception of the importance of fathers and the actual involvement of fathers in their children's lives. In addition, the role of fathers continues to be recognized primarily as that of a breadwinner. Yet fathers want to do more and experience parenthood as a very fulfilling experience, to which they are highly committed [3]. The proliferation of discussion about fatherhood in Singapore attests to its significance as a commercial, ideological, and cultural subject, indicating the need for an exploration of how fatherhood is framed, aiding policy-making around fatherhood in Singapore. While there has been research around how fatherhood is framed in the Singapore context, there is limited analysis of how fatherhood is framed on social media, news outlets, or online forums. Such platforms are where opinions or news on fatherhood are forwarded, people get parenting information, or get quick answers to fatherhood questions. Studying how fatherhood is framed in the online Singaporean context is central to crafting progressive and effective policy around parenting in Singapore, as well as managing the media landscape. Sound and holistic policy around fatherhood in Singapore may reduce stigma and apprehension around being a parent, critical to improving the nation's flagging birth rate. Policies developed in Singapore around fatherhood may then be implemented in nearby East Asian countries, which have similarly low birth rates, to mitigate a rapidly aging society and a shrinking taxpayer base. In this paper, we demonstrate how fatherhood in Singapore is framed on multiple online platforms (news outlets, parenting forums, Twitter). Our main research question (RQ) is as follows: How is fatherhood in Singapore framed on various online platforms? Our findings suggested that while fatherhood was framed in a multiplicity of forms online, it did not seem that fathers were core to the family.
## 2 Related Work
**Fatherhood Framing Online** Work on fatherhood in Singapore is limited. Recent work proposed the concept of Confucian masculinity to explain how the depiction of active fatherhood reinforced the ubiquitous _normal family_ that upholds patriarchal ideology and perpetuates patriarchal power, obscuring the contradictions of class, race, and sexuality that exist in Singapore [3]. Other work examined the fatherhood discourses in _new dad_ ads; feature articles from Today's Parents, a parenting magazine; articles from Life Dads, a government electronic newsletter on fatherhood; and blog entries from three fatherhood blogs [4]. The study employed critical discourse analysis, and proposed a Hegemonic Fatherhood Discourse Schema to postulate that the _new father/man and traditional father/man_ ideology is the hegemonic fatherhood in Singapore, ultimately serving the interests of the Singapore state. While past work detailed framing around fatherhood in Singapore, previous research did not compare framing across on
line platforms, or provide an overview of fatherhood framing to develop policy or informational tools. While there was limited fatherhood research in the Singapore context, there was relatively more research on fatherhood framing online in other contexts. For example, recent work [5] used discussion threads from two Web-based parenting communities, r/Daddit and r/PreDaddit from Reddit. Results demonstrated that men used web-based communities to share the joys and challenges of the fatherhood experience.
## 3 Data and Method
**Data** We first selected three content experts who had published at least ten peer-reviewed articles in the last three years around fatherhood. We ensured the content experts were either from Singapore or conducted research on fatherhood/parenthood in Singapore. Given the wide disciplinary focus of fatherhood research, we sought to select a range of experts across disciplines. We recruited one expert from each of these disciplines: Public policy, social work, computational social science. Selecting experts from a range of fields allows results to be contextualized to fields where fatherhood research is concentrated, allowing for findings to be drawn on by stakeholders in public policy, social work, and computational social science. The context experts separately developed lists of online platforms most relevant to fatherhood in Singapore. Each expert developed a list of ten platforms independently, and we selected only platforms common to all three experts' lists. For each online platform, experts also provided up to 10 examples, where applicable, of websites, or forums, and we selected examples common to all experts' lists. The final list of platforms is as follows: Singapore news outlets (Straits Times, Channel NewsAsia, TODAYonline), parenting forums (singaporemotherhood.com, singaporeparents.com.sg/forum, forums.hardwarezone.com.sg/threads/welcome-to-hwzs-parenting-kids-early-learning-forum.5684416, mummysg.com/forums), Twitter (filtering only posts related to Singapore). Examples of platforms not selected: Facebook, Instagram, Reddit, LinkedIn. We were not able to collect Facebook and Instagram data as there was limited support for CrowdTangle, the main mode of Facebook/Instagram data collection. Similarly, the pushshift.io Reddit API had limited support and Reddit data collected was incomplete. LinkedIn had limited fatherhood posts and posts were mostly centered on non-family content. To capture fatherhood-related text on these platforms, we used queries based on a related systematic review e.g., father* OR dad* OR patern* OR paternal OR paternity OR step-dad* OR step-dad* OR Step-father* OR papa. We used only English-language keywords as most of discussion in the Singapore internet environment is in English. English is also the major language of communication in Singapore. For forums, we used automated scraping techniques (Beautiful Soup) to obtain forum posts from 2010 to 2023, with the same set of keywords. We ran a search for querying the keywords in the title of the forum post or replies to the forum post. We collected all posts that contained these keywords within the forum posts and replies. Regarding Twitter, we used the Twitter API and
the indicated keywords to collect tweets from 2011 to 2023. Finally, for news articles, we used Nexis to obtain news archives from 1992 to 2023. To prepare the data for analysis, English stop words such as _the, a, an_ were removed, along with abbreviations, and terms were stemmed using Porter's stemming algorithm. Stemming converts words with the same stem or root (e.g., innovative and innovator) to a single word type (e.g., innovate). We organized data into three streams for analysis: Twitter (tweets), news (news articles), forums (forum posts).
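A minimal sketch of this preprocessing step is shown below, assuming NLTK's English stop-word list and Porter stemmer as stand-ins for the exact resources used.

```python
import re
from nltk.corpus import stopwords      # requires nltk.download("stopwords")
from nltk.stem import PorterStemmer

# Minimal preprocessing sketch: lowercase, drop English stop words, Porter-stem.
STOP = set(stopwords.words("english"))
stemmer = PorterStemmer()

def preprocess(text: str) -> list[str]:
    tokens = re.findall(r"[a-z']+", text.lower())
    return [stemmer.stem(t) for t in tokens if t not in STOP]

print(preprocess("The innovative father and the innovator dad spent time together."))
# e.g. ['innov', 'father', 'innov', 'dad', 'spent', 'time', 'togeth']
```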
**Sentiment** Sentiment analysis can aid us in comprehending how sentiment around fatherhood is expressed in the online arena. As an example, forums may be more likely to have lower sentiment compared to news. We used DistilBERT for sentiment analysis, applied separately to the data from each platform. The model assigns sentiment to each article or post. Sentiment is on a -1 to 1 scale, where values \(<\)0 are negative sentiment, \(>\)0 are positive sentiment, and values close to 0 are neutral. To stay within the maximum input size of the model, the text length (title + body text) was clipped to 512 tokens.
**Emotion Recognition** Emotion recognition can help us understand how emotions are expressed across various platforms, indicating differences in how fatherhood is framed in Singapore. For example, forums may be more likely to contain anger compared to news. We used DistilBERT for emotion recognition. The model was applied separately on data from each platform. The model assigns emotions (anger, fear, joy, love, sadness, surprise) based on each article or post. To stay within the admitted input size of the model, we clipped the length of the text (title + body text) to 512 tokens.
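A sketch of the per-document scoring step is given below. The checkpoint names are assumptions (publicly available DistilBERT sentiment and emotion checkpoints with the label sets described above), and the mapping of classifier confidence onto a signed [-1, 1] sentiment scale is our own convention rather than a detail fixed by the study.

```python
from transformers import pipeline

# Sketch of per-document sentiment and emotion scoring with DistilBERT models.
# Checkpoint names are assumptions; any DistilBERT sentiment / emotion
# checkpoint with the same label sets would serve.
sentiment = pipeline("sentiment-analysis",
                     model="distilbert-base-uncased-finetuned-sst-2-english")
emotion = pipeline("text-classification",
                   model="bhadresh-savani/distilbert-base-uncased-emotion")

def score(doc: str) -> dict:
    # Truncate to the 512-token input limit, as described above.
    s = sentiment(doc, truncation=True, max_length=512)[0]
    e = emotion(doc, truncation=True, max_length=512)[0]
    # Map the binary label + confidence onto a signed [-1, 1] scale (our convention).
    signed = s["score"] if s["label"] == "POSITIVE" else -s["score"]
    return {"sentiment": signed, "emotion": e["label"]}

print(score("Happy Father's Day to the best dad, thank you for everything!"))
```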
We provided an overview of the data in Table 1. Two reviewers independently examined 10% of the articles or posts within each dataset to confirm salience with our research question. The reviewers then discussed their findings and highlighted items deemed relevant across both lists. We noted the following relevance proportions: News outlets (82%), Twitter (90%), Parenting forums (78%).
## 4 Results
**Overview** We first explored sample posts across platforms. News outlets generally mentioned fatherhood in the context of providing demographic data about interviewees, with excerpts such as _So the 40-year-old eye specialist and father of three had to wrap up his work at the hospital quickly_, or when interviewees were referring to their fathers with no specific reference to fatherhood e.g., _Mr Lee,
\begin{table}
\begin{tabular}{|l|r|} \hline Platform & Data collected (e.g., N of posts, articles) \\ \hline News outlets & 15,705 articles, 9,811,513 words \\ \hline Twitter & 54,283 tweets, 900,939 words \\ \hline Parenting forums & 969 threads, 425,966 words \\ \hline \end{tabular}
\end{table}
Table 1: Data collected across online platforms
whose father founded the clan association, rents out its third floor to a small media firm._ Broadly, news outlets did not seem to focus on the experience of fatherhood, with the bulk of articles mentioning fathers as a demographic indicator. Twitter posts focused on people recounting incidents, often humorous or heart-warming, with their fathers e.g., _My dad was telling me something serious and he hit his leg against the table and I burst out laughing so he had no choice but to laugh_, _Dad brought back homemade fresh hornfun (noodles) from the temple. It's delicious_. Twitter seemed to have a greater focus on fathers playing a core function in the Singapore family unit. Posts from forums were very diverse topically. Several posts were about hiring a helper for a young child: _My husband is totally against the idea of employing a helper, as he does not like a stranger living with us_; _I am a father of a newborn baby girl. I recently engaged a confinement lady by the name of Auntie Judy_. Such posts suggest the significant role domestic helpers play in the Singaporean family, and how a portion of a father's role is perhaps to oversee the hiring of the domestic helper. Other posts were about suspected infidelity e.g., _So my Wife of 2 years has been cheating on me with another male colleague_, perhaps indicative of the strain parenting is related to within some Singaporean families.
We then provided word clouds in Figure 1 as an overview of the data. Across all datasets, words such as _time, work, now_ were prominent, perhaps indicative of how work and likely limited time are central to fatherhood in Singapore. Most common trigrams for news articles centered on leaders of Singapore, who were father and son: _Lee Kwan Yew_ and _Lee Hsien Loong_. This may indicate that the mainstream news media discussion around fatherhood had little to do with fathers' role in a family, but simply around familial relationships within major news stories. In 1992 - 2003, common trigrams in the news were _engineer success story_ and _pressure parent counting_. From 2004 - 2019, common trigrams were _two baby boy_, _first new baby_, and _first time parent_. From 2020 - 2022, common trigrams were _generation grit family_, and _grit family love_. Broadly, news trigrams may detail how the initial focus was on children bringing pride and wealth to their families, with a transition toward celebrating new births. In more recent
Figure 1: Word cloud visualizations for news (1a), Twitter (1b), forums (1c) based on keywords relevant to fatherhood in Singapore
years, forums tended to focus on how the family unit could overcome struggles. The most common trigrams in Twitter focused on celebrating fathers through specific events such as Father's Day and birthdays: _happy father's day_, _happy birthday daddy_. Such phrases indicated that Twitter may be used to celebrate fathers, but only in relation to pre-defined events, instead of fathers being celebrated for time put toward caregiving etc. Common trigrams in 2011 - 2020 were _love u dad_, _dad love love_. 2021 onwards, popular trigrams were _feel fulfilling husband_, and _last nite daddy_. Twitter data demonstrated a shift from declaring love for one's father, to fathers indicating how they were fulfilled in their role. Unlike other datasets, there appears to be a shift towards a more active form of fatherhood in Singapore, where fathers describe pride in their role. Trigrams in forums centered on perceived marital infidelity, such as _wife unfaithful husband_, and assisted reproductive technologies, such as _ivf mommy token_, and _cousin egg donor_. Forums seemed to be platforms where people sought support around spousal infidelity and assisted reproductive technologies, rather than discuss fathers' role in the family unit. The most common trigrams in forums changed over time, with phrases such as _gave birth daughter_, and _first time dad_ in 2010 - 2019, but with phrases such as _happen file divorce_, and _judged urged divorcing_ in 2020. In 2021, common trigrams were _conceiving single women_, while in 2022, trigrams such as _crave physical intimacy_, and _physicial intimacy normal_ were popular. Forums, while initially around celebrating birth, may have become places where people sought information around divorce, assisted reproductive technologies, and physical intimacy. Broadly, descriptive data indicated shifting framing around fatherhood, but a limited focus on fathers as core to the Singapore family.
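A minimal sketch of the trigram tallies referenced above is shown below (assuming NLTK; `docs` stands in for one platform's list of texts after the preprocessing described earlier).

```python
import re
from collections import Counter
from nltk.util import ngrams

# Sketch of per-platform trigram tallies used in the descriptive overview.
def top_trigrams(docs, k=10):
    counts = Counter()
    for doc in docs:
        tokens = re.findall(r"[a-z']+", doc.lower())
        counts.update(ngrams(tokens, 3))
    return counts.most_common(k)

docs = ["happy fathers day dad", "happy fathers day daddy love you"]
print(top_trigrams(docs, k=3))
```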
**Sentiment**
We presented sentiment analysis results across each platform in Table 2. News and Twitter had higher proportions of positive sentiment (53.7% and 57.0% respectively) compared to forums (27.2%). Forums had the highest proportion of negative sentiment (65.9%), compared to news and Twitter (43.8% and 33.8% respectively). We then presented sentiment analysis results over time for each platform in Figure 2. News data exhibited several fluctuations but had the greatest rise in positive sentiment post-2009. The nationwide fatherhood movement, Dads for Life, started in 2009, may explain the increase in positive sentiment. Examples of news article content with positive sentiment were as follows: _A group of prominent figures from various organisations and businesses have banded to
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline & News & Twitter & Forums \\ \hline Positive & 53.7\% & 57.0\% & 27.2 \% \\ \hline Negative & 43.8\% & 33.8\% & 65.9 \% \\ \hline Neutral & 2.5\% & 9.1\% & 6.9 \% \\ \hline \end{tabular}
\end{table}
Table 2: Sentiment analysis breakdown for various platforms.
Figure 2: Sentiment analysis over time for various platforms.
gether to start up the Fathers Action Network. The network aims to kick-start a movement called Dads for Life to get fathers more involved with their families, especially in their children' lives. This follows a fatherhood perception survey conducted in April and May this year by a Ministry. Most felt that being a father and raising children is one of the most fulfilling experiences a man can have._; _Work is work and family is family. Our ultimate goal is still our family. Work is just a means to get the money so we should be very clear about it. And that is the sort of spirit that the Dads for Life movement wants to inspire._ After 2017, positive sentiment declined over time, and was overtaken by negative sentiment. Forums had broadly negative sentiment 2015 onward, reaching a peak in 2017, followed by a steady decline. Twitter exhibited mostly positive sentiment 2013 onward with a steady decline after. We suggest that the high proportion of positive sentiment in the news may be related to governmental initiatives and the high proportion of negative sentiment in forums may be related to a more frank discussion of the stresses of parenting.
**Emotion Recognition**
We presented emotion recognition results across each platform in Table 3. News had the highest proportion of joyous (61.3%) and loving (34.2%) posts, perhaps reflecting governmental initiatives around fatherhood. While Twitter and forums had similar levels of joyous posts (56.6% and 44.2% respectively), they were still not as high as news. Similarly, loving posts on Twitter and forums (2.4% and 4.1% respectively) were far lower than news outlets. We suggest that the emotion in the news reflects pro-fatherhood governmental initiatives, but these do not always filter successfully to other media. We then presented emotion recognition results over time for each platform in Figure 3. News data exhibited several fluctuations but had the steepest rise post-2009. Dads for Life, started in 2009, may explain the uptick in news articles, especially around joy. Examples of news article content that were coded as joy: _It's a happy Father's Day for SAFRA, as it is set to receive funds from the "Dads for Life" movement to pump up father-friendly activities for its members over the next two years._; _He will be running alongside his daughter in the Dads For Life 800m Father and Child Challenge, a new category in the annual SAFRA Singapore Bay Run and Army Half-Marathon. Mr Shariff, who was born without part of his left leg, said:
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline & News & Twitter & Forums \\ \hline Anger & 32.7\% & 28.5\% & 25.5\% \\ \hline Fear & 18.8\% & 4.1\% & 6.3\% \\ \hline Joy & 61.3\% & 56.6\% & 44.2\% \\ \hline Love & 34.2\% & 2.4\% & 4.1\% \\ \hline Sadness & 2.0\% & 7.6\% & 18.9\% \\ \hline Surprise & 11.4\% & 0.8\% & 1.0\% \\ \hline \end{tabular}
\end{table}
Table 3: Emotion recognition breakdown for various platforms.
Figure 3: Emotion recognition over time for various platforms.
I signed us up because I want to show her how running can make her happy._ Both Twitter and forum posts saw a sudden spike post-2013 onward, mostly around joy. We suggest that the shift in emotion may be due to a delayed reaction to Dads for Life. Broadly, we forward that the 2009 Dads for Life movement and other similar policies may have catalyzed emotional reactions around fatherhood in the Singapore online arena. However, the rises in emotion were not sustained and seemed to decline by 2023, perhaps indicative that new policy levers may need to be rolled out.
## 5 Discussion
Our RQ was to explore how fatherhood in Singapore is framed on various online platforms. A strength of our work is how the different techniques we applied validate each other as well as reveal differences across platforms. While fatherhood was framed in a range of ways on the Singapore online environment, it did not seem that fathers were framed as central to the Singaporean family unit. Results also indicated that governmental initiatives may have some effect on altering the framing of fatherhood, but are not lasting in effect. The concordance in our results suggests the veracity of our findings and we hope that results can add to research and policy around fatherhood in Singapore. Our evidence adds to previous research, where we provided data on how governmental initiatives may initially buttress framing around fatherhood, but needs to be sustained to provide broad and lasting support for fathers. Key to how fatherhood is framed in Singapore is the inclusion of fathers' viewpoints when writing news articles on fatherhood. Where possible, fathers themselves should be consulted on articles about fatherhood. For example, a panel staffed by fathers can comment on fatherhood-related online news articles, providing suggestions on how articles can more accurately represent fathers' concerns [1, 2]. Our findings relied on the validity of data collected with our search terms. We used a range of established techniques to search for all articles/posts relevant to fatherhood, and our data contained text aligned with how fatherhood is framed. We were thus confident in the comprehensiveness of our data. We only used English-language text but will include other languages in future work. Given the token limits for the emotion recognition technique, we were not able to use emotion recognition for the entirety of longer news articles. We note that the recall of the search string was not tested. We note that our data may not be generalizable to how fatherhood is framed globally. Our goal was not to identify who was doing the framing around fatherhood e.g., family members or government. Future studies will seek to identify which stakeholders were likely involved in the framing.
|
2302.10820
|
Device Tuning for Multi-Task Large Model
|
Unsupervised pre-training approaches have achieved great success in many
fields such as Computer Vision (CV), Natural Language Processing (NLP) and so
on. However, compared to typical deep learning models, pre-training or even
fine-tuning the state-of-the-art self-attention models is extremely expensive,
as they require much more computational and memory resources. It severely
limits their applications and success in a variety of domains, especially for
multi-task learning. To improve the efficiency, we propose Device Tuning for
the efficient multi-task model, which is a massively multitask framework across
the cloud and device and is designed to encourage learning of representations
that generalize better to many different tasks. Specifically, we design Device
Tuning architecture of a multi-task model that benefits both cloud modelling
and device modelling, which reduces the communication between device and cloud
by representation compression. Experimental results demonstrate the
effectiveness of our proposed method.
|
Penghao Jiang, Xuanchen Hou, Yinsi Zhou
|
2023-02-21T16:55:48Z
|
http://arxiv.org/abs/2302.10820v1
|
# Device Tuning for Multi-Task Large Model
###### Abstract
Unsupervised pre-training approaches have achieved great success in many fields such as Computer Vision (CV), Natural Language Processing (NLP) and so on. However, compared to typical deep learning models, pre-training or even fine-tuning the state-of-the-art self-attention models is extremely expensive, as they require much more computational and memory resources. This severely limits their applications and success in a variety of domains, especially for multi-task learning. To improve the efficiency, we propose Device Tuning for efficient multi-task models, a massively multi-task framework across cloud and device designed to encourage learning of representations that generalize better to many different tasks. Specifically, we design a Device Tuning architecture for the multi-task model that benefits both cloud modeling and device modeling, and that reduces the communication between device and cloud by representation compression. Experimental results demonstrate the effectiveness of our proposed method.
## Introduction
Self-attention-based models, especially vision transformers [1], are an alternative to convolutional neural networks (CNNs) to learn visual representations. Briefly, ViT divides an image into a sequence of non-overlapping patches and then learns inter-patch representations using multi-headed self-attention in transformers [16]. The general trend is to increase the number of parameters in ViT networks to improve the performance (e.g., 2020, 2021, 2020). However, these performance improvements come at the cost of model size (network parameters) and latency. Many real-world applications (e.g., augmented reality and autonomous wheelchairs) require visual recognition tasks (e.g., object detection and semantic segmentation) to run on resource-constrained mobile devices in a timely fashion. To be effective, ViT models for such tasks should be lightweight and fast. Even if the model size of ViT models is reduced to match the resource constraints of mobile devices, their performance is significantly worse than light-weight CNNs. For instance, for a parameter budget of about 5-6 million, DeiT [11] is 3% less accurate than MobileNetv3 [14]. However, it is still extremely expensive to pretrain or even just to fine-tune the Transformer layers, as they require much more computational and memory resources compared to traditional models. This largely limits their applications and success in more fields.
To reduce the computational and memory resources of the centralized cloud model, recent works [13] explored a split deployment across cloud and device, which could reduce the inference cost and memory resources. Such works on mobile computing and the Internet of Things (IoTs) are driving computing toward dispersion. The increasing capacity of mobile devices makes it possible to consider moving intelligence services, such as online machine translation and online dialogue modeling, from cloud modeling to device modeling. Several recent works have explored the advantages of this pervasive computing from different perspectives, such as privacy [13], efficiency [15], and applications [16]. There have been some efforts to distill BERT into resource-limited mobile devices. However, how to leverage the advantages of the device modeling and the cloud modeling jointly to benefit both sides is still a challenge for unsupervised pre-training models.
The first issue of this challenge is how to design an architecture that not only has a lower resource-to-performance ratio on device but also takes advantage of device modeling and cloud modeling jointly. The second issue of this challenge is how to design an effective multi-task framework which can learn one general, scalable and lighter model.
To overcome the challenges mentioned above, we propose the Device Tuning framework, which is one general framework across cloud and device for multiple tasks. As shown in Figure 1, previous unsupervised pre-training methods learn a centralized cloud model, while models designed for resource-limited mobile devices learn a task-specific device model. Different from these methods, our device tuning method shares parameters in the cloud and learns task-specific parameters on the device. Specifically, to overcome the first issue, we propose a general framework including a device encoder and a cloud decoder, which reduces the communication by representation compression. Then, to overcome the second issue,
we consider a gradient normalization method which automatically balances training in the multi-task framework by dynamically tuning gradient magnitudes. In summary, the contributions of this paper are:
* Different from existing works that only consider either cloud modeling or on-device modeling, we design a Device Tuning architecture for the multi-task model that benefits both cloud modeling and device modeling.
* We consider a novel method which reduces the communication between device and cloud by representation compression.
* Extensive experiments show that our proposed Device Tuning framework can significantly improve performance across different tasks.
### Related Work
Dosovitskiy et al. (2021) apply transformers of Vaswani et al. (2017) for large-scale image recognition and showed that with extremely large-scale datasets (e.g., JFT-300M), ViTs can achieve CNN-level accuracy without image-specific inductive bias. With extensive data augmentation, heavy L2 regularization, and distillation, ViTs can be trained on the ImageNet dataset to achieve CNN-level performance (Touvron et al., 2021, 2020). However, unlike CNNs, ViTs show substandard optimizability and are difficult to train. Subsequent works (e.g., Graham, El-Nouby, Touvron, Stock, Joulin, Jegou, and Douze (2021); Dai, Liu, Le, and Tan (2021); Liu, Lin, Cao, Hu, Wei, Zhang, Lin, and Guo (2021); Wang, Xie, Li, Fan, Song, Liang, Lu, Luo, and Shao (2021); Yuan, Chen, Wang, Yu, Shi, Jiang, Tay, Feng, and Yan (2021); Chen, Dai, Chen, Liu, Dong, Yuan, and Liu (2021)) shows that this substandard optimizability is due to the lack of spatial inductive biases in ViTs. Incorporating such biases using convolutions in ViTs improves their stability and performance. Different designs have been explored to reap the benefits of convolutions and transformers. For instance, ViT-C of Xiao et al. (2021) adds an early convolutional stem to ViT. CvT (Wu et al., 2021) modifies the multi-head attention in transformers and uses depth-wise separable convolutions instead of linear projections. BoTNet (Srinivas et al., 2021) replaces the standard 3 \(\times\) 3 convolution in the bottleneck unit of ResNet with multi-head attention. ConviT (d'Ascoli et al., 2021) incorporates soft convolutional inductive biases using a gated positional self-attention. PiT (Heo et al., 2021) extends ViT with depth-wise convolution-based pooling layer. Though these models can achieve competitive performance to CNNs with extensive augmentation, the majority of these models are heavy-weight. For instance, PiT and CvT learns 6.1 \(\times\) and 1.7 \(\times\) more parameters than Efficient-Net (Tan and Le, 2019) and achieves similar performance (top-1 accuracy of about 81.6%) on ImageNet-1k dataset, respectively. Also, when these models are scaled down to build light-weight ViT models, their performance is significantly worse than light-weight CNNs. For a parameter budget of about 6 million, ImageNet-1k accuracy of PiT is 2.2% less than MobileNetv3.
## Method
### Preliminary
**Transformer** Transformer layers (Vaswani et al., 2017), a highly modularized neural network component, have achieved state-of-the-art performance across various tasks. Each Transformer layer consists of two sub-modules: multi-head self-attention (S-Attn) and a position-wise feed-forward network (P-FFN). A residual connection and layer normalization wrap both sub-modules. The computation of a single Transformer layer on a length-\(T\) sequence of hidden states \(\mathbf{h}=[h_{1},\dots,h_{T}]\) can be expressed as
\[\mathbf{h}\leftarrow\mathrm{LayerNorm}(\mathbf{h}+\mathrm{S-Attn }(\mathrm{Q},\mathrm{K},\mathrm{V}=\mathbf{h})) \tag{1}\] \[h_{i}\leftarrow\mathrm{LayerNorm}\left(h_{i}+\mathrm{P-FFN} \left(h_{i}\right)\right),\quad\forall i=1,\cdots,T \tag{2}\]
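A minimal PyTorch rendering of Eqs. (1)-(2) is sketched below; the model dimensions and activation are illustrative assumptions, not values taken from the paper.

```python
import torch
import torch.nn as nn

# Minimal sketch of Eqs. (1)-(2): post-norm self-attention followed by a
# position-wise FFN, both wrapped in residual connections. Sizes are illustrative.
class TransformerLayer(nn.Module):
    def __init__(self, d_model=256, n_heads=4, d_ffn=1024):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ffn = nn.Sequential(nn.Linear(d_model, d_ffn), nn.ReLU(),
                                 nn.Linear(d_ffn, d_model))
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)

    def forward(self, h):                       # h: (batch, T, d_model)
        attn_out, _ = self.attn(h, h, h)        # Q = K = V = h, Eq. (1)
        h = self.norm1(h + attn_out)
        return self.norm2(h + self.ffn(h))      # Eq. (2), applied position-wise

h = torch.randn(2, 16, 256)
print(TransformerLayer()(h).shape)              # torch.Size([2, 16, 256])
```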
### Device Tuning
To design a general and efficient framework across cloud and device for multiple tasks, the main challenge is to solve the computational efficiency problem and reduce the communication. To achieve representation compression and computation reduction, our model employs a device encoder that reduces the sequence length of the hidden states, keeping the same overall skeleton of interleaved multi-head self-attention and position-wise feed-forward networks and inheriting the high capacity and optimization advantages of the Transformer architecture.
To solve the computational efficiency problem and reduce the communication, we consider a device encoder that reduces the sequence length of the hidden states, as shown in Figure 2. The device encoder reduces the length of the hidden sequence by performing a certain type of pooling along the sequence dimension. For a hidden sequence \(\mathbf{h}\), we have \(\mathbf{h}^{\prime}\leftarrow\mathrm{Pooling}(\mathbf{h})\), where \(\mathbf{h}\in\mathbb{R}^{T\times D}\) and \(\mathbf{h}^{\prime}\in\mathbb{R}^{T^{\prime}\times D}\) for some \(T^{\prime}<T\). Thus, with the query, key and value vectors taken from the pooled sequence, the self-attention layer computes
\[\mathbf{h}\leftarrow\mathrm{LayerNorm}\,\left(\mathbf{h}^{\prime}+\mathrm{ S-Attn}\left(\mathrm{Q},\mathrm{K},\mathrm{V}=\mathbf{h}^{\prime}\right)\right) \tag{3}\]
It is worth noting that the output sequence of this multi-head self-attention (S-Attn) module has the same length as the pooled sequence \(\mathbf{h}^{\prime}\). Such a pooling strategy merges (or compresses) close tokens into a larger semantic component, which
Figure 1: Image recognition on mobile devices.
intuitively follows the linguistic prior. The rest of the encoder computation just follows the typical updates in Eq. (2) and (1) once the sequence length is halved following the pooling attention. The output of the device encoder is passed to the cloud decoder.
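A sketch of this pooled attention step, Eq. (3), is given below. Mean pooling with stride 2 is our assumption for the "certain type of pooling" (consistent with the sequence length being halved); the model dimensions are again illustrative.

```python
import torch
import torch.nn as nn

# Sketch of the device-encoder pooling step and Eq. (3): halve the sequence length
# by mean-pooling along the sequence dimension, then attend with Q = K = V = h'.
class PooledSelfAttention(nn.Module):
    def __init__(self, d_model=256, n_heads=4):
        super().__init__()
        self.pool = nn.AvgPool1d(kernel_size=2, stride=2)
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(d_model)

    def forward(self, h):                                        # h: (batch, T, d_model)
        h_pooled = self.pool(h.transpose(1, 2)).transpose(1, 2)  # (batch, T/2, d_model)
        attn_out, _ = self.attn(h_pooled, h_pooled, h_pooled)    # Eq. (3)
        return self.norm(h_pooled + attn_out)

h = torch.randn(2, 16, 256)
print(PooledSelfAttention()(h).shape)                            # torch.Size([2, 8, 256])
```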
## Experiments
In this section, we conduct experiments on benchmarks to evaluate the effectiveness of the proposed frameworks by first pretraining it and then fine-tuning it in downstream tasks.
### Performance Comparison
#### Same-scale Results
In the same-scale setting, we compare Device Tuning to standard Transformer models with a similar amount of computation. We choose recent models with a similar number of parameters as ours. The results are shown in Table 1. From Table 1, we find that our proposed method outperforms the baselines in all cases, which demonstrates the effectiveness of the proposed method.
#### Different-scale Results
To show the effectiveness of our proposed Device Tuning, we compare Device Tuning with different backbones. The results are shown in Table 2. The models are trained in the same settings. Similar to the same-scale results, our method outperforms the baselines in all cases, suggesting the good scalability of our proposed Device Tuning.
## Conclusion
Recently, unsupervised pre-training methods have achieved great success in many fields such as Computer Vision (CV), Natural Language Processing (NLP) and so on. However, it is extremely expensive to pretrain or even just to fine-tune the state-of-the-art self-attention models, as they require many more FLOPs and much more memory compared to traditional models. To improve the efficiency, we propose Device Tuning for efficient multi-task models, a massively multi-task framework across cloud and device designed to encourage learning of representations that generalize better to many different tasks. Specifically, we design an architecture that not only has a lower resource-to-performance ratio on device but also takes advantage of device modeling and cloud modeling jointly. Experimental results demonstrate the effectiveness of our proposed method.
|
2306.13606
|
Machine Learning methods for simulating particle response in the Zero
Degree Calorimeter at the ALICE experiment, CERN
|
Currently, over half of the computing power at CERN GRID is used to run High
Energy Physics simulations. The recent updates at the Large Hadron Collider
(LHC) create the need for developing more efficient simulation methods. In
particular, there exists a demand for a fast simulation of the neutron Zero
Degree Calorimeter, where existing Monte Carlo-based methods impose a
significant computational burden. We propose an alternative approach to the
problem that leverages machine learning. Our solution utilises neural network
classifiers and generative models to directly simulate the response of the
calorimeter. In particular, we examine the performance of variational
autoencoders and generative adversarial networks, expanding the GAN
architecture by an additional regularisation network and a simple, yet
effective postprocessing step. Our approach increases the simulation speed by 2
orders of magnitude while maintaining the high fidelity of the simulation.
|
Jan Dubiński, Kamil Deja, Sandro Wenzel, Przemysław Rokita, Tomasz Trzciński
|
2023-06-23T16:45:46Z
|
http://arxiv.org/abs/2306.13606v1
|
Machine Learning methods for simulating particle response in the Zero Degree Calorimeter at the ALICE experiment, CERN
###### Abstract
Currently, over 50% of the computing power at CERN's GRID is used to run High Energy Physics simulations. The recent updates at the Large Hadron Collider (LHC) create the need for developing more efficient simulation methods. In particular, there exists a demand for a fast simulation of the neutron Zero Degree Calorimeter, where existing Monte Carlo-based methods impose a significant computational burden. We propose an alternative approach to the problem that leverages machine learning. Our solution utilises neural network classifiers and generative models to directly simulate the response of the calorimeter. In particular, we examine the performance of variational autoencoders and generative adversarial networks, expanding the GAN architecture by an additional regularisation network and a simple, yet effective post-processing step. Our approach increases the simulation speed by 2 orders of magnitude while maintaining the high fidelity of the simulation.
## 1 Introduction
At the European Organisation for Nuclear Research (CERN) located near Geneva, Switzerland physicists and engineers study the fundamental properties of matter through High Energy Physics (HEP) experiments. Inside the Large Hadron Collider (LHC), two particle beams are being accelerated nearly to the speed of light and brought to collide in order to recreate the extreme conditions of the early universe just after the Big Bang.
Understanding what happens during these collisions requires complex simulations that generate the expected response of the detectors inside the LHC. The currently used methods are based on statistical Monte Carlo simulations of physical interactions of particles. The high-fidelity results they provide come at a price of high computational cost. Currently, standard simulation procedures occupy the majority of CERN's computing grid system (over 500 000 CPUs in 170 centres). To address the shortcomings of this approach, an alternative solution for simulation in high-energy physics experiments that leverages generative machine learning techniques has been proposed recently. [4, 6, 13]
In this work, we examine the performance of machine learning models on the task of simulating the data from the neutron Zero Degree Calorimeter (ZDC) from the ALICE experiment, CERN. We apply a variational autoencoder and generative adversarial networks to the problem treating the results as baselines. Moreover, we expand the GAN architecture with an additional regularisation network and a simple, yet effective postprocessing step. Our solution uses a neural network classifier to filter inputs that do not cause any response of the calorimeter before passing the data to the generative model.
The proposed models are able to generate the end data directly, without simulating the effect of every physical law and interaction between particles and the experiment's matter separately. Therefore, this approach greatly reduces the demand for computational power. Our approach increases the simulation speed by 2 orders of magnitude while maintaining the high fidelity of the simulation.
## 2 Related work
The need for simulating complex processes exists across many scientific domains. In recent years, solutions based on generative machine learning models have been proposed as an alternative to existing methods in cosmology [16] and genetics [15]. However, one of the most profound applications for generative simulations is in the field of High Energy Physics, where machine learning models can be used as a resource-efficient alternative to classic Monte Carlo-based [10] approaches.
Recent attempts to simulate High Energy Physics experiments [7, 11, 13] leverage solutions based on Generative Adversarial Networks [8] or Variational Autoencoders [12]. To the best of our knowledge [13] is the first attempt to simulate a CERN calorimeter with generative machine learning models. The authors combine three parallel GAN processing streams and an attention mechanism to simulate the response of an electromagnetic calorimeter. The authors of [3] use Wasserstein GAN to simulate the response of another electromagnetic calorimeter. Similarly to our method, the authors embed a regressor pretrained on predicting input particle parameters in the model. This network extension allows them to overcome the difficulties with conditioning on continuous values. In [2] the authors investigate the use of a network architecture dubbed Bounded Information Bottleneck Autoencoder to simulate an electromagnetic calorimeter. Their approach employs multiple additional regularization networks. Additionally, this work utilizes a post-processing network which must be trained jointly with the remaining network components. Our method employs a similar post-processing step, however, it does not require the training of an additional neural network.
## 3 Zero Degree Calorimeter simulation
The neutron Zero Degree Calorimeter is a quartz-fiber spaghetti calorimeter, which will measure the energy of the spectator neutrons in heavy ion collisions
at the CERN LHC. Its principle of operation is based on the detection of the Cherenkov light produced by the charged particles of the shower in silica optical fibres, embedded in a W-alloy absorber. [1]. One out of every two fibres is sent to a photomultiplier (PMTc), while the remaining fibers are collected in bundles and sent to four different photomultipliers (PMT1 to PMT4) forming four independent towers. This segmentation allows to check the relative behaviour of the different towers and to give a rough localization of the spectator neutron's spot on the front face of the calorimeter. The information coming from the PMTc provides a complementary measurement of the shower's energy, in particular, useful for calibration purposes. Since the number of photons collected by each tower (further referred to as channels) is directly used in the further analysis of the calorimeter's output, we aim to achieve the best possible agreement, measured between the distributions of channel values for the original and fast simulation.
Simulating the response of the Zero Degree Calorimeter (ZDC) offers a challenging benchmark for generative models. The dataset consists of over 8 million samples obtained from the GEANT4 [10] simulation tool. Each response is created by a single particle described with 9 attributes (mass, energy, charge, momenta, primary vertex).
During the simulation process, the particle is propagated through the detector for over 100 meters while simulation tools must account for all of its interactions with the detector's matter. The end result of the simulation is the energy deposited in the calorimeter's fibres, which are arranged in a grid with 44 \(\times\) 44 size. We treat the calorimeter's response as a 1-channel image with 44 \(\times\) 44 pixels, where pixel values are the number of photons deposited in a given fibre. The schema of the simulation is depicted in Fig.1.
To create the dataset the simulation was run multiple times for the same input particles. For that reason, multiple possible outcomes correspond to the same particle properties. We refer to this dataset as HEP.
Importantly, over 95% of input particles do not produce any response of the calorimeter. For that reason, there are only 300 thousand non-zero ZDC responses in the dataset. We randomly split the dataset into training (80%) and validation (20%) subsets.
Figure 1: Fast simulation of the Zero Degree Calorimeter
## 4 Method
The proposed method for simulating the response of the ZDC consists of two main parts. First, we pass the input particle parameters to a binary classifier that assigns one of two possible class labels to the particle - zero or non-zero. If a particle produces an empty ZDC response, the simulation returns a 44x44 matrix of zeros. If a particle is labelled as producing a non-empty response, then we pass its parameters to a generative model. The generative model synthesises a calorimeter response from the input particle parameters and a random noise vector. The schema of our method is visible in Fig. 2.
**Zero vs non-zero response classifier** We use a binary neural network classifier to filter particles that do not produce any response of the ZDC. The model is a densely connected 3-layer network trained to distinguish between input parameters that produce results belonging to one of 2 classes - zero vs non-zero response. The first two layers of the network consist of 124 and 64 neurons respectively with ReLU activation functions. The last layer consists of a single neuron with a sigmoid activation function. The model was trained using binary cross-entropy loss.
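A Keras sketch of this classifier is shown below; the layer sizes, activations and loss follow the description above, while the optimizer choice is an assumption.

```python
import tensorflow as tf
from tensorflow.keras import layers

# Sketch of the zero vs non-zero response classifier: 9 particle attributes in,
# two ReLU dense layers (124 and 64 units), sigmoid output, binary cross-entropy.
# The optimizer is our assumption.
def build_classifier(n_features: int = 9) -> tf.keras.Model:
    model = tf.keras.Sequential([
        layers.Input(shape=(n_features,)),
        layers.Dense(124, activation="relu"),
        layers.Dense(64, activation="relu"),
        layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model

model = build_classifier()
model.summary()
# model.fit(particle_params, has_response, epochs=10, validation_split=0.2)
```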
**Variational Autoencoder** As a baseline generative model, we apply a Variational Autoencoder [12] to the problem of simulating data from the ZDC. The network consists of 2 parts: the encoder and the decoder. During training, the encoder compresses the data into a multivariate normal distribution of the latent variables. The decoder attempts to decompress the data and reconstruct the input. We use a conditional variant of the model, providing a conditional particle data vector for both the encoder and the decoder, as presented in Fig. 3. For inference, only the decoder model is used, generating data from random normal noise and a conditional vector representing particle properties.
We use the following architecture for the encoder network:
Figure 2: Simulation pipeline.
* 3 convolutional layers with 32, 64 and 128 4x4 filters respectively, with stride = 2 and a LeakyReLU activation function
* a flattening layer to the output of which we concatenate corresponding conditional particle data vector
* a fully connected layer with 32 neurons and a LeakyReLU activation function
* two layers with 10 neurons for encoding mean and deviation of the latent variables
The decoder model has the following architecture:
* an input layer of size 10 for latent variables to which we concatenate corresponding conditional particle data vector
* a fully connected layer with 4608 neurons, reshaped to a 6\(\times\)6\(\times\)128 shape
* 3 blocks each consisting of an upsampling layer, a convolutional layer (128/64/32 4\(\times\)4 filters), a batch normalization layer and a LeakyReLU activation function
* an output convolutional layer with one 5\(\times\)5 filter and a ReLU activation function
**Deep Convolutional Generative Adversarial Network** We adopt the Deep Convolutional Generative Adversarial Network architecture introduced in [14] as another baseline approach for the generative model. The GAN architecture consists of 2 networks: the generator and the discriminator. The generator learns to transform random noise into realistic data samples, while the discriminator learns to distinguish real and generated data. The two networks compete with each other during training. This process leads to a generator that can be used to generate new realistic data samples. We employ a conditional variant of the model, providing a conditional particle data vector for both the generator and the discriminator, as presented in Fig. 4.
The generator model has the following architecture:
Figure 3: Variational autoencoder.
* an input layer of size 10 for random normal noise to which we concatenate corresponding conditional particle data vector
* 2 fully connected layers with 256 and 21632 neurons, dropout=0.2 and LeakyReLU activation, reshaped to a 13\(\times\)13\(\times\)128 shape
* 2 blocks each consisting of an upsampling layer, a convolutional layer (128/64 3\(\times\)3 filters), a batch normalization layer and a LeakyReLU activation function
* an output convolutional layer with one 3\(\times\)3 filter and a ReLU activation function
For the discriminator we use the following network:
* 2 convolutional layers with 32, 16 4x4 filters respectively, with stride = 2 and LeakyReLU activation function
* a flattening layer to the output of which we concatenate corresponding conditional particle data vector
* 2 fully connected layer with 128 and 64 neurons with LeakyReLU activation function and dropout = 0.2
* an output layer with a single neuron with a sigmoid activation
#### Auxiliary regressor
To improve how well the GAN model reflects the geometric properties of the real data, we expand the standard GAN architecture by an auxiliary regressor. This additional network was pretrained to output the position coordinates of the maximum number of photons in the input image. The regressor provides an additional source of loss to the generator by comparing the coordinates of the maximum of the generated examples with the maximum coordinates of the corresponding sample in the training set. During the training of the GAN model the loss coming from the auxiliary regressor is added to the loss of the generator. The auxiliary regressor has the same architecture as the GAN discriminator, apart from the last layer, which has 2 neurons with a ReLU activation function. The network was pre-trained on the same dataset used for training the generative model. The target variables corresponding to the coordinates of the maximum number of photons were calculated before the training as a data preprocessing step. The model was pretrained with mean squared error loss.
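A sketch of how such a regressor term can enter the generator loss is given below; the mixing weight and the exact adversarial loss are assumptions, since the excerpt only states that the regressor loss is added to the generator loss.

```python
import tensorflow as tf

# Sketch of the extra generator-loss term from the auxiliary regressor: the frozen,
# pre-trained regressor predicts the (row, col) of the brightest pixel, and the
# generator is penalised when its fakes put that maximum in the wrong place.
# `aux_weight` is an assumption; the excerpt does not state how the terms are mixed.
def max_coordinates(images):
    """(batch, 44, 44, 1) -> (batch, 2) coordinates of each image's maximum."""
    flat = tf.reshape(images, (tf.shape(images)[0], -1))
    idx = tf.argmax(flat, axis=1)
    return tf.cast(tf.stack([idx // 44, idx % 44], axis=1), tf.float32)

def generator_loss(disc_on_fake, fake_images, real_images, regressor, aux_weight=1.0):
    adv = tf.keras.losses.BinaryCrossentropy(from_logits=False)(
        tf.ones_like(disc_on_fake), disc_on_fake)
    aux = tf.reduce_mean(
        tf.square(regressor(fake_images) - max_coordinates(real_images)))
    return adv + aux_weight * aux
```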
#### Postprocessing
Contrary to many generative machine learning applications simulating High Energy Physics experiments provides an objective way to directly measure the quality of the generated samples. Motivated by this observation we introduce a simple, yet effective postprocessing step.
To improve the alignment of the distribution of generated samples and the original simulations, we multiply the output of the generative model by a constant \(c\). Next, we calculate the Wasserstein distance between channels of the original and generative simulation on the training subset, as described in Sec. Results. We find the optimal value of \(c\) by searching the parameter space between 0.9 and 1.1. We obtained the lowest Wasserstein distance for \(c=0.96\) for the GAN with the auxiliary regressor and \(c=1.03\) for the standard GAN.
Additionally, we find a value for the standard deviation of the input noise vector that minimizes the Wasserstein distance between the original and the generative simulation. In our setup increasing the randomness of the input noise vectors helps to smooth the distribution of the generated results. We achieve the lowest Wasserstein distance for _N_(\(\mu=0,\sigma=3\)).
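A minimal sketch of this scale search is given below. The split of the 44\(\times\)44 image into 5 channels is a placeholder assumption here (four quadrants plus the total); the actual tower-to-fibre mapping from the detector specification should be used instead.

```python
import numpy as np
from scipy.stats import wasserstein_distance

# Sketch of the postprocessing search: rescale generated responses by a constant c
# and keep the c in [0.9, 1.1] that minimises the summed Wasserstein distance
# between per-channel distributions of real and generated showers.
def channels(images):
    # Placeholder channel definition: four quadrant sums plus the total sum.
    half = images.shape[1] // 2
    towers = [images[:, :half, :half], images[:, :half, half:],
              images[:, half:, :half], images[:, half:, half:]]
    return [t.sum(axis=(1, 2)) for t in towers] + [images.sum(axis=(1, 2))]

def best_scale(real, fake, grid=np.linspace(0.9, 1.1, 41)):
    real_ch = channels(real)
    scores = []
    for c in grid:
        fake_ch = channels(c * fake)
        scores.append(sum(wasserstein_distance(r, f)
                          for r, f in zip(real_ch, fake_ch)))
    return grid[int(np.argmin(scores))], min(scores)

real = np.random.exponential(1.0, size=(1000, 44, 44))
fake = 1.05 * np.random.exponential(1.0, size=(1000, 44, 44))
print(best_scale(real, fake))   # should pick a c close to 1/1.05
```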
## 5 Results
To evaluate the performance of the particle classifier we use well-established classification metrics - precision, recall, accuracy and F1-score. For generative models, the most common method for evaluation utilizes Frechet Inception Distance (FID) [9]. However, to measure the quality of the simulation we propose a domain-specific evaluation scheme, as introduced in Sec. Zero Degree Calorimeter simulation. Following the calorimeter's specification [5] we base our evaluation procedure on 5 channels calculated from the pixels of generated images that correspond to the 5 optic fibre towers described in Sec. Zero Degree Calorimeter simulation. To measure the quality of the simulation we compare the distribution of channels for the original and generated data using Wasserstein distance [17]. Those channels represent the physical properties of the simulated particle showers and are used in the analysis of the calorimeter's output.
We present the results of our experiments using both qualitative and quantitative comparisons. In Tab. 1 we demonstrate that even using a relatively simple neural network we can successfully filter particles causing empty ZDC responses. In Fig. 5 we show examples of simulated calorimeter responses for different generative models. As demonstrated in Tab. 2, our approach outperforms other
Figure 4: DC-GAN with auxiliary regressor.
solutions on the ZDC dataset. Expanding the GAN architecture by an auxiliary regressor and introducing the postprocessing steps allows the GAN model to outperform VAE while producing visually sound calorimeter responses. The positive impact of this approach on the distribution of the generated samples is further confirmed by Fig. 6 where we compare channel distribution for all competing approaches for 2 selected channels. Applying the postprocessing increases the fidelity of the simulation by smoothing the distribution of generated responses and covering the whole range of possible outputs.
## 6 Conclusions
In this work, we apply generative machine learning models to the task of simulating the response of the Zero Degree Calorimeter from the ALICE experiment at CERN. We evaluate a basic VAE and DC-GAN as baselines and apply two improvements to the GAN architecture. Simulating HEP experiments requires the generative models to follow the strict physical properties of the modelled process. However, this domain also offers new possibilities to improve the performance of the generative models and provides objective evaluation metrics. The developed generative machine learning models offer a cost-efficient alternative to Monte-Carlo-based simulations, achieving a speed-up of 2 orders of magnitude.
\begin{table}
\begin{tabular}{l c} \hline \hline & Wasserstein \(\downarrow\) \\ \hline Real & - \\ VAE & 6.45 \\ DC-GAN & 8.25 \\ DC-GAN + auxreg & 7.20 \\ DC-GAN + postproc & 5.71 \\ DC-GAN + auxreg + postproc & 5.16 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Results comparison for the HEP datasets. The VAE model performs better than GAN networks, even after introducing an auxiliary regressor to the GAN network. Applying the postprocessing step has a major impact on lowering the Wasserstein distance and allows GAN to outperform VAE. The GAN model with the auxiliary regressor and added postprocessing step achieves the lowest Wasserstein distance between channels calculated from the original and generated data.
\begin{table}
\begin{tabular}{l c c c c} \hline \hline & precision \(\uparrow\) & recall \(\uparrow\) & F1 score \(\uparrow\) & accuracy \(\uparrow\) \\ \hline zero & 0.96 & 0.95 & 0.96 & 0.95 \\ non-zero & 0.93 & 0.95 & 0.94 & 0.95 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Results for the binary particle classifier. We achieve high values for all evaluation metrics despite using a relatively simple neural network architecture.
## Acknowledgments
This research was funded by the National Science Centre, Poland, grant no 2020/39/O/ST6/01478, grant no 2018/31/N/ST6/02374 and grant no 2020/39/B/ST6/01511.
Figure 5: Examples of calorimeter response simulations with different methods. Samples generated from the VAE reproduce the exact locations of the particle showers, but they appear to be blurred. Although the results from DC-GAN are visually consistent with the original data, the model was not able to properly capture relations from conditional values. Adding an auxiliary regressor to the GAN architecture improves the method in terms of reconstructing the position of the shower’s centre.
|
2305.05534
|
Integrating Holistic and Local Information to Estimate Emotional
Reaction Intensity
|
Video-based Emotional Reaction Intensity (ERI) estimation measures the
intensity of subjects' reactions to stimuli along several emotional dimensions
from videos of the subject as they view the stimuli. We propose a multi-modal
architecture for video-based ERI combining video and audio information. Video
input is encoded spatially first, frame-by-frame, combining features encoding
holistic aspects of the subjects' facial expressions and features encoding
spatially localized aspects of their expressions. Input is then combined across
time: from frame-to-frame using gated recurrent units (GRUs), then globally by
a transformer. We handle variable video length with a regression token that
accumulates information from all frames into a fixed-dimensional vector
independent of video length. Audio information is handled similarly: spectral
information extracted within each frame is integrated across time by a cascade
of GRUs and a transformer with regression token. The video and audio regression
tokens' outputs are merged by concatenation, then input to a final fully
connected layer producing intensity estimates. Our architecture achieved
excellent performance on the Hume-Reaction dataset in the ERI Estimation
Challenge of the Fifth Competition on Affective Behavior Analysis in-the-Wild
(ABAW5). The Pearson Correlation Coefficients between estimated and subject
self-reported scores, averaged across all emotions, were 0.455 on the
validation dataset and 0.4547 on the test dataset, well above the baselines.
The transformer's self-attention mechanism enables our architecture to focus on
the most critical video frames regardless of length. Ablation experiments
establish the advantages of combining holistic/local features and of
multi-modal integration. Code available at https://github.com/HKUST-NISL/ABAW5.
|
Yini Fang, Liang Wu, Frederic Jumelle, Bertram Shi
|
2023-05-09T15:28:24Z
|
http://arxiv.org/abs/2305.05534v1
|
# Integrating Holistic and Local Information to Estimate Emotional Reaction Intensity
###### Abstract
Video-based Emotional Reaction Intensity (ERI) estimation measures the intensity of subjects' reactions to stimuli along several emotional dimensions from videos of the subject as they view the stimuli. We propose a multi-modal architecture for video-based ERI combining video and audio information. Video input is encoded spatially first, frame-by-frame, combining features encoding holistic aspects of the subjects' facial expressions and features encoding spatially localized aspects of their expressions. Input is then combined across time: from frame-to-frame using gated recurrent units (GRUs), then globally by a transformer. We handle variable video length with a regression token that accumulates information from all frames into a fixed-dimensional vector independent of video length. Audio information is handled similarly: spectral information extracted within each frame is integrated across time by a cascade of GRUs and a transformer with regression token. The video and audio regression tokens' outputs are merged by concatenation, then input to a final fully connected layer producing intensity estimates. Our architecture achieved excellent performance on the Hume-Reaction dataset in the ERI Estimation Challenge of the Fifth Competition on Affective Behavior Analysis in-the-Wild (ABAW5). The Pearson Correlation Coefficients between estimated and subject self-reported scores, averaged across all emotions, were 0.455 on the validation dataset and 0.4547 on the test dataset, well above the baselines. The transformer's self-attention mechanism enables our architecture to focus on the most critical video frames regardless of length. Ablation experiments establish the advantages of combining holistic/local features and of multi-modal integration. Code available at [https://github.com/HKUST-NISL/ABAW5](https://github.com/HKUST-NISL/ABAW5).
## 1 Introduction
Attention to human affective behaviour analysis has increased in recent years, due to the multitude of its applications to fields such as robotics, human computer interaction (HCI) and psychology. The fifth Competition on Affective Behaviour Analysis in-the-wild (ABAW5) focuses on affective behaviour analysis in-the-wild. Solutions to its proposed challenges can be used to create systems that can understand people's feelings, emotions and behaviours, as well as machines and robots that can serve as 'human-centered' digital assistants.
The Emotional Reaction Intensity (ERI) Estimation Challenge in ABAW5 addresses one of the contemporary affective computing problems using the Hume-Reaction dataset [5], where subjects react to a wide range of emotional video stimuli while being observed by a webcam in their homes. After viewing each video, subjects self-annotate the intensity (over a range from 1 to 100) of their emotional reactions to it along seven dimensions (Adoration, Amusement, Anxiety, Disgust, Empathetic Pain, Fear, and Surprise).
Many previous approaches to video-based ERI estimation follow a similar pattern for representing video information, using features taken from among the final layers of a deep network that has been pre-trained on a large dataset, such as AffectNet [12]. AffectNet is annotated for holistic judgements of emotional facial expressions: categorization into one of eight emotion classes (neutral, happy, angry, sad, fear, surprise, disgust, contempt) and estimates of the intensity of valence and arousal. Thus, features extracted from later layers encode holistic judgements made by integrating information across the entire face. We hypothesize that such features may not encode more subtle, spatially localized changes in facial geometry that may be important for estimating reaction intensity when subjects are simply viewing stimuli by themselves.
One way of labelling spatially localized changes is the Facial Action Coding System [7], which defines a set of action units (AUs) that taxonomizes the configuration of small groups of human facial muscles by their appearance on the face. Action units are spatially localized. For example, facial units include "inner brow raiser", "chin raiser", "lip corner puller", etc. There are 28 main AUs, but most databases are annotated for only a subset (e.g., 12 or 17) of these. Annotations can either be binary (indicating occurrence) or continuous (indicating intensity). As AUs are spatially localized, we hypothesize that they may encode information that is complementary to that encoded by later layers of a deep net trained on AffectNet, yet that is critical for ERI.
There are two main issues we address here in estimating a single emotional intensity along each emotional dimension in response to an entire video. First, videos can be of varying length. For example, in the Hume-Reaction dataset, videos range in length from 9.9 seconds to 14.98 seconds. Second, many of the frames within the video are images of the subject with a neutral expression, as often only isolated moments in the video evoke changes in expression. Unfortunately, these moments are not known in advance. The unpredictability and sparsity of these events suggest that some common methods for extracting fixed length representations of a variable length video for input to a final classifier, such as randomly or uniformly sampling a fixed number of frames, or average pooling across all frames in the video, are not appropriate, as they may miss or dilute the impact of critical frames. We address this problem through the use of a transformer architecture, where each frame corresponds to an input token, but we also add a regression token similar to the class token used in the Visual Transformer [6], which gathers information from all frames with weighting by an attention value. Using the output of the regression token as input to the final classification stage ensures a fixed dimensional representation that can be computed by combining information from all frames of a variable length video. The attention weighting avoids the problem of dilution by many frames containing neutral expressions, which might be problematic, especially in longer videos.
This paper proposes a multimodal architecture for ERI estimation that includes the two approaches outlined above. Our architecture uses both video and audio features, extracted from pre-trained networks (for video) or handcrafted algorithms (for audio). To the best of our knowledge, this is the first time that separate representations of holistic (AffectNet based) and local (AU-based) visual features have been combined for ERI estimation. Temporal information is extracted in two stages, first by a GRU, which operates sequentially from video or audio frame-to-frame, and second by a transformer with regression token, which combines information globally and in parallel. Our network adopts a late fusion architecture, where video and audio information is combined after separate temporal aggregation stages, just before the final classification layer. However, our architecture can be easily modified for early fusion, a promising direction for future development. Our architecture achieved excellent performance compared to baseline, outperforming the multimodal baseline by 91.02% on validation set and by 124.1% on test set.
## 2 Related Work
Emotional Reaction Intensity Estimation with the same dataset has also been presented in a Hume-Reaction MuSe 2022 [5] sub-challenge. The FaceRNET [10] was the best-performing model. It uses a CNN-RNN model to capture spatial-temporal correlations of all the frames in the video, with a routing mechanism on top. Wang et al. [14] proposed a spatiotemporal transformer architecture for dynamic facial representation learning and a multi-label graph convolutional network for emotion dependency modelling. ViPER [13] leverages the video's multimodal nature, and proposes a transformer-based model to combine video frames, audio recordings, and generated textual annotations. These works neglect the importance of Action Unit features in ERI estimation, and can be further improved by incorporating AU occurrence and intensity. The Hume-Reaction MuSe 2022 challenge also reported results from a baseline system that used Py-Feat [9] to detect the occurrence of 20 different AUs in each frame, and used these 20 binary values as the input to a Long Short-Term Memory (LSTM) RNN. However, they did not investigate the addition of other visual features besides AUs, as we describe here.
## 3 Problem Formulation
We denote by \(\mathbb{V}\) and \(\mathbb{A}\) the visual and audio stream of a reaction video, and by \(\mathbb{Y}=\{y_{1},...,y_{7}\}\) seven emotional reaction intensities, representing Adoration, Amusement, Anxiety, Disgust, Empathic Pain, Fear, and Surprise, respectively.
Our objective is to predict 7 emotional reaction intensities \(\mathbb{X}=\{x_{1},...,x_{7}\}\) given \(\mathbb{V}\) and \(\mathbb{A}\) using our proposed architecture \(\mathcal{M}\), i.e.,
\[\mathbb{X}=\mathcal{M}(\mathbb{V},\mathbb{A}), \tag{1}\]
so that the loss \(\mathcal{L}(\mathbb{X},\mathbb{Y})\) is minimal.
In the next section, we will explain \(\mathcal{M}\) and \(\mathcal{L}\) in detail.
## 4 Methodology
Figure 1 shows our system's architecture. Given a video, our system outputs 7 emotional reaction intensities. Our architecture is dual stream: including a video stream and an audio stream. The video-stream is further separated into
dual streams: one for holistic feature extraction (a ResNet18 network trained on AffectNet) and one for local feature extraction (AU detection/intensity estimation by OpenFace). Dual stream features are combined by concatenation before being input to the next stage. In the video stream, we extract information spatially for each frame first, then integrate across time by a cascade of a Gated Recurrent Unit (GRU) and a transformer encoder. In the audio stream, we extract spectral information from frames of length \(\sim\)30ms, then follow a similar temporal integration architecture as used in the video stream. Finally, we fuse visual and audio features by concatenation. A fully connected layer merges multimodal information to output seven emotional reaction intensities. We will elaborate these steps in details in the following.
### Visual Feature Extraction
Data PreprocessingTo eliminate variability due to head motion, which interferes with facial expression recognition, we apply face alignment using the OpenFace Toolkit [3], which detects 68 facial landmarks. We use them to align the face by linear warping followed by cropping of the face region. Then we resize the cropped face images to \(224\times 224\) pixel resolution.
Holistic Spatial FeaturesWe use a ResNet18 network [8] pretrained on AffectNet [12] to extract holistic information about the general overall impression of the face. AffectNet is a large facial expression dataset consisting of 0.4 million images, designed for supervised facial analysis tasks. We use the 512-dimensional feature vector immediately before the final classification layer, which contains holistic information about the facial expression.
Local Spatial FeaturesWe use the OpenFace AU detection model [2] to extract both the occurrence and intensity of 17 AUs (e.g., AU1, AU2, AU4, AU6, AU7, AU10, AU12, AU14, AU15, AU17, AU23, AU25). This results in a 34-dimensional AU feature vector for each frame. This model is based on appearance (Histograms of Oriented Gradients) and geometry features (shape parameters and landmark locations). They propose a person-specific normalization approach, which allows the model to generalize well on other datasets.
Visual Feature FusionWe fuse holistic and local spatial features by concatenation, resulting in a 546-dimensional visual feature vector.
### Audio Feature Extraction
Mel-frequency Cepstral Coefficients (MFCC), the overall shape of the spectral envelope, have been widely used in speech recognition tasks. We extract them from \(\sim\)30ms frames sampled at 16kHz and with a stride of \(\sim\)16ms with the Python Librosa toolkit and use them as the basis of our audio feature representation. From a set of 32 MFCCs per frame, we obtain a sequence of 1024-dimensional audio feature vectors by combining features from 32 adjacent frames.
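A sketch of this audio front end is shown below. The exact `n_fft`/`hop_length` values are illustrative choices consistent with ~30 ms frames and a ~16 ms stride at 16 kHz; they are not prescribed by the text.

```python
# Hedged sketch of the audio feature extraction: 32 MFCCs per ~30 ms frame,
# with 32 adjacent frames stacked into one 1024-dimensional vector.
import librosa
import numpy as np

def audio_features(path, n_mfcc=32, group=32):
    y, sr = librosa.load(path, sr=16000)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc,
                                n_fft=512, hop_length=256)   # (n_mfcc, n_frames)
    n_frames = mfcc.shape[1] // group * group
    # Stack each block of 32 consecutive frames into a single 1024-dim vector.
    return mfcc[:, :n_frames].T.reshape(-1, group * n_mfcc)
```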
### Temporal Integration
Each branch cascades a GRU and a transformer encoder to integrate information from a variable number of frames per video into a final fixed-dimensional feature vector, which is used for the final prediction.
GRUThe GRU operates sequentially, frame to frame, to capture short- and long-term temporal correlations in the video.
Figure 1: Our architecture consists of two streams: a video stream and an audio stream. The visual feature includes a holistic feature from a pretrained ResNet18 and a local feature from AU detection/intensity estimation. The audio feature is extracted by MFCC. Each stream comprises a GRU and a transformer encoder to integrate information temporally. A learnable regression token is added to the input to the transformer encoder. The regression tokens from the two streams are concatenated and fed into a fully connected layer and a Sigmoid for the final prediction. “FC” stands for Fully Connected Layer. “Concat.” stands for concatenation.
Transformer EncoderA Transformer encoder stacks several blocks. Each block has the same architecture, containing a multi-head attention mechanism followed by a fully-connected feed-forward network. Each frame corresponds to one token. We also add a regression token, with a set of learned embedding parameters. Through the self-attention mechanism, the regression token gathers information from the frames in the video. Only the regression token output after the last stage of the transformer encoder is passed to the next stage.
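The sketch below illustrates one temporal-integration branch (GRU followed by a transformer encoder with a learnable regression token). The 2-layer GRU with hidden size 256 and the 4 blocks/4 heads follow the implementation settings reported later; the unidirectional GRU and the default feed-forward width are assumptions.

```python
# Hedged sketch of a temporal-integration branch with a regression token.
import torch
import torch.nn as nn

class TemporalBranch(nn.Module):
    def __init__(self, in_dim, hidden=256, heads=4, blocks=4, dropout=0.2):
        super().__init__()
        self.gru = nn.GRU(in_dim, hidden, num_layers=2, batch_first=True)
        self.reg_token = nn.Parameter(torch.zeros(1, 1, hidden))
        layer = nn.TransformerEncoderLayer(d_model=hidden, nhead=heads,
                                           dropout=dropout, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=blocks)

    def forward(self, frames):                        # (B, T, in_dim), T may vary
        h, _ = self.gru(frames)                       # (B, T, hidden)
        tok = self.reg_token.expand(h.size(0), -1, -1)
        h = self.encoder(torch.cat([tok, h], dim=1))  # prepend the regression token
        return h[:, 0]                                # fixed-size summary vector
```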
Readout functionWe input the concatenation of the video and audio regression token outputs into the final readout layer, consisting of a fully connected linear layer followed by a logistic sigmoid output layer, which restricts the output into [0, 1]. For each video sample \(n\), our model outputs 7 scores \(\mathbb{X}^{n}=\{x_{1}^{n},...,x_{7}^{n}\}\).
### Training
We freeze the weights in ResNet18 and AU detection model for the sake of training speed. We train the rest of the model using L2 loss:
\[\mathcal{L}(\mathbb{X},\mathbb{Y})=\frac{1}{7\times N}\sum_{i=1}^{7}\sum_{n=1}^ {N}(x_{i}^{n}-y_{i}^{n})^{2}, \tag{2}\]
where \(N\) is the total number of video samples.
## 5 Experimental Results
### Dataset
The Hume-Reaction dataset consists of 25,067 videos from 2,222 subjects. Subjects are from two cultures, 1,084 from South Africa and 1,138 from the United States, aged from 18.5 to 49.0 years old. The reaction videos vary in terms of resolution, FPS, background, and lighting and noise conditions. The average video length is 11.62 seconds, and the average number of frames per video is 248.8 frames.
The dataset is partitioned into three sets: 15,806 for training set, 4,657 for validation set, and 4,604 for testing set.
### Evaluation Metric
Pearson's correlations coefficient (PCC) is used for the evaluation metric. We calculate the average PCC across the 7 emotional reactions intensities:
\[\mathcal{P}_{ERI}=\frac{\sum_{i=1}^{7}\rho_{i}}{7}. \tag{3}\]
And for each emotion, \(\rho_{i}\) is defined as:
\[\rho_{i}=\frac{\sum_{n=1}^{N}(x_{i}^{n}-\bar{x}_{i})(y_{i}^{n}-\bar{y}_{i})}{ \sqrt{\sum_{n=1}^{N}(x_{i}^{n}-\bar{x}_{i})^{2}}\sqrt{\sum_{n=1}^{N}(y_{i}^{n} -\bar{y}_{i})^{2}}}, \tag{4}\]
where \(\bar{x}\) and \(\bar{y}\) are the mean of predictions and labels.
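The evaluation metric of Eqs. (3)-(4) can be computed as in the following sketch.

```python
# Pearson correlation per emotion, averaged over the 7 dimensions (Eqs. (3)-(4)).
import numpy as np

def eri_pcc(preds, labels):
    """preds, labels: arrays of shape (N, 7)."""
    rhos = []
    for i in range(preds.shape[1]):
        x, y = preds[:, i], labels[:, i]
        xc, yc = x - x.mean(), y - y.mean()
        rhos.append((xc * yc).sum() /
                    (np.sqrt((xc ** 2).sum()) * np.sqrt((yc ** 2).sum())))
    return float(np.mean(rhos))
```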
### Baselines
We compare our model with the work that we have mentioned in Sec. 2 and the provided audio/visual baselines. The audio baseline uses DEEPSPECTRUM [1] to extract features from the audio signal. The visual baseline uses the feature vector from the last layer of a ResNet50 [8] trained on VGGface2 [4]. Then LSTM-RNN is used to process these features and output the prediction.
### Implementation Settings
We use a 2-layer GRU with the size of a hidden layer being 256. The number of transformer encoder blocks and the heads in the multi-head attention layer are both set to 4. A dropout probability of 0.2 was adopted for the transformer encoder. All our models are implemented using PyTorch and trained on a single GeForce 3070 GPU.
For the training parameters, we set an initial learning rate of 1e-4, which decays every 10 epochs by a factor of 0.5. AdamW with a weight decay of 0.5 is used as the optimizer.
Due to the varying quality and noise of the videos, no face is detected in about 1% of frames. We discard these frames. We also discard videos that have fewer than 50 valid frames during training. During testing, the prediction for such an invalid video is set to the average of the seven scores over the other valid videos, in order to remove its impact on the PCC metrics.
### Results
Table 1 shows the performance comparison on the validation and test sets. Our model achieved 0.455 on the validation set and 0.4547 on the test set, surpassing all the existing methods in the table on both sets. Compared to the multimodal baseline, our model outperforms it by 91.02% on the validation set and by 124.1% on the test set, demonstrating the effectiveness of adding AU features.
Figure 2 shows the attention from the regression token to each frame of one long video sample and one short video sample in the validation set. The peaks in the attention curves show that the attention mechanism can identify frames that have the most informative expression changes. The peaks are quite sparse in the video, indicating the importance of using all frames for the estimation, and the ability of the network to avoid dilution of the information in key frames by neutral expressions present in most of the frames.
## 6 Ablation Study
To study the effect of different types of features (e.g., holistic ResNet18 feature, local AU feature, and audio feature), we conducted experiments with different combinations on the validation set, shown in Table 2. Using all the
features achieved the best performance. We can also see that incorporating AU features lead to a boost in performance.
We also compared the effect of different types of AUs in Table 3. We can see that using only AU intensity is slightly better than using only occurrence, while combining both of them yields the best outcome.
## 7 Conclusion
This paper describes a multimodal architecture for Emotional Reaction Intensity estimation. Our results demonstrate the efficacy of including both holistic and spatially localized information from face images for this task. Our use of a transformer encoder with a regression token enables the network to handle videos with varying length in a consistent manner and to handle the sparsity of frames indicating emotional reactions within both long and short videos. Our experimental results indicate that the performance of our architecture significantly outperforms the baselines. We also conducted ablation experiments to verify the importance of multiple visual cues and multiple sensory modalities. Our code implementation is available at [https://github.com/HKUST-NISL/ABAW5](https://github.com/HKUST-NISL/ABAW5).
|
2306.03415
|
Efficient and Interpretable Compressive Text Summarisation with
Unsupervised Dual-Agent Reinforcement Learning
|
Recently, compressive text summarisation offers a balance between the
conciseness issue of extractive summarisation and the factual hallucination
issue of abstractive summarisation. However, most existing compressive
summarisation methods are supervised, relying on the expensive effort of
creating a new training dataset with corresponding compressive summaries. In
this paper, we propose an efficient and interpretable compressive summarisation
method that utilises unsupervised dual-agent reinforcement learning to optimise
a summary's semantic coverage and fluency by simulating human judgment on
summarisation quality. Our model consists of an extractor agent and a
compressor agent, and both agents have a multi-head attentional pointer-based
structure. The extractor agent first chooses salient sentences from a document,
and then the compressor agent compresses these extracted sentences by selecting
salient words to form a summary without using reference summaries to compute
the summary reward. To our best knowledge, this is the first work on
unsupervised compressive summarisation. Experimental results on three widely
used datasets (e.g., Newsroom, CNN/DM, and XSum) show that our model achieves
promising performance and a significant improvement on Newsroom in terms of the
ROUGE metric, as well as interpretability of semantic coverage of summarisation
results.
|
Peggy Tang, Junbin Gao, Lei Zhang, Zhiyong Wang
|
2023-06-06T05:30:49Z
|
http://arxiv.org/abs/2306.03415v1
|
Efficient and Interpretable Compressive Text Summarisation with Unsupervised Dual-Agent Reinforcement Learning
###### Abstract
Recently, compressive text summarisation offers a balance between the conciseness issue of extractive summarisation and the factual hallucination issue of abstractive summarisation. However, most existing compressive summarisation methods are supervised, relying on the expensive effort of creating a new training dataset with corresponding compressive summaries. In this paper, we propose an efficient and interpretable compressive summarisation method that utilises unsupervised dual-agent reinforcement learning to optimise a summary's semantic coverage and fluency by simulating human judgment on summarisation quality. Our model consists of an extractor agent and a compressor agent, and both agents have a multi-head attentional pointer-based structure. The extractor agent first chooses salient sentences from a document, and then the compressor agent compresses these extracted sentences by selecting salient words to form a summary without using reference summaries to compute the summary reward. To our best knowledge, this is the first work on unsupervised compressive summarisation. Experimental results on three widely used datasets (e.g., Newsroom, CNN/DM, and XSum) show that our model achieves promising performance and a significant improvement on Newsroom in terms of the ROUGE metric, as well as interpretability of semantic coverage of summarisation results. 1
Footnote 1: Our source code is publicly available for research purposes at [https://github.com/peggyytang/URLComSum/](https://github.com/peggyytang/URLComSum/)
## 1 Introduction
Most existing works on neural text summarisation are extractive, abstractive, and compressive-based. Extractive methods select salient sentences from a document to form its summary and ensure the production of grammatically and factually correct summaries. These methods usually follow the sentence ranking conceptualisation Narayan et al. (2018); Liu and Lapata (2019); Zhong et al. (2020). The supervised models commonly rely on creating proxy extractive training labels for training Nallapati et al. (2017); Jia et al. (2021); Mao et al. (2022); Klaus et al. (2022), which can be noisy and may not be reliable. Various unsupervised methods Zheng and Lapata (2019); Xu et al. (2020); Padmakumar and He (2021); Liu et al. (2021) were proposed to leverage pre-trained language models to compute sentence similarities and select important sentences. Although these methods have significantly improved summarisation performance, the redundant information that appears in the salient sentences may not be minimized effectively.
Abstractive methods formulate the task as a sequence-to-sequence generation task, with the document as the input sequence and the summary as the output sequence See et al. (2017); Zhang et al. (2020); Wang et al. (2021); Liu et al. (2022) As supervised learning with ground-truth summaries may not provide useful insights on human judgment approximation, reinforcement training was proposed to optimise the ROUGE metric Parnell et al. (2021), and to fine-tune a pre-trained language model Laban et al. (2020). Prior studies showed that these generative models are highly prone to external hallucination Maynez et al. (2020).
Compressive summarisation is a recent approach which aims to select words, instead of sentences, from an input document to form a summary, which improves the factuality and conciseness of a summary. The formulation of compressive document summarisation is usually a two-stage extract-then
Figure 1: Illustration of our proposed URLComSum.
compress approach Zhang et al. (2018); Mendes et al. (2019); Xu and Durrett (2019); Desai et al. (2020): it first extracts salient sentences from a document, then compresses the extracted sentences to form its summary. Most of these methods are supervised, which require a parallel dataset with document-summary pairs to train. However, the ground-truth summaries of existing datasets are usually abstractive-based and do not contain supervision information needed for extractive summarisation or compressive summarisation Xu and Durrett (2019); Mendes et al. (2019); Desai et al. (2020).
Therefore, to address these limitations, we propose a novel unsupervised compressive summarisation method with a dual-agent reinforcement learning strategy to mimic human judgment, namely URLComSum. As illustrated in Figure 1, URLComSum consists of two modules, an extractor agent and a compressor agent. We model the sentence and word representations using an efficient Bi-LSTM Graves and Schmidhuber (2005) with multi-head attention Vaswani et al. (2017) to capture both the long-range dependencies and the relationship between each word and each sentence. We use a pointer network Vinyals et al. (2015) to find the optimal subset of sentences and words to be extracted since the Pointer Network is well-known for tackling combinatorial optimization problems. The extractor agent uses a hierarchical multi-head attentional Bi-LSTM model for learning the sentence representation of the input document and a pointer network for extracting the salient sentences of a document given a length budget. To further compress these extracted sentences all together, the compressor agent uses a multi-head attentional Bi-LSTM model for learning the word representation and a pointer network for selecting the words to assemble a summary.
As an unsupervised method, URLComSum does not require a parallel training dataset. We propose an unsupervised reinforcement learning training procedure to mimic human judgment: to reward the model that achieves high summary quality in terms of semantic coverage and language fluency. Inspired by Word Mover's Distance Kusner et al. (2015), the semantic coverage reward is measured by the Wasserstein distance Peyre et al. (2019) between the semantic distribution of the document and that of the summary. The fluency reward is measured by the Syntactic Log-Odds Ratio (SLOR) Pauls and Klein (2012). SLOR is a referenceless fluency evaluation metric, which is effective in sentence compression Kann et al. (2018) and has a better correlation to human acceptability judgments Lau et al. (2017).
The key contributions of this paper are:
* We propose the first unsupervised compressive summarisation method with dual-agent reinforcement learning, namely URLComSum.
* We design an efficient and interpretable multi-head attentional pointer-based neural network for learning the representation and for extracting salient sentences and words.
* We propose to mimic human judgment by optimising summary quality in terms of the semantic coverage reward, measured by Wasserstein distance, and the fluency reward, measured by Syntactic Log-Odds Ratio (SLOR).
* Comprehensive experimental results on three widely used datasets, including CNN/DM, XSum, Newsroom, demonstrate that URLComSum achieves great performance.
## 2 Related Work
Most of the existing works on neural text summarisation are extractive, abstractive, and compressive-based.
### Extractive Methods
Extractive methods usually follow the sentence ranking conceptualisation, and an encoder-decoder scheme is generally adopted. An encoder formulates document or sentence representations, and a decoder predicts extraction classification labels. The supervised models commonly rely on creating proxy extractive training labels for training Cheng and Lapata (2016); Nallapati et al. (2017); Jia et al. (2021), which can be noisy and may not be reliable. Some methods were proposed to tackle this issue by training with reinforcement learning Narayan et al. (2018); Luo et al. (2019) to optimise the ROUGE metric directly. Various unsupervised methods Zheng and Lapata (2019); Xu et al. (2020); Padmakumar and He (2021) were also proposed to leverage pre-trained language models to compute sentence similarities and select important sentences. Although these methods have significantly improved summarisation performance, since the entire sentences are extracted individually, the
redundant information that appears in the salient sentences may not be minimized effectively.
### Abstractive Methods
Abstractive methods formulate text summarisation as a sequence-to-sequence generation task, with the source document as the input sequence and the summary as the output sequence. Most existing methods follow the supervised RNN-based encoder-decoder framework (See et al., 2017; Zhang et al., 2020; Wang et al., 2021; Liu et al., 2022). As supervised learning with ground-truth summaries may not provide useful insights on human judgment approximation, reinforcement training was proposed to optimise the ROUGE metric (Paulus et al., 2018; Parnell et al., 2021), and to fine-tune a pre-trained language model (Laban et al., 2020). These models naturally learn to integrate knowledge from the training data while generating an abstractive summary. Prior studies showed that these generative models are highly prone to external hallucination, thus may generate contents that are unfaithful to the original document (Maynez et al., 2020).
### Compressive Methods
Compressive methods select words from a given document to assemble a summary. Due to the lack of training datasets, works on compressive summarisation have emerged only recently (Zhang et al., 2018; Mendes et al., 2019; Xu and Durrett, 2019; Desai et al., 2020). The formulation of compressive document summarisation is usually a two-stage extract-then-compress approach: it first extracts salient sentences from a document, then compresses the extracted sentences to form its summary. Most of these methods are supervised, which require a parallel dataset with document-summary pairs to train. However, the ground-truth summaries of existing datasets are usually abstractive-based and do not contain supervision information needed for extractive summarisation or compressive summarisation. Several reinforcement learning based methods (Zhang et al., 2018) use existing abstractive-based datasets for training, which is not aligned for compression. Note that existing compressors often perform compression sentence by sentence. As a result, the duplicated information among multiple sentences could be overlooked. Therefore, to address these limitations, we propose a novel unsupervised compressive method by exploring the dual-agent reinforcement learning strategy to mimic human judgment and perform text compression instead of sentence compression.
## 3 Methodology
As shown in Figure 1, our proposed compressive summarisation method, namely URLComSum, consists of two components, an extractor agent and a compressor agent. Specifically, the extractor agent selects salient sentences from a document \(\mathbf{D}\) to form an extractive summary \(\mathbf{S_{E}}\), and then the compressor agent compresses \(\mathbf{S_{E}}\) by selecting words to assemble a compressive summary \(\mathbf{S_{C}}\).
### Extractor Agent
Given a document \(\mathbf{D}\) consisting of a sequence of \(M\) sentences \(\{\mathbf{s_{i}}|i=1,...,M\}\), and each sentence \(\mathbf{s_{i}}\) consisting of a sequence of \(N\) words \(\{\mathbf{w}\mathbf{e}_{ij}|j=1,...,N\}\)2, the extractor agent aims to produce an extractive summary \(\mathbf{S_{E}}\) by learning sentence representation and selecting \(L_{E}\) sentences from \(\mathbf{D}\). As illustrated in Figure 2, we design a hierarchical multi-head attentional sequential model for learning the sentence representations of the document and using a Pointer Network to extract sentences based on their representations.
Footnote 2: We have pre-fixed the length of each sentence and each document by padding.
#### 3.1.1 Hierarchical Sentence Representation
To model the local context of each sentence and the global context between sentences, we use two-levels Bi-LSTMs to model this hierarchical structure, one at the word level to encode the word sequence of each sentence, one at the sentence level to encode the sentence sequence of the document. To model the context-dependency of the importance of words and sentences, we apply two levels of multi-head attention mechanism (Vaswani et al., 2017), one at each of the two-level Bi-LSTMs.
Figure 2: Illustration of the extractor agent.
Given a sentence \(\mathbf{s}_{i}\), we encode its words into word embeddings \(\mathbf{xe}_{i}=\{\mathbf{xe}_{ij}|j=1,...,N\}\) by \(\mathbf{xe}_{ij}=Enc(\mathbf{we}_{ij})\), where \(Enc()\) denotes a word embedding lookup table. Then the sequence of word embeddings are fed into the word-level Bi-LSTM to produce an output representation of the words \(\mathbf{le}^{w}\):
\[\mathbf{le}_{ij}^{w}=\overleftarrow{\text{LSTM}}(\mathbf{xe}_{ij}),j\in[1,N]\,. \tag{1}\]
To utilize the multi-head attention mechanism to obtain \(\mathbf{ae}_{i}^{w}=\{\mathbf{ae}_{i1}^{w},...,\mathbf{ae}_{iN}^{w}\}\) at word level, we define \(Q_{i}=\mathbf{le}_{i}^{w}\), \(K_{i}=V_{i}=\mathbf{xe}_{i}\),
\[\mathbf{ae}_{i}^{w}=\text{MultiHead}(Q_{i},K_{i},V_{i})\,\,. \tag{2}\]
The concatenation of \(\mathbf{le}_{i}^{w}\) and \(\mathbf{ae}_{i}^{w}\) of the words are fed into a Bi-LSTM and the output is concatenated to obtain the local context representation \(\mathbf{he}_{i}^{ws}\) for each sentence \(\mathbf{s}_{i}\):
\[\begin{split}\mathbf{he}_{ij}^{w}=\overleftarrow{\text{LSTM}}(& [\mathbf{le}_{ij}^{w};\mathbf{ae}_{ij}^{w}]),j\in[1,N]\,\,\,,\\ &\mathbf{he}_{i}^{ws}=[\mathbf{he}_{i1}^{w},...,\mathbf{he}_{iN} ^{w}]\,\,\,.\end{split} \tag{3}\]
To further model the global context between sentences, we apply a similar structure at sentence level. \(\mathbf{he}^{ws}=\{\mathbf{he}_{i}^{ws}|i=1,...,M\}\) are fed into the sentence-level Bi-LSTM to produce output representation of the sentences \(\mathbf{le}^{s}\):
\[\mathbf{le}_{i}^{s}=\overleftarrow{\text{LSTM}}(\mathbf{he}_{i}^{ws}),i\in[1, M]\,\,\,. \tag{4}\]
To utilize the multi-head attention mechanism to obtain \(\mathbf{ae}^{s}=\{\mathbf{ae}_{1}^{s},...,\mathbf{ae}_{M}^{s}\}\) at sentence level, we define \(Q=\mathbf{le}^{s}\), \(K=V=\mathbf{he}^{ws}\),
\[\mathbf{ae}^{s}=\text{MultiHead}(Q,K,V). \tag{5}\]
The concatenation of the Bi-LSTM output \(\mathbf{le}^{s}\) and the multi-head attention output \(\mathbf{ae}^{s}\) of the sentences are fed into a Bi-LSTM to obtain the final representations of sentences \(\mathbf{he}^{s}=\{\mathbf{he}_{1}^{s},...,\mathbf{he}_{M}^{s}\}\):
\[\mathbf{he}_{i}^{s}=\overleftarrow{\text{LSTM}}([\mathbf{le}_{i}^{s};\mathbf{ ae}_{i}^{s}]),i\in[1,M]\,\,. \tag{6}\]
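A compact sketch of this encoder pattern is given below for the word level; the sentence-level encoder of Eqs. (4)-(6) repeats the same structure over sentence representations. The hidden size of 150 follows the implementation settings reported later, while the number of attention heads, the single-layer LSTMs, and the embedding dimension are illustrative assumptions.

```python
# Hedged sketch of the multi-head attentional Bi-LSTM encoder of Eqs. (1)-(3).
import torch
import torch.nn as nn

class AttentionalBiLSTM(nn.Module):
    def __init__(self, emb_dim=300, hidden=150, heads=4):
        super().__init__()
        self.lstm1 = nn.LSTM(emb_dim, hidden, bidirectional=True, batch_first=True)
        # Queries are the Bi-LSTM outputs; keys/values are the raw embeddings.
        self.attn = nn.MultiheadAttention(embed_dim=2 * hidden, num_heads=heads,
                                          kdim=emb_dim, vdim=emb_dim,
                                          batch_first=True)
        self.lstm2 = nn.LSTM(4 * hidden, hidden, bidirectional=True, batch_first=True)

    def forward(self, xe):                       # (B, N, emb_dim) word embeddings
        le, _ = self.lstm1(xe)                   # (B, N, 2*hidden), Eq. (1)
        ae, _ = self.attn(query=le, key=xe, value=xe)          # Eq. (2)
        he, _ = self.lstm2(torch.cat([le, ae], dim=-1))        # Eq. (3)
        return he                                # (B, N, 2*hidden)
```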
#### 3.1.2 Sentence-Level Extraction
Similar to [3], we use an LSTM-based Pointer Network to decode the above sentence representations \(\mathbf{he}^{s}=\{\mathbf{he}_{1}^{s},...,\mathbf{he}_{M}^{s}\}\) and extract sentences recurrently to form an extractive summary \(\mathbf{S}_{\mathbf{E}}=\{A_{1},...,A_{k},...,A_{L_{E}}\}\) with \(L_{E}\) sentences, where \(A_{k}\) denotes the \(k\)-th sentence extracted.
At the \(k\)-th time step, the pointer network receives the sentence representation of the previous extracted sentence and has hidden state \(de_{k}\). It first obtains a context vector \(de_{k}^{\prime}\) by attending to \(\mathbf{he}^{s}\):
\[\begin{split}\mathbf{ue}_{i}^{k}&=v^{T}\tanh(W_{1} \mathbf{he}_{i}^{s}+W_{2}de_{k}),i\in(1,...,M)\,\,,\\ \mathbf{ae}_{i}^{k}&=\text{softmax}(\mathbf{ue}_{i}^{k} ),i\in(1,...,M)\,\,,\\ & de_{k}^{\prime}=\sum_{i=1}^{M}\mathbf{ae}_{i}^{k}\mathbf{he}_{i} ^{s}\,\,,\end{split} \tag{7}\]
where \(v,W_{1},W_{2}\) are learnable parameters of the pointer network. Then it predicts the extraction probability \(p(A_{k})\) of a sentence:
\[\begin{split}& de_{k}\leftarrow\left[de_{k},de_{k}^{\prime}\right] \,\,,\\ &\mathbf{ue}_{i}^{k}=v^{T}\tanh(W_{1}\mathbf{he}_{i}^{s}+W_{2}de_{ k}),i\in(1,...,M)\,\,,\\ & p(A_{k}|A_{1},...,A_{k-1})=\text{softmax}(\mathbf{ue}^{k})\,\,. \end{split} \tag{8}\]
Decoding iterates until \(L_{E}\) sentences are selected to form \(S_{E}\).
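One decoding step of this pointer network can be sketched as follows; keeping separate parameter sets for the glimpse (Eq. (7)) and the pointer scores (Eq. (8)) is an implementation choice, since the equations reuse the same symbols.

```python
# Hedged sketch of a single pointer-network decoding step (Eqs. (7)-(8)).
import torch
import torch.nn as nn

class PointerStep(nn.Module):
    def __init__(self, dim):
        super().__init__()
        # Glimpse (context vector) attention, Eq. (7).
        self.W1g = nn.Linear(dim, dim, bias=False)
        self.W2g = nn.Linear(dim, dim, bias=False)
        self.vg = nn.Linear(dim, 1, bias=False)
        # Pointer attention over the concatenated state [d_k, d'_k], Eq. (8).
        self.W1p = nn.Linear(dim, dim, bias=False)
        self.W2p = nn.Linear(2 * dim, dim, bias=False)
        self.vp = nn.Linear(dim, 1, bias=False)

    def forward(self, he, d):                    # he: (M, dim) sentences, d: (dim,) state
        a = torch.softmax(
            self.vg(torch.tanh(self.W1g(he) + self.W2g(d))).squeeze(-1), dim=0)
        d_ctx = torch.cat([d, a @ he])           # [d_k, d'_k]
        u = self.vp(torch.tanh(self.W1p(he) + self.W2p(d_ctx))).squeeze(-1)
        return torch.softmax(u, dim=0)           # extraction probabilities p(A_k)
```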
### Compressor Agent
Given an extractive summary \(\mathbf{S}_{\mathbf{E}}\) consisting of a sequence of words \(\mathbf{wc}=\{\mathbf{wc}_{i}|i=1,...,N\}\), the compressor agent aims to produce a compressive summary \(\mathbf{S}_{\mathbf{C}}\) by selecting \(L_{C}\) words from \(\mathbf{S}_{\mathbf{E}}\). As illustrated in Figure 3, it has a multi-head attentional Bi-LSTM model to learn the word representations. It uses a pointer network to extract words based on their representations.
#### 3.2.1 Word Representation
Given a sequence of words \(\mathbf{wc}\), we encode the words into word embeddings \(\mathbf{xc}=\{\mathbf{xc}_{i}|i=1,...,N\}\) by \(\mathbf{xc}_{i}=Enc(\mathbf{wc}_{i})\). Then the sequence of word embeddings are fed into a Bi-LSTM to produce the words' output representation \(\mathbf{lc}^{w}\):
\[\mathbf{lc}_{i}^{w}=\overleftarrow{\text{LSTM}}(\mathbf{xc}_{i}),i\in[1,N]\,\,. \tag{9}\]
To utilise the multi-head attention mechanism to obtain \(\mathbf{ac}^{w}=\{\mathbf{ac}_{1}^{w},...,\mathbf{ac}_{N}^{w}\}\), we define \(Q=\mathbf{lc}^{w}\), \(K=V=\mathbf{xc}\),
\[\mathbf{ac}^{w}=\text{MultiHead}(Q,K,V). \tag{10}\]
Figure 3: Illustration of the compressor agent.
The concatenation of \(\mathbf{lc}^{w}\) and \(\mathbf{ac}^{w}\) of the words are fed into a Bi-LSTM to obtain the representation \(\mathbf{hc}_{i}^{w}\) for each word \(\mathbf{wc}_{i}\):
\[\mathbf{hc}_{i}^{w}=\overleftarrow{\text{LSTM}}([\mathbf{lc}_{i}^{w};\mathbf{ ac}_{i}^{w}]),i\in[1,N]\,. \tag{11}\]
#### 3.2.2 Word-Level Extraction
The word extractor of the compressor agent shares the same structure as that of the extractor agent's sentence extractor. To select the words based on the above word representations \(\mathbf{hc}^{w}=\{\mathbf{hc}_{1}^{w},...,\mathbf{hc}_{N}^{w}\}\), the word extractor decodes and extracts words recurrently to produce \(\{B_{1},...,B_{k},...,B_{L_{C}}\}\), where \(B_{k}\) denotes the word extracted at the \(k\)-th time step. The selected words are reordered by their locations in the input document and assembled to form the compressive summary \(\mathbf{S}_{\mathbf{C}}\).
### Reward in Reinforcement Learning
We use the compressive summary \(\mathbf{S}_{\mathbf{C}}\) to compute the reward of reinforcement learning and denote \(\text{Reward}(\mathbf{D},\mathbf{S}_{\mathbf{C}})\) as \(\text{Reward}(\mathbf{D},\mathbf{S})\) for simplicity. \(\text{Reward}(\mathbf{D},\mathbf{S})\) is a weighted sum of the semantic coverage reward \(\text{Reward}_{\text{cov}}(\mathbf{D},\mathbf{S})\) and the fluency reward \(\text{Reward}_{\text{flu}}(\mathbf{S})\):
\[\begin{split}\text{Reward}(\mathbf{D},\mathbf{S})=w_{\text{ cov}}&\text{Reward}_{\text{cov}}(\mathbf{D},\mathbf{S})\\ +w_{\text{flu}}&\text{Reward}_{\text{flu}}(\mathbf{ S})\,\end{split} \tag{12}\]
where \(w_{\text{cov}}\) and \(w_{\text{flu}}\) denote the weights of two rewards.
#### 3.3.1 Semantic Coverage Reward
We compute \(\text{Reward}_{\text{cov}}\) with the Wasserstein distance between the corresponding semantic distributions of the document \(\mathbf{D}\) and the summary \(\mathbf{S}\), which is the minimum cost required to transport the semantics from \(\mathbf{D}\) to \(\mathbf{S}\). We denote \(\mathbf{D}=\{d_{i}|i=1,...,N\}\) to represent a document, where \(d_{i}\) indicates the count of the \(i\)-th token (i.e., word or phrase in a vocabulary of size \(N\)). Similarly, for a summary \(\mathbf{S}=\{s_{j}|j=1,...,N\}\), \(s_{j}\) corresponds to the count of the \(j\)-th token. The semantic distribution of a document is characterized in terms of normalised term frequency without the stopwords. The term frequency of the \(i\)-th token in the document \(\mathbf{D}\) and the \(j\)-th token in the summary \(\mathbf{S}\) are denoted as \(\text{TF}_{\mathbf{D}}(i)\) and \(\text{TF}_{\mathbf{S}}(j)\), respectively. By defining \(\text{TF}_{\mathbf{D}}=\{\text{TF}_{\mathbf{D}}(i)\}\in\mathbf{R}^{N}\) and \(\text{TF}_{\mathbf{S}}=\{\text{TF}_{\mathbf{S}}(j)\}\in\mathbf{R}^{N}\), we have the semantic distributions within \(\mathbf{D}\) and \(\mathbf{S}\) respectively.
The transportation cost matrix \(\mathbf{C}\) is obtained by measuring the semantic similarity between each of the tokens. Given a pre-trained tokeniser and token embedding model with \(N\) tokens, define \(\mathbf{v}_{i}\) to represent the feature embedding of the \(i\)-th token. Then the transport cost \(c_{ij}\) from the \(i\)-th to the \(j\)-th token is computed based on the cosine similarity: \(c_{ij}=1-\frac{\mathbf{<v}_{i}\mathbf{v}_{j}>}{\|\mathbf{v}_{i}\|_{2}\| \mathbf{v}_{j}\|_{2}}\). An optimal transport plan \(\mathbf{T}^{*}=\{t_{i,j}^{*}\}\in\mathbf{R}^{N\times N}\) in pursuit of minimizing the transportation cost can be obtained by solving the optimal transportation and resources allocation optimization problem (Peyre et al., 2019). Note that the transport plan can be used to interpret the transportation of tokens from document to summary, which brings interpretability to our URLComSum method.
Wasserstein distance measuring the distance between the two semantic distributions \(\text{TF}_{\mathbf{D}}\) and \(\text{TF}_{\mathbf{S}}\) with the optimal transport plan is computed by: \(d_{W}(\text{TF}_{\mathbf{D}},\text{TF}_{\mathbf{S}}|\mathbf{C})=\sum_{i,j}t_{ ij}^{*}c_{ij}\). \(\text{Reward}_{\text{cov}}(\mathbf{D},\mathbf{S})\) can be further defined as:
\[\text{Reward}_{\text{cov}}(\mathbf{D},\mathbf{S})=1-d_{W}(\text{TF}_{\mathbf{D }},\text{TF}_{\mathbf{S}}|\mathbf{C}). \tag{13}\]
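A sketch of this reward is given below, using the POT library's exact OT solver; the BERT token-embedding lookup and the choice of solver are assumed ingredients rather than details fixed by the text.

```python
# Hedged sketch of the semantic coverage reward of Eq. (13).
import numpy as np
import ot  # POT: Python Optimal Transport

def coverage_reward(tf_doc, tf_sum, emb):
    """tf_doc, tf_sum: non-negative term-frequency vectors over the same
    vocabulary (stopwords removed); emb: (N, d) matrix of token embeddings."""
    a = tf_doc / tf_doc.sum()
    b = tf_sum / tf_sum.sum()
    e = emb / np.linalg.norm(emb, axis=1, keepdims=True)
    C = 1.0 - e @ e.T                     # cosine transport cost c_ij
    T = ot.emd(a, b, C)                   # optimal transport plan T*
    return 1.0 - float((T * C).sum())     # Reward_cov = 1 - d_W
```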
#### 3.3.2 Fluency Reward
We utilise Syntactic Log-Odds Ratio (SLOR) (Pauls and Klein, 2012) to measure \(\text{Reward}_{\text{flu}}(S)\), which is defined as: \(\text{Reward}_{\text{flu}}(S)=\frac{1}{|S|}(\text{log}(P_{LM}(S))-\text{log}(P_ {U}(S)))\), where \(P_{LM}(S)\) denotes the probability of the summary assigned by a pre-trained language model \(LM\), \(p_{U}(S)=\prod_{t\in S}P(t)\) denotes the unigram probability for rare word adjustment, and \(|S|\) denotes the sentence length.
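The fluency reward can be sketched as follows with a GPT-2 language model from Hugging Face transformers; the unigram model `log_p_unigram` (log-probabilities estimated from corpus counts) is an assumed external ingredient, and normalising by the token count is one way to instantiate \(|S|\).

```python
# Hedged sketch of the SLOR fluency reward.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tok = GPT2TokenizerFast.from_pretrained("gpt2")
lm = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def fluency_reward(summary, log_p_unigram):
    ids = tok(summary, return_tensors="pt").input_ids
    with torch.no_grad():
        # .loss is the mean negative log-likelihood over the predicted tokens.
        nll = lm(ids, labels=ids).loss.item() * (ids.size(1) - 1)
    log_p_lm = -nll
    log_p_uni = sum(log_p_unigram(t) for t in tok.tokenize(summary))
    return (log_p_lm - log_p_uni) / ids.size(1)
```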
We use the Self-Critical Sequence Training (SCST) method (Rennie et al., 2017), since this training algorithm has demonstrated promising results in text summarisation (Paulus et al., 2018; Laban et al., 2020). For a given input document, the model produces two separate output summaries: the sampled summary \(\mathbf{S}^{s}\), obtained by sampling the next pointer \(t_{i}\) from the probability distribution at each time step \(i\), and the baseline summary \(\hat{\mathbf{S}}\), obtained by always picking the most likely next pointer \(t\) at each \(i\). The training objective is to minimise the following loss:
\[\begin{split} Loss=-(\text{Reward}(\mathbf{D},\mathbf{S}^{s})- \text{Reward}(\mathbf{D},\hat{\mathbf{S}}))\\ \cdot\frac{1}{N}\sum_{i=1}^{N}\text{log}\,p(t_{i}^{s}|t_{1}^{s},...,t_{i-1}^{s},\mathbf{D})\,\end{split} \tag{14}\]
where \(N\) denotes the length of the pointer sequence, which is the number of extracted sentences for the extractor agent and the number of extracted words for the compressor agent.
Minimising the loss is equivalent to maximising the conditional likelihood of \(\mathbf{S}^{s}\) if the sampled summary \(\mathbf{S}^{s}\) outperforms the baseline summary \(\hat{\mathbf{S}}\), i.e. \(\text{Reward}(\mathbf{D},\mathbf{S}^{s})-\text{Reward}(\mathbf{D},\hat{ \mathbf{S}})>0\), thus increasing the expected reward of the model.
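The SCST objective of Eq. (14) reduces to a short function once the rewards and the log-probabilities of the sampled pointers are available, as in the sketch below.

```python
# Hedged sketch of the SCST loss of Eq. (14).
import torch

def scst_loss(reward_sampled, reward_baseline, log_probs):
    """log_probs: tensor of shape (N,) with log p(t_i^s | t_1^s, ..., t_{i-1}^s, D)
    collected while sampling; rewards are scalars from Eq. (12)."""
    advantage = reward_sampled - reward_baseline
    return -advantage * log_probs.mean()
```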
## 4 Experiments
### Experimental Settings
We conducted comprehensive experiments on three widely used datasets: _Newsroom_Grusky et al. (2018), _CNN/DailyMail (CNN/DM)_Hermann et al. (2015), and _XSum_Narayan et al. (2018). We set the LSTM hidden size to 150 and the number of recurrent layers to 3. We performed hyperparameter searching for \(w_{\text{cov}}\) and \(w_{\text{flu}}\) and decided to set \(w_{\text{cov}}=1\), \(w_{\text{flu}}=2\) in all our experiments since it provides more balanced results across the datasets. We trained URLComSum with AdamW Loshchilov and Hutter (2018) with a learning rate of 0.01 and a batch size of 3. We obtained the word embeddings from the pre-trained GloVe Pennington et al. (2014). We used BERT as the pre-trained embedding model for computing the semantic coverage reward, and GPT-2 as the pre-trained language model for computing the fluency reward due to its strong representation capacity.
As shown in Table 1, we followed Mendes et al. (2019) to set \(\mathbf{L_{E}}\) for Newsroom and Zhong et al. (2020) to set \(\mathbf{L_{E}}\) for CNN/DM and XSum. We also followed their protocols to set \(\mathbf{L_{C}}\) by matching the average number of words in summaries.
We compare our model with existing compressive methods which are all supervised, including _LATENTCOM_Zhang et al. (2018), _EXCONSUM_M Mendes et al. (2019), _JECS_Xu and Durrett (2019), _CUPS_Desai et al. (2020). Since our method is unsupervised, we also compare it with unsupervised extractive and abstractive methods, including _TextRank_Mihalcea and Tarau (2004), _PacSum_Zheng and Lapata (2019), _PMI_Padmakumar and He (2021), and _SumLoop_Laban et al. (2020). To better evaluate compressive methods, we followed a similar concept as LEAD baseline See et al. (2017) and created _LEAD-WORD_ baseline which
\begin{table}
\begin{tabular}{|l||c c c|} \hline
**Dataset** & **Newsroom** & **CNN/DM** & **XSum** \\ \hline \hline
**\#Sentences in Doc.** & 27 & 39 & 19 \\
**\#Tokens in Doc.** & 659 & 766 & 367 \\ \(\mathbf{L_{E}}\) & 2 & 3 & 2 \\ \(\mathbf{L_{C}}\) & 26 & 58 & 24 \\
**Train** & 995,041 & 287,113 & 204,045 \\
**Test** & 108,862 & 11,490 & 11,334 \\ \hline \end{tabular}
\end{table}
Table 1: Overview of the three datasets. #Sentences in Doc. and #Tokens in Doc. denote the average number of sentences and words in the documents respectively. \(\mathbf{L_{E}}\) denotes the number of sentences to be selected by the extractor agent. \(\mathbf{L_{C}}\) denotes the number of words to be selected by the compressor agent. Train and Test denote the size of train and test sets.
\begin{table}
\begin{tabular}{|l||c c c|} \hline
**Method** & ROUGE-1 & ROUGE-2 & ROUGE-L \\ \hline \hline LEAD & 40.0 & 17.5 & 32.9 \\ LEAD-WORD & 39.7 & 16.6 & 32.5 \\ \hline \hline
**Supervised Methods** & & & \\ \hline \hline LATENTCOM (Ext.) & 41.1 & 18.8 & 37.5 \\ LATENTCOM (Ext.+Com.) & 36.7 & 15.4 & 34.3 \\ JECS (Ext.) & 40.7 & 18.0 & 36.8 \\ JECS (Ext.+Com.) & 41.7 & 18.5 & 37.9 \\ EXCONSUM (Ext.) & 41.7 & 18.6 & 37.8 \\ EXCONSUM (Ext.+Com.) & 40.9 & 18.0 & 37.4 \\ CUPS (Ext.) & 43.7 & 20.6 & 40.0 \\ CUPS (Ext.+Com.) & 44.0 & 20.6 & 40.4 \\ \hline \hline
**Unsupervised Methods** & & & \\ \hline \hline SumLoop (Abs.) & 37.7 & 14.8 & **34.7** \\ TextRank (Ext.) & 34.1 & 12.8 & 22.5 \\ PacSum (Ext.) & **40.3** & **17.6** & 24.9 \\ PMI (Ext.) & 36.7 & 14.5 & 23.3 \\
**URLComSum (Ext.)** & 40.0 & 17.5 & 32.9 \\
**URLComSum (Ext.+Com.)** & 39.3 & 16.0 & 32.2 \\ \hline \end{tabular}
\end{table}
Table 3: Comparisons between our URLComSum and the state-of-the-art methods on the **CNN/DM** test set. (Ext.), (Abs.), and (Com.) denote the method is extractive, abstractive, and compressive respectively.
\begin{table}
\begin{tabular}{|l||c c c|} \hline
**Method** & ROUGE-1 & ROUGE-2 & ROUGE-L \\ \hline \hline LEAD & 33.9 & 23.2 & 30.7 \\ LEAD-WORD & 34.9 & 23.1 & 30.7 \\ \hline \hline
**Supervised Methods** & & & \\ \hline \hline EXCONSUMM (Ext.)* & 31.9 & 16.3 & 26.9 \\ EXCONSUMM (Ext.+Com.)* & 25.5 & 11.0 & 21.1 \\ \hline \hline
**Unsupervised Methods** & & & \\ \hline \hline SumLoop (Abs.) & 27.0 & 9.6 & 26.4 \\ TextRank (Ext.) & 24.5 & 10.1 & 20.1 \\
**URLComSum (Ext.)** & 33.9 & **23.2** & 30.0 \\
**URLComSum (Ext.+Com.)** & **34.6** & 22.9 & **30.5** \\ \hline \end{tabular}
\end{table}
Table 2: Comparisons on the **Newsroom** test set. The symbol * indicates that the model is not directly comparable to ours as it is based on a subset (the “Mixed” ) of the dataset.
extracts the first several words of a document as a summary. The commonly used ROUGE metric Lin (2004) is adopted.
### Experimental Results
The experimental results of URLComSum on different datasets are shown in Table 2, Table 3 and Table 4 in terms of ROUGE-1, ROUGE-2 and ROUGE-L F-scores. (Ext.), (Abs.), and (Com.) denote that the method is extractive, abstractive, and compressive, respectively. Note that on the three datasets, LEAD and LEAD-WORD baseline are considered strong baselines in the literature and sometimes perform better than the state-of-the-art supervised and unsupervised models. As also discussed in See et al. (2017); Padmakumar and He (2021), it could be due to the Inverted Pyramid writing structure Pottker (2003) of news articles, in which important information is often located at the beginning of an article and a paragraph.
Our URLComSum method significantly outperforms all the unsupervised and supervised ones on Newsroom. This demonstrates the effectiveness of our proposed method. Note that, unlike supervised EXCONSUMM, our reward strategy contributes to performance improvement when the compressor agent is utilised. For example, in terms of ROUGE-L, EXCONSUMM(Ext.+Com.) does not outperform EXCONSUMM(Ext.), while URLComSum(Ext.+Com.) outperforms URLComSum(Ext.). Similarly, our URLComSum method achieves the best performance among all the unsupervised methods on XSum, in terms of ROUGE-1 and -L. URLComSum underperforms in ROUGE-2, which may be due to the trade-off between informativeness and fluency. The improvement on Newsroom is greater than those on CNN/DM and XSum, which could be because the larger size of Newsroom is more helpful for training our model.
Our URLComSum method achieves comparable performance with other unsupervised methods on CNN/DM. Note that URLComSum does not explicitly take position information into account while some extractive methods take advantage of the lead bias of CNN/DM, such as PacSum and LEAD. Nevertheless, we observe that URLComSum(Ext.) achieves the same result as LEAD. Even though URLComSum is unsupervised, eventually the extractor agent learns to select the first few sentences of the documents, which follows the principle of the aforementioned Inverted Pyramid writing structure.
#### 4.2.1 Ablation Studies
**Effect of Compression.** We observed that the extractive and compressive methods usually obtain better results than the abstractive ones in terms of ROUGE scores on CNN/DM and Newsroom, and vice versa on XSum. It may be that CNN/DM and Newsroom contain summaries that are usually more extractive, whereas XSum's summaries are highly abstractive. We noticed that URLComSum(Ext.+Com.) generally achieves higher ROUGE-1 and -L scores than its extractive version on Newsroom. Meanwhile, on CNN/DM and XSum, the compressive version has slightly lower ROUGE scores than the extractive version. We observe similar behaviour in the literature on compressive summarisation, which may be because the sentences of news articles have dense information and compression does not help much to further condense the content.
**Effect of Transformer.** Note that we investigated the popular transformer model Vaswani et al. (2017) in our proposed framework to replace Bi-LSTM for learning the sentence and word representations. However, we noticed the transformer-based agents do not perform as well as the Bi-LSTM-based ones while training from scratch with the same training procedure. The difficulties of training a transformer model have also been discussed in Popel and Bojar (2018); Liu et al. (2020). Besides, the commonly used pre-trained transformer models, such as BERT Devlin et al. (2019) and BART Lewis et al. (2020), require high computational resources and usually use subword-based
\begin{table}
\begin{tabular}{|l||c c c|} \hline
**Method** & ROUGE-1 & ROUGE-2 & ROUGE-L \\ \hline \hline LEAD & 19.4 & 2.4 & 12.9 \\ LEAD-WORD & 18.3 & 1.9 & 12.8 \\ \hline \hline
**Supervised Methods** & & & \\ \hline \hline CUPS (Ext.) & 24.2 & 5.0 & 18.3 \\ CUPS (Ext.+Com.) & 26.0 & 5.4 & 19.9 \\ \hline \hline
**Unsupervised Methods** & & & \\ \hline \hline TextRank (Ext.) & 19.0 & 3.1 & 12.6 \\ PacSum (Ext.) & **19.4** & 2.7 & 12.4 \\ PMI (Ext.) & 19.1 & **3.2** & 12.5 \\
**URLComSum (Ext.)** & **19.4** & 2.4 & **12.9** \\
**URLComSum (Ext.+Com.)** & 18.0 & 1.8 & 12.7 \\ \hline \end{tabular}
\end{table}
Table 4: Comparisons on the **XSum** test set. URLComSum (Ext.) denotes the extractive summary produced by our extractor agent. URLComSum (Ext.+Com.) denotes the compressive summary produced further by our compressor agent.
tokenizers. They are not suitable for URLComSum since our compressor agent points to words instead of subwords. Therefore, at this stage Bi-LSTM is a simpler and more efficient choice. Nevertheless, the transformer is a module that can be included in our framework and is worth further investigation in the future.
**Comparison of Extraction, Abstraction and Compression Approaches.** We observed that the extractive and compressive approaches usually obtain better results than the abstractive one in terms of ROUGE scores on CNN/DM and Newsroom, and vice versa on XSum. It may be because CNN/DM and Newsroom contain summaries that are usually more extractive, whereas XSum's summaries are highly abstractive. Since the ROUGE metric reflects lexical matching only and overlooks the linguistic quality and factuality of the summary, it is difficult to conclude the superiority of one approach over the others solely based on the ROUGE scores. Automatic linguistic quality and factuality metrics would be essential to provide further insights and more meaningful comparisons.
### Qualitative Analysis
In Figure 5, 6, 7 in Appendix A, summaries produced by URLComSum are shown together with the reference summaries of the sample documents in the CNN/DM, XSum, and Newsroom datasets. This demonstrates that our proposed URLComSum method is able to identify salient sentences and words and produce reasonably fluent summaries even without supervision information.
### Interpretable Visualisation of Semantic Coverage
URLComSum is able to provide an interpretable visualisation of the semantic coverage of the summarisation results through the transportation matrix. Figure 4 illustrates the transport plan heatmap associated with a resulting summary. The heatmap indicates the transportation of semantic content between tokens in the document and tokens in its resulting summary. The higher the intensity, the more the semantic content of a particular document token is covered by a summary token. The red line highlights the transportation from the document to the summary of the semantic content of the token "country", which appears in both the document and the summary. The purple line highlights how the semantic content of the token "debt", which appears in the document only but not in the summary, is transported to the tokens "bankruptcy" and "loans", which are semantically closer and have a lower transport cost, and thus achieve a minimum transportation cost in the OT plan.
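As a rough illustration of how such a transport-plan heatmap can be produced, the sketch below builds a cosine cost matrix between (placeholder) document and summary token embeddings, solves the exact optimal transport problem with the POT library, and plots the plan; the learned representations and reward formulation of URLComSum are not reproduced here.

```python
# A hedged sketch of producing a transport-plan heatmap like Figure 4:
# token embeddings, cosine cost matrix, and an exact OT plan via the POT
# library (pip install pot). Embeddings here are random stand-ins; the
# actual model uses learned representations.
import numpy as np
import ot                      # Python Optimal Transport
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
doc_tokens = ["the", "country", "faces", "debt", "crisis"]
sum_tokens = ["country", "nears", "bankruptcy", "over", "loans"]
E_doc = rng.normal(size=(len(doc_tokens), 64))   # placeholder embeddings
E_sum = rng.normal(size=(len(sum_tokens), 64))

# Cost = 1 - cosine similarity between document and summary tokens.
E_doc /= np.linalg.norm(E_doc, axis=1, keepdims=True)
E_sum /= np.linalg.norm(E_sum, axis=1, keepdims=True)
C = 1.0 - E_doc @ E_sum.T

# Uniform token masses; ot.emd solves the exact transportation problem.
a = np.full(len(doc_tokens), 1.0 / len(doc_tokens))
b = np.full(len(sum_tokens), 1.0 / len(sum_tokens))
P = ot.emd(a, b, C)            # transport plan, shape (|doc|, |summary|)

plt.imshow(P, cmap="viridis")
plt.xticks(range(len(sum_tokens)), sum_tokens, rotation=45)
plt.yticks(range(len(doc_tokens)), doc_tokens)
plt.colorbar(label="transported mass")
plt.tight_layout()
plt.savefig("ot_plan_heatmap.png")
```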
## 5 Conclusion
In this paper, we have presented URLComSum, the first unsupervised and efficient method for compressive text summarisation. Our model consists of dual agents: an extractor agent and a compressor agent. The extractor agent first chooses salient sentences from a document, and the compressor agent further selects salient words from these extracted sentences to form a summary. To achieve unsupervised training of the extractor and compressor agents, we devise a reinforcement learning strategy to simulate human judgement on summary quality and optimize the summary's semantic coverage and fluency reward. Comprehensive experiments on three widely used benchmark datasets demonstrate the effectiveness of our proposed URLComSum and the great potential of unsupervised compressive summarisation. Our method provides interpretability of semantic coverage of summarisation results.
Figure 4: Interpretable visualisation of the OT plan from a source document to a resulting summary on the CNN/DM dataset. The higher the intensity, the more the semantic content of a particular document token is covered by a summary token. The red line highlights the transportation from the document to the summary of the semantic content of the token “country”, which appears in both the document and the summary. The purple line highlights how the semantic content of the token “debt”, which appears in the document only but not in the summary, is transported to the tokens “bankruptcy” and “loans”, which are semantically closer and have a lower transport cost, and thus achieve a minimum transportation cost in the OT plan.
|
2308.15903
|
The destiny of Dark Matter
|
The majority of baryons, which account for $15\%$ of the matter in the
Universe, will end their lives as carbon and oxygen inside cold black dwarfs.
Dark matter (DM) makes up the remaining $85\%$ of the matter in the universe,
however, the fate of DM is unknown. Here we show that the destiny of purely
gravitationally interacting DM particles follows one of two possible routes.
The first possible route, the "radiation-destiny" scenario, is that massive DM
particles lose sufficient energy through gravitational radiation causing them
to spiral into a supermassive black hole that ultimately disappears through
Hawking radiation. The second possible route, the "drifting-alone" destiny,
applies to lighter DM particles, where only the central DM halo region spirals
into the central BH which is then Hawking radiated away. The rest of the DM
halo is ripped apart by the accelerated expansion of the Universe.
|
Fabiano Tracanna, Steen H. Hansen
|
2023-08-30T09:10:43Z
|
http://arxiv.org/abs/2308.15903v1
|
# The destiny of Dark Matter
###### Abstract
The majority of baryons, which account for 15% of the matter in the Universe, will end their lives as carbon and oxygen inside cold black dwarfs. Dark matter (DM) makes up the remaining 85% of the matter in the universe, however, the fate of DM is unknown. Here we show that the destiny of purely gravitationally interacting DM particles follows one of two possible routes. The first possible route, the "radiation-destiny" scenario, is that massive DM particles lose sufficient energy through gravitational radiation causing them to spiral into a supermassive black hole that ultimately disappears through Hawking radiation. The second possible route, the "drifting-alone" destiny, applies to lighter DM particles, where only the central DM halo region spirals into the central BH which is then Hawking radiated away. The rest of the DM halo is ripped apart by the accelerated expansion of the Universe.
dark matter, cosmology, gravitational waves, black holes
## 1 Introduction
In approximately 5 billion years, our Sun will evolve into a red giant, expanding its radius by several hundred times and engulfing the innermost planets of the solar system, likely including the Earth, which will become a scorched and lifeless desert. During the same period, the Milky Way, our galaxy, will collide with the Andromeda galaxy, forming a resulting galaxy named Milkomeda. This new galaxy will continue as a large elliptical galaxy, with its central black holes merging into a supermassive BH (Schiavi et al., 2020). Due to the accelerated expansion of the Universe, only a few more galaxies will collide with Milkomeda, and after several tens of billions of years, all solar-mass stars will fade away, and after trillions of years, all low-mass stars will have exhausted their fuel (Adams and Laughlin, 1997).
The Universe comprises not only stars and gas but also large amounts of dark matter (DM). This has been observed in galaxies and clusters from the early 1930s (Lundmark, 1930; Oort, 1932; Zwicky, 1937), thoroughly established with galactic rotation curves in the 1970s (Rubin et al., 1980), and confirmed on the scales of the full universe through the cosmic microwave background observations (Planck Collaboration et al., 2016).
DM particles have a negligible collisional cross-section (Markevitch et al., 2004), which implies that they orbit the galaxy solely under the influence of gravity. The ultimate fate of DM particles depends on their annihilation rate or whether they decay. In some of the most popular particle scenarios, DM can annihilate when two DM particles come close enough to each other (Bertone
et al., 2005). These models establish a strong correlation between the DM annihilation cross-section and the DM abundance. A typical annihilation cross-section for such particles is of the order of \(\sigma\sim 10^{-32}\left(\frac{m}{1\,\mathrm{TeV}}\right)^{2}\mathrm{cm}^{2}\), where the mass of 1 TeV is often considered for thermally produced DM particles (Bertone et al., 2005). This means that a significant fraction of DM in a typical galaxy with a mass of \(10^{12}M_{\odot}\) will annihilate within \(10^{17}\) years.
During DM annihilation, the resulting products may include photons and other high-speed particles that escape the galaxy. As more DM particles annihilate, the central part of the galaxy vanishes. This causes a reduction in the gravitational attraction, allowing DM particle velocities to exceed the escape velocity when \(\sim 10\%\) of the DM particles have annihilated (Binney & Tremaine, 2008) (see Appendix A). As a result, if DM particles are thermally produced, and hence typically have annihilation cross sections of the order of the weak interaction scale, then the DM in the galaxy disperses into the emptiness of the expanding space. The ultimate fate of these dispersed DM particles depends on their specific particle properties, including whether they decay into lighter particles or not. Another popular class of particle candidates for the DM consists of decaying DM particles. For instance, for the sterile neutrino (Dodelson & Widrow, 1994) the decay time into active neutrinos or photons is approximately \(\tau_{\mathrm{decay}}\sim 10^{19}\,\left(\frac{10\mathrm{keV}}{m}\right)^{3}\) years. As the particles decay away, the resulting, less tightly bound galaxy will be ripped apart by the accelerated expansion of space, and the dispersed DM particles will decay. We will not consider either thermally created DM particles or sterile neutrinos further.
In contrast, the destiny of the more general class of DM particles that only interact through gravity is generally unknown, and we will here show how it depends sensitively on their particle mass.
There is a long history of production of particles with no non-gravitational interactions (Parker, 1969; Grib & Mamaev, 1969; Parker, 1971; Mamaev et al., 1976; Grib et al., 1976), and this production may even lead to abundances relevant for it being the DM (Dolgov & Kirilova, 1990; Traschen & Brandenberger, 1990; Dolgov & Hansen, 1999; Kuzmin & Tkachev, 1999; Bassett & Liberati, 1998; Chung et al., 1998, 2001). The DM can be created for instance by allowing it to have a coupling to a scalar inflaton field \(\phi\), such as \(\mathcal{L}\sim\phi^{2}\chi^{2}\), where the DM is a scalar \(\chi\). The inflaton field oscillates towards the end of inflation, and the DM is produced due to the nonadiabatic expansion of spacetime during the transition to the matter or radiation dominated phase. An important restriction is that the DM must have properties to prevent subsequent thermalization, which is often achieved by considering very massive DM particles with no other coupling to standard model particles. Alternatively, the coupling of DM can be directly to gravity (and nothing else), through a conformal coupling like \(\mathcal{L}\sim\xi R\chi^{2}\), where \(R\) is the Ricci scalar curvature. The fundamental cause of particle production is that the expanding Universe breaks time-translation symmetry, which leads to non-conservation of energy in the quantum particles (Ford, 2021). For recent lists of references, see for instance (Ford, 2021; Lebedev, 2021; Kolb et al., 2023). This wide range of production mechanisms have one thing in common, namely that the DM particles today will appear essentially sterile and non-interacting, except through gravity. Thus, for the rest of this paper, we are only considering the general class of DM particles that today only interact through gravity.
Massive objects circling each other have been predicted to emit gravitational waves (GW) (Einstein, 1918). This effect was measured indirectly by the frequency change of pulsars (Weisberg et al., 1981), and more recently 100 years of search culminated with the direct observation of GW (Abbott et al., 2016).
The purely gravitationally interacting DM particles emit gravitational waves as they orbit the galaxy, and the power emitted by a single particle can be expressed as \(P_{m}(r)=\frac{32G}{5c^{5}}\Omega^{6}m^{2}r^{4}\), where \(G\) is the gravitational constant, \(c\) is the speed of light, \(\Omega\) is the angular velocity of the particle, and \(r\) is the radius of its orbit (see appendix B). Notably, the power emitted depends on the square of the particle's mass. Thus, if we consider a wide range of DM candidates with masses ranging from that of the axion particle (\(m\sim 10^{-38}g\)) to a 100 solar-mass DM candidate (\(m\sim 10^{35}g\)), there will be a difference of a factor of \(10^{146}\) in the emitted power. The angular velocity of the DM particle depends strongly on the potential existence of a supermassive central black hole, since the typical circular velocity can be expressed as \(v_{\rm circ}(r)=\sqrt{GM(r)/r}\). Therefore, a large central object will allow the DM to emit more gravitational radiation and transit to a smaller orbit at a faster rate. A central BH is not a permanent fixture, as it is subject to Hawking radiation due to quantum effects (Hawking, 1975). The timescale for a black hole to radiate away due to this effect is on the order of \(\tau_{\rm Hawking}\sim 10^{-19}\left(\frac{M_{\rm BH}}{g}\right)^{3}\) years. Thus, a BH with a mass of \(10^{6}M_{\odot}\) will completely evaporate in approximately \(10^{85}\) years.
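The orders of magnitude quoted above can be checked with a short script. The sketch below evaluates the single-particle radiated power for a circular orbit and the standard Hawking evaporation time \(t_{\rm ev}=5120\pi G^{2}M^{3}/(\hbar c^{4})\), which reproduces the \(\sim 10^{85}\) yr figure quoted for a \(10^{6}M_{\odot}\) black hole; the particle mass and orbital radius used are illustrative assumptions.

```python
# A rough numerical check of the two timescales quoted in the text, under
# simple assumptions: one DM particle on a circular orbit around a central
# mass, and the standard Hawking evaporation time (no greybody factors).
# All parameter choices are illustrative only.
import numpy as np

G = 6.674e-11          # m^3 kg^-1 s^-2
c = 2.998e8            # m/s
hbar = 1.055e-34       # J s
Msun = 1.989e30        # kg
yr = 3.156e7           # s

def gw_power(m, M_central, r):
    """P = (32 G / 5 c^5) * Omega^6 * m^2 * r^4 for a circular orbit."""
    Omega = np.sqrt(G * M_central / r**3)
    return 32.0 * G / (5.0 * c**5) * Omega**6 * m**2 * r**4

def hawking_time(M_bh):
    """Evaporation time of a Schwarzschild black hole: 5120*pi*G^2*M^3/(hbar*c^4)."""
    return 5120.0 * np.pi * G**2 * M_bh**3 / (hbar * c**4)

m_dm = 100e12 * 1.783e-36                        # 100 TeV expressed in kg
P = gw_power(m_dm, 1e6 * Msun, 3.086e16)         # orbit radius ~ 1 pc (assumed)
print(f"GW power of one 100 TeV particle at 1 pc: {P:.2e} W")
print(f"Hawking time of a 1e6 Msun BH: {hawking_time(1e6 * Msun)/yr:.2e} yr")
```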
## 2 The Two Possible Destinies
For a given DM particle mass, \(m\), we can calculate the timescale for a fraction of the galaxy to inspiral due to energy loss from gravitational radiation. For instance, we can ask how long it would take for the innermost \(10^{-9}\) of the galaxy's mass to move on sufficiently small orbits that the
Figure 1: The two main evolutionary tracks of dark matter haloes: for very massive DM candidates the large amount of gravitational radiation emitted leads to a quick inspiral onto the central BH. Subsequently this BH evaporates, and all DM thereby disappear in radiation. Alternatively, for light DM candidates, the central BH evaporates before a significant fraction of the DM has collapsed onto the central BH, and subsequently the remaining part of the DM halo will be dispersed into the expanding Universe.
DM particles will get absorbed by the central BH. For a \(10^{12}M_{\odot}\) galaxy with a density profile in reasonable agreement with observations and numerical simulations (Hernquist, 1990) (see Appendix B2), and with an initial central BH of \(10^{6}M_{\odot}\), this timescale is approximately \(10^{80}\) years for a DM particle with mass \(m=100\mathrm{TeV}\). This is a shorter time than the Hawking radiation timescale of \(10^{85}\)yrs discussed above. For a DM candidate with mass \(m=10^{-5}\)eV, the corresponding timescale is on the order of \(10^{118}\) years. Thus, the destiny of DM particles in a \(10^{12}M_{\odot}\) galaxy with a central BH of \(10^{6}M_{\odot}\) is fundamentally different for different DM particle masses. The most massive DM candidates will lose enough energy through gravitational radiation to be entirely absorbed by the central BH in a sufficiently short time, and their fate is to end as Hawking radiation on timescales of the order of \(10^{103}\) years. This is shown as "radiation-destiny" in Figure 1. On the other hand, lighter DM candidates will lose energy through gravitational radiation slowly enough that the central BH will evaporate. Subsequently, the remaining galaxy will be dispersed into the vast empty space. In this case, shown as "drifting-alone destiny" in Figure 1, a significant fraction of the individual DM particles will survive.
## 3 Three Parameters \(M_{\mathrm{GAL}}\), \(M_{\mathrm{BH}}\) and \(M_{\mathrm{DM}}\)
As DM particles are treated as point-like, they only interact through 2-body gravitational interactions that are long-range. The corresponding relaxation process can affect their energy distribution, which may result in the ejection of DM particles if their velocities exceed the local escape velocity (Spitzer, 1940). However, there is a counteracting effect of dynamical friction (Chandrasekhar, 1943), which reduces the velocity of the fastest particles. Since relaxation is a stochastic process and dynamical friction provides a systematic deceleration, the resulting energy distribution of DM particles will not contain particles that will evaporate from the cosmological structure (see appendix C for more details).
Galaxies can be distinguished based on the available gas and stellar matter, the total mass of the dark matter halo, and the mass of individual dark matter particles. Additionally, the initial mass of the central object can vary from a single stellar mass to the mass of the Milky Way's black hole, which is approximately \(10^{6}M_{\odot}\), or to supermassive black holes with masses exceeding several \(10^{9}M_{\odot}\). We have today observed a free-floating BH of mass \(\sim 7M_{\odot}\)(Sahu et al., 2022) and EHT took the first image of the black hole at the center of galaxy Messier 87 (Event Horizon Telescope Collaboration et al., 2019). The Milky Way has a BH of mass \(4\cdot 10^{6}M_{\odot}\)(Ghez et al., 1998; Schodel et al., 2002), Andromeda has a BH of mass \(\sim 1.4\cdot 10^{8}M_{\odot}\)(Al-Baidhany et al., 2020), and it is believed that most massive galaxies host a supermassive BH near its center (Kormendy & Ho, 2013).
The gas and stellar matter will either be ejected from the galaxy or absorbed by the central black hole on much shorter timescales than those of the dark matter, and to avoid the details of this complication here, we simply allow these options to be covered by letting the seed BH mass range from a stellar mass to the entire mass of the cosmological structure (see Appendix C).1 Consequently, we can reduce the number of important parameters to three: the mass of the seed black hole (which
may include all the mass of present-day stars and gas), the total mass of the cosmological structure, and the mass of the individual dark matter particle.
In Figure 2, we present the fate calculation for a wide range of possible parameters. The general conclusion is that dark matter survives, i.e., it does not get absorbed by the central black hole, for small dark matter particle masses. At the second order, smaller seed black hole masses allow for more dark matter to survive.
Figure 2: The figure illustrates the fate of dark matter for a wide range of galaxy parameters. The blue-colored surface and the region under it represent the parameter space in which the central black hole evaporates rapidly enough that at least half of the dark matter particles in the cosmological structure end up dispersed in the Universe. Conversely, the non-colored region shows parameters for which more than half of the structure ends up being engulfed by the central black hole, which subsequently evaporates through Hawking radiation. The calculation spans over 70 orders of magnitude in the dark matter particle mass, \(m_{\rm DM}\), and galaxy masses ranging from dwarf galaxies of \(10^{6}M_{\odot}\) to galaxy clusters of \(10^{15}M_{\odot}\). It also allows for the possibility of the central seed black hole having a range of masses, from a single star to all the stars of the structure. The figure is cut short at \(\log\left(m_{\rm DM}/g\right)\sim 0\) to better visualize the surface, as heavier dark matter particles lie out and above the blue-colored surface for any galaxy and seed black hole mass shown.
## 4 Conclusion
The properties of DM particles are mostly unknown, and they may potentially decay or undergo annihilation. This paper examines the scenario where DM particles solely interact gravitationally over significantly longer timescales compared to the current age of the Universe.
We have demonstrated that the fate of DM is highly dependent on the mass of the DM particles. Extremely massive DM particles will promptly emit gravitational waves, leading to their gradual spiral towards the central black hole of the galaxies. Subsequently, the black hole will emit Hawking radiation, causing the DM particles to ultimately disappear as radiation.
In the case of lighter DM particles, the emission of gravitational radiation occurs at a significantly slower rate. As a result, only a minor portion of the DM particles will be absorbed by the central black hole. Once the central black hole ceases to exist, the potential of the galaxy is slightly reduced, and the remaining DM particles within these cosmological structures will gradually evaporate. Consequently, these DM particles will follow the "drifting-alone" destiny.
## Appendix A Evaporation of DM particles when central region has annihilated away
As discussed in the introduction, some DM particle candidates have annihilation cross sections of the order of the weak interaction scale. Such particles most often annihilate when two of them get close to each other.
When DM particles have a non-zero annihilation cross section, then the central part of the halo will first disappear since the annihilation rate is proportional to \(\rho^{2}\). In this case the potential of the structure is reduced, and hence the high-energy tail of the DM distribution function may evaporate from the halo. The corresponding calculation is as follows.
Consider a halo in equilibrium, with a density profile given by \(\rho(r)\). One can integrate the Jeans equation to show that the radial velocity dispersion is given by
\[\sigma_{r}^{2}(r)=\frac{1}{\rho(r)}\int_{r}^{\infty}\frac{\rho(r^{\prime})GM(r ^{\prime})}{r^{\prime 2}}\,dr^{\prime}\] (A1)
From numerical simulations it is known that the velocity distribution function does not have an exponential tail, but instead has a rapid decline which goes to zero around \(v=2\sigma_{\rm tot}\)(Hansen et al., 2006). Thus, if the escape velocity
\[v_{\rm esc}(r)=\sqrt{-2\Phi(r)}\] (A2)
where \(\Phi(r)\) is the potential of the structure, becomes smaller than approximately 2 times the total velocity dispersion, then the high-energy particles will escape. Assuming that the velocity anisotropy is zero, one has \(\sigma_{\rm tot}^{2}=3\sigma_{r}^{2}\), and we find that if 10% of the central mass (in a Hernquist structure) is removed, then the potential is reduced by slightly more than 10% at all radii. This will reduce the potential even further, leading to a run-away process where all the DM particles will evaporate. This is shown in figure 3.
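A minimal numerical illustration of this criterion, assuming a Hernquist halo in units with \(G=M=a=1\): the Jeans integral (A1) is evaluated by quadrature and compared with the escape velocity (A2), showing that \(v_{\rm esc}/\sigma_{\rm tot}\) sits close to the threshold value of 2 over a range of radii.

```python
# A small numerical sketch of the evaporation criterion in this appendix:
# the radial dispersion from Eq. (A1) for a Hernquist halo is compared with
# the escape velocity from Eq. (A2). Units are chosen so that G = M = a = 1;
# this is an illustrative check, not the full run-away calculation.
import numpy as np
from scipy.integrate import quad

def rho(r):                 # Hernquist density (Eq. B27), G = M = a = 1
    return 1.0 / (2.0 * np.pi) / (r * (r + 1.0) ** 3)

def mass(r):                # cumulative mass (Eq. B28)
    return r**2 / (r + 1.0) ** 2

def sigma_r2(r):            # Jeans integral, Eq. (A1)
    integrand = lambda s: rho(s) * mass(s) / s**2
    val, _ = quad(integrand, r, np.inf)
    return val / rho(r)

for r in [0.1, 0.5, 1.0, 2.0, 5.0]:
    v_esc = np.sqrt(2.0 / (r + 1.0))        # sqrt(-2*phi), Eqs. (A2) and (B29)
    sigma_tot = np.sqrt(3.0 * sigma_r2(r))  # isotropic velocities
    print(f"r={r:4.1f}  v_esc/sigma_tot = {v_esc / sigma_tot:.2f}")
```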
Another popular particle candidate for DM is a decaying DM particle. For instance, in the case of sterile neutrinos (Dodelson & Widrow, 1994), the decay time into active neutrinos or photons is
approximately \(\tau_{\rm decay}\sim 10^{19}\,\left(\frac{10{\rm keV}}{m}\right)^{3}\) years. As these particles decay, the resulting galaxy becomes less tightly bound, some of the particles now have velocities exceeding the escape velocity of the galaxy, and eventually the remaining part of the galaxy is ripped apart by the accelerated expansion of space. The dispersed DM particles will also decay.
## Appendix B Linearized Gravity
To describe the GW emission we work under the assumptions of linearized gravity. This amounts to considering small GW amplitudes, large distances from the source and short wavelength GWs.
Linearized gravity implies assuming that gravitational waves (GWs) are a small perturbation to the Minkowski metric \(\eta_{\alpha\beta}\equiv{\rm diag}(-1,1,1,1)\). The GWs will therefore be described by a metric perturbation \(h_{\alpha\beta}\) such that the metric solving the Einstein equation can be written as
\[g_{\alpha\beta}=\eta_{\alpha\beta}+h_{\alpha\beta},\] (B3)
with \(h_{\alpha\beta}\ll 1\) for every \(\alpha,\beta\).
In linearized gravity, the Einstein equation assumes the form
\[\Box\bar{h}_{\alpha\beta}=-16\pi T_{\alpha\beta}\] (B4)
Figure 3: Velocities as a function of radius in a typical DM halo, with scaled distances and velocities. The total velocity dispersion is given by \(\sigma\) (blue line) the circular velocity is \(v_{circ}\) (orange line). The uppermost curve is the escape velocity (green line) for the full structure, and the reduced escape velocity (second from top, red line) is calculated when removing the central total mass. When \(v_{esc}/\sigma\approx 2\) at a given radius, then the highest energy particles will escape, and through a run-away process the entire structure will disperse.
(Hartle, 2003), in geometrized units (\(c=G=1\), mass measured in length). \(T_{\alpha\beta}\) is the stress-energy tensor and the _trace-reversed_ amplitude \(\bar{h}_{\alpha\beta}\) is defined by
\[\bar{h}_{\alpha\beta}\equiv h_{\alpha\beta}-\frac{1}{2}\eta_{\alpha\beta}h\,,\] (B5)
with \(h\) being the trace of the metric perturbation (i.e. \(h\equiv h_{\gamma}^{\gamma}\)), and \(\eta_{\alpha\beta}\) being the Minkowski metric. The d'Alembert operator \(\Box\) is defined as
\[\Box\equiv\frac{\partial}{\partial x_{\nu}}\frac{\partial}{\partial x^{\nu}}= -\frac{\partial^{2}}{\partial t^{2}}+\nabla^{2},\]
following the convention \((-+++)\) for the metric signature. Imposing gauge conditions allows us to close the system and solve the equation uniquely. We choose the Lorenz gauge, which can be conveniently expressed in terms of \(\bar{h}_{\alpha\beta}\) as
\[\frac{\partial\bar{h}^{\alpha\beta}}{\partial x^{\beta}}=0\,.\] (B6)
It can be shown (Hartle, 2003) that the spatial components of the trace-reversed GW amplitude can be written as
\[\bar{h}^{ij}(t,\vec{x})\longrightarrow\frac{2}{r}\ddot{I}^{ij}(t-r),\] (B7)
where the second mass moment \(I^{ij}\), here evaluated at the retarded time \(t-r/c=t-r\), is defined as
\[I^{ij}\equiv\int x^{\prime i}x^{\prime j}\rho(t,\vec{x})d^{3}x^{\prime}.\] (B8)
The energy flux (energy per unit time per unit area) \(f_{GW}\) of a linearized, plane GW is proportional to the square of the amplitude of the GW, let us call it \(a\), times the square of its frequency \(\omega\)(Hartle, 2003):
\[f_{GW}=\frac{\omega^{2}a^{2}}{32\pi}.\] (B9)
Since we are looking at the GW far away from the source, and the amplitude in Eq. B7 describes a spherical wave, the plane wave approximation is legitimate. The frequency dependence in Eq. B9, together with Eq. B7, suggests a dependence of \(f_{GW}\) which is quadratic in the third time derivative of \(I^{ij}\). We can write:
\[f_{GW}\propto\frac{1}{r^{2}}\Big[\xi\Big(\dddot{I}^{ij}\Big)\Big]^{2}.\] (B10)
The correct function \(\xi\) can be found by noticing that there is no radiation from a spherically symmetric mass distribution. The quadrupole moment tensor, which can be expressed in terms of the second mass moment as
\[Q^{ij}=3I^{ij}-\delta^{ij}I^{k}_{k},\] (B11)
satisfies this requirement. The total power radiated can be found by integrating \(f_{GW}\) over a surface encompassing the mass distribution, say a sphere, in the limit \(r\longrightarrow\infty\), i.e.
\[P_{GW}=\lim_{r\rightarrow\infty}\oint f_{GW}\,r^{2}\,d\Omega\propto\dddot{Q}_{ij}\dddot{Q}^{ij}\] (B12)
Including units we can finally express the total power radiated by gravitational waves in the quadrupole approximation as
\[P_{GW}=\frac{G}{45c^{5}}\left\langle\dddot{Q}_{ij}\,\dddot{Q}^{ij}\right\rangle,\] (B13)
with \(\left\langle\,\cdot\,\right\rangle\) denoting a time average over a period (Hartle, 2003).
### Test mass in central gravitational field
A DM mass \(m\) is orbiting a central mass \(M\), with \(M\gg m\), in an elliptical orbit with such a low eccentricity that we can assume the orbit to be circular. Let \(R\) be the initial radius of the orbit, \(\Omega\) the orbital frequency. By placing the origin of our Cartesian coordinate system to coincide with the position of the central object and choosing the orbit to lie in the \(xy\) plane, we can describe the trajectory of the test mass as:
\[x(t) = R\cos(\Omega t)\] (B14)
\[y(t) = R\sin(\Omega t)\] (B15)
\[z(t) = 0\] (B16)
The mass density of the system can be written as
\[\rho(\vec{x})=M\delta(\vec{x})+m\delta(\vec{x}-\vec{r}),\] (B17)
and the components of the second mass moment can then be written
\[I^{xx} = mR^{2}\cos^{2}(\Omega t)=\frac{1}{2}mR^{2}[1+\cos(2\Omega t)]\] (B18)
\[I^{xy} = mR^{2}\cos(\Omega t)\sin(\Omega t)=\frac{1}{2}mR^{2}\sin(2\Omega t)\] (B19)
\[I^{yy} = mR^{2}\sin^{2}(\Omega t)=\frac{1}{2}mR^{2}[1-\cos(2\Omega t)]\] (B20)
\[I^{xz} = I^{yz}=I^{zz}=0.\] (B21)
The remaining components are determined by the fact that the second mass moment is by definition a symmetric tensor, i.e., \(I^{ij}=I^{ji}\). The third time derivative of each non-zero component is easily calculated to be
\[\dddot{I}^{xx} = -4\Omega^{3}mR^{2}\sin(2\Omega t)\] (B22)
\[\dddot{I}^{xy} = 4\Omega^{3}mR^{2}\cos(2\Omega t)\] (B23)
\[\dddot{I}^{yy} = 4\Omega^{3}mR^{2}\sin(2\Omega t)=-\dddot{I}^{xx}.\] (B24)
Recalling the definition of the quadrupole moment (B11) and noticing that \(\dddot{I}^{k}_{\ k}=0\), since \(I^{k}_{k}=mR^{2}\) is independent of time,
\[\dddot{Q}_{ij}\dddot{Q}^{ij}=144\Omega^{6}m^{2}R^{4}[\sin^{2}(2\Omega t)+2\cos^{2}(2\Omega t)+\sin^{2}(2\Omega t)]=288\Omega^{6}m^{2}R^{4}.\] (B25)
The power radiated B13 is thus
\[P_{GW}=\frac{32G}{5c^{5}}\Omega^{6}m^{2}R^{4}.\] (B26)
This is in agreement with the results of (Weinberg, 1972).
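The algebra leading to Eq. (B26) can also be cross-checked symbolically; the sketch below differentiates the second mass moment three times, forms the quadrupole tensor of Eq. (B11), time-averages, and recovers the prefactor \(32/5\). It assumes the quadrupole formula (B13) as given.

```python
# A symbolic cross-check of Eq. (B26), assuming the quadrupole formula
# (B13) with Q_ij = 3 I_ij - delta_ij I^k_k: sympy differentiates the
# second mass moment three times, time-averages, and recovers 32/5.
import sympy as sp

t, Omega, m, R, G, c = sp.symbols("t Omega m R G c", positive=True)

x = R * sp.cos(Omega * t)
y = R * sp.sin(Omega * t)
coords = [x, y, 0]

# Second mass moment I^ij of the orbiting point mass (Eq. B8).
I = sp.Matrix(3, 3, lambda i, j: m * coords[i] * coords[j])
Q = 3 * I - sp.eye(3) * I.trace()                  # Eq. (B11)
Q3 = Q.applyfunc(lambda q: sp.diff(q, t, 3))       # third time derivative

contraction = sum(Q3[i, j] ** 2 for i in range(3) for j in range(3))
avg = sp.integrate(contraction, (t, 0, 2 * sp.pi / Omega)) / (2 * sp.pi / Omega)

P = sp.simplify(G / (45 * c**5) * avg)
print(P)          # expected: 32*G*Omega**6*m**2*R**4/(5*c**5)
```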
### The mass profile of galaxies
We assume the DM mass distribution to be spherically symmetric, with a density \(\rho(r)\) described by the Hernquist profile (Hernquist, 1990)
\[\rho(r)=\frac{M}{2\pi}\frac{a}{r}\frac{1}{(r+a)^{3}},\] (B27)
where \(M\) is the total mass and \(a\) a linear scale of the object. This profile approximates well the mass distribution of galactic bulges and elliptical galaxies, but also the DM distribution in haloes. We opt for this profile instead of the NFW profile since the Hernquist mass is finite without the need for a truncation at large radii. The cumulative mass profile, and the corresponding potential pertaining to the density profile B27 are, respectively (Hernquist, 1990),
\[M(r) = M\frac{r^{2}}{(r+a)^{2}},\] (B28)
\[\phi(r) = -\frac{GM}{r+a}.\] (B29)
The velocity dispersion \(\sigma_{v}^{2}\) is obtained by solving the 1D Jeans equation for a non-rotating, spherical system. Its radial component, \(\sigma_{v_{r}}^{2}\), is given by (Hernquist, 1990)
\[\sigma_{v_{r}}^{2} = \frac{GM}{12a}\bigg\{\frac{12r(r+a)^{3}}{a^{4}}\ln\bigg(\frac{r+a}{r}\bigg)-\frac{r}{r+a}\bigg[25+52\frac{r}{a}+42\Big(\frac{r}{a}\Big)^{2}+12\Big(\frac{r}{a}\Big)^{3}\bigg]\bigg\}.\] (B30)
The corresponding angular velocity is given by
\[\Omega^{2}=\frac{\sigma_{v_{T}}^{2}}{r^{2}}=\frac{2\sigma_{v_{r}}^{2}}{r^{2}},\] (B32)
with \(\sigma_{v_{T}}^{2}\equiv\sigma_{v_{\theta}}^{2}+\sigma_{v_{\phi}}^{2}\) being the transverse velocity dispersion. In fact, for a spherical system, \(\sigma_{v_{r}}^{2}=\sigma_{v_{\theta}}^{2}=\sigma_{v_{\phi}}^{2}\), so that \(\sigma_{v_{T}}^{2}=2\sigma_{v_{r}}^{2}\). The single particle GW power loss is thus given by
\[P_{\rm m}(r)=\frac{32Gm^{2}}{5c^{5}}\frac{8(\sigma_{v_{r}}^{2})^{3}}{r^{2}}.\] (B33)
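For a feel of the numbers entering Figure 4, the following sketch evaluates the closed-form dispersion (B30) and the single-particle power (B33) for an illustrative \(10^{12}M_{\odot}\) halo with an assumed scale length of 30 kpc and a 1 TeV DM particle; these parameter choices are examples, not the exact values used in the figure.

```python
# Illustrative evaluation of Eq. (B33): the closed-form Hernquist dispersion
# (B30) is inserted into the single-particle GW power for a 1e12 Msun halo.
# Scale length a ~ 30 kpc and a 1 TeV particle mass are assumptions made
# only for the sake of the example.
import numpy as np

G, c = 6.674e-11, 2.998e8
Msun, kpc = 1.989e30, 3.086e19
M, a = 1e12 * Msun, 30.0 * kpc
m_dm = 1e12 * 1.783e-36                      # 1 TeV in kg

def sigma_r2(r):
    """Hernquist radial velocity dispersion, Eq. (B30)."""
    x = r / a
    term1 = 12.0 * r * (r + a) ** 3 / a**4 * np.log((r + a) / r)
    term2 = r / (r + a) * (25.0 + 52.0 * x + 42.0 * x**2 + 12.0 * x**3)
    return G * M / (12.0 * a) * (term1 - term2)

def P_m(r):
    """Single-particle GW power, Eq. (B33)."""
    return 32.0 * G * m_dm**2 / (5.0 * c**5) * 8.0 * sigma_r2(r) ** 3 / r**2

for r in [0.1 * a, a, 10.0 * a]:
    print(f"r = {r/kpc:6.1f} kpc   P_m = {P_m(r):.3e} W")
```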
In Figure 4 we show that for a DM particle of mass \(1\,\mathrm{TeV}\) and an initial seed BH of mass \(10^{6}M_{\odot}\) the Hawking radiation timescale is longer than the inspiral time for the entire galaxy, and hence this structure will inspiral and get absorbed by the central BH. In contrast, a central initial seed BH of only \(10^{3}M_{\odot}\) will evaporate away before a fraction \(10^{-6}\) of the galaxy has inspiraled onto the BH. This implies that all subsequent inspiraling DM will eventually evaporate through Hawking radiation, until the remaining DM halo is sufficiently dilute that it will disperse through the accelerated expansion of the Universe, and hence a significant fraction of the DM particles will remain as DM particles in the expanding Universe.
## Appendix C Evaporation vs. Dynamical Friction
It has almost become "common knowledge" that gravitational 2-body interactions lead to effectively relaxed systems, which implies a slow but steady evaporation of particles from the system (Spitzer, 1940). The argument is that the 2-body interactions lead to an exponential distribution of energies, and since any exponential will have a high-energy tail beyond the system's escape velocity, then this implies that particles will evaporate. This conclusion is, however, incorrect, as we will show now, since it ignores another important gravitational effect: Dynamical Friction (DF) (Chandrasekhar, 1943).
The relaxation time arises from long-range encounters causing a cumulative diffusion of a star's velocity. It is frequently estimated by following the trajectory of a subject star with initial velocity \(v\), as it passes a field star with impact parameter \(b\). The acceleration from the field star gives the subject star a perpendicular velocity of the order \(\delta v=2Gm/(bv)\) (Binney & Tremaine, 2008). If we consider a large spherical structure with radius \(R\) and \(N\) particles each with mass \(m\), then we can calculate the number of long-range encounters during one crossing. Each encounter produces a small perturbation to the subject star's velocity, and since these are independent of each other we can add the \(\delta v^{2}\) linearly. Hereby one can integrate over all impact parameters to find
\[\Delta v^{2}\approx 8N\left(\frac{Gm}{Rv}\right)^{2}\,{\rm log}\Lambda\] (C34)
where the Coulomb logarithm comes from the maximum and minimum impact parameters \(b_{\rm max}\sim R\) and \(b_{\rm min}\sim R/N\), giving \({\rm log}\Lambda\sim{\rm log}N\). It is important to keep in mind that the standard trick in numerical N-body simulations of including a softening merely leads to a slightly bigger value
Figure 4: Timescales for inspiral and Hawking radiation as a function of the fraction of the galaxy. The four dashed lines show the dependence on the mass of the DM particle, where the upper-most curve are for the lightest DM particle (\(m_{\rm DM}=10^{-14}{\rm GeV}\)) and the lowest curve is the most massive (\(m_{\rm DM}=10^{12}{\rm GeV}\)). The four solid lines show the Hawking radiation timescale dependence of the initial seed BH mass. If the initial seed BH is small (lowest curve, \(1M_{\odot}\)) then the Hawking radiation timescale is short, whereas a supermassive BH initial seed of \(10^{11}M_{\odot}\) leads to very long radiation timescales (uppermost solid curve).
for \(b_{\rm min}\), which only enters the expression through the \(\log\Lambda\). A typical velocity is given by
\[v^{2}=\frac{GNm}{R}\,,\] (C35)
and we hence have
\[\frac{\Delta v^{2}}{v^{2}}\approx\frac{8\log N}{N}\,,\] (C36)
which implies that after \(\frac{N}{8\log N}\) crossings the total energy exchange is of the same level as the initial energy (the star's orbit has been completely randomized), and this gives the result
\[t_{\rm relax}=\frac{N}{8\log N}\,t_{\rm cross}\,.\] (C37)
This effect is possibly most famous for Globular clusters, where \(N\sim 10^{5}\) and crossing times of Myrs makes this 2-body relaxation important given the age of the globular clusters. If these repeated encounters set up a Maxwellian distribution of velocities, then the high-energy tail will contain particles moving beyond the escape velocity, and these particles will hence evaporate. Given the small number of particles in the high-energy tail, one often expects that the entire cosmological structure may evaporate at time-scales around 100 times the relaxation time (Spitzer 1940).
There is, however, another gravitational effect, which also must be included, namely the Dynamical Friction (DF). This effect is often interpreted through the gravitational focusing behind the particle's path, which slows the particle down, and hence transfers energy from the rapidly moving particles to the slow ones. By integrating over impact parameters, the acceleration is often written using Chandrasekhar's expression (Chandrasekhar, 1943a)
\[\frac{d\vec{v}_{M}}{dt}=-16\pi^{2}\,G^{2}m\,\left(m+M\right)\,\log N\,\frac{ \vec{v}_{M}}{v_{M}^{3}}\,\int_{0}^{v_{M}}f(v_{m})v_{m}^{2}dv_{m}\,,\] (C38)
where the subject star has mass \(M\) and the field stars have mass \(m\). From this formula it is clear that only the slower moving field particle contribute to slowing the subject particle down. For a rapidly moving subject particle the integral over the field particles is just the number density \(\int_{0}^{\infty}f(v_{m})v_{m}^{2}dv_{m}=n/(4\pi)\,\) and hence the magnitude of the acceleration can be written as
\[\frac{dv_{M}}{dt}=8\pi\left(\frac{Gm}{v_{M}}\right)^{2}\,\log N\,n\] (C39)
where we used \(M=m\) when considering only DM particles. To make the comparison with the relaxation time as explicit as possible, we will again consider a sphere of radius \(R\) with \(N\) particles of mass \(m\), where a typical velocity is still given by \(v^{2}=GmN/R\). If we are considering a fast-moving particle, then we can ask how many crossings (of crossing time \(\tau_{\rm cross}=R/v\)) the particle needs in order to reduce its velocity by of order \(v\)
\[\frac{dv_{M}}{dt}\,n_{\rm cross}\,\tau_{\rm cross}\approx v\,,\] (C40)
which is solved by
\[n_{\rm cross}^{-1}\approx\frac{6\log N}{N}.\] (C41)
Comparing with Eq. C36, we thus see that the timescale for reducing the velocity of fast-moving particles is the same (within a factor of \(3/4\)) as that of evaporation.
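A quick numerical comparison of the two crossing counts, Eqs. (C37) and (C41), for a few representative particle numbers (the values of \(N\) are illustrative):

```python
# Illustration of the comparison between Eqs. (C37) and (C41): the number of
# crossings for two-body relaxation versus the number needed for dynamical
# friction to damp a fast particle, e.g. for a globular-cluster-like N ~ 1e5.
import numpy as np

for N in [1e3, 1e5, 1e10]:
    n_relax = N / (8.0 * np.log(N))   # crossings to randomize an orbit
    n_df = N / (6.0 * np.log(N))      # crossings for DF to damp a fast particle
    print(f"N = {N:.0e}:  relaxation ~ {n_relax:.2e} crossings, "
          f"DF damping ~ {n_df:.2e} crossings (ratio {n_relax/n_df:.2f})")
```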
The process of relaxation/evaporation is a stochastic process, whereas DF has a systematic decelerating effect. Any given particle which happens to have a velocity slightly larger than the field particles will therefore have its velocity reduced by DF faster than the statistical process of relaxation can push it beyond the escape velocity.
The inclusion of DF in the calculation of stellar evaporation was first studied in (Chandrasekhar, 1943b) by considering the stochastic process of relaxation as a diffusion process. The conclusions of (Chandrasekhar, 1943b) were also that the effect of DF is crucial to include in order to calculate evaporation, even though the paper (Chandrasekhar, 1943b) works under the assumption of Gaussian distributions of velocities, which is today known to be incorrect long before the onset of effects of both relaxation and DF (Hansen et al., 2006). It is expected that the very rapid process of violent relaxation (Binney & Tremaine, 2008) is responsible for the appearance of the non-exponential shape of the velocity distribution function with no high-energy particles. As shown in Figure 4, the stochastic appearance of high-energy particles will immediately be damped by DF, and hence no DM particles will evaporate. This calculation only considers \(N=10^{3}\) particles, and the effect of DF is only calculated accurately for the high-energy tail of the energy distribution (the bulk of the particles have their energies adjusted accordingly to assure energy conservation in each time-step), and thus a more careful calculation is needed in order to address complicated dynamical systems like Globular clusters. The above argument (and simple calculation) is here partly used as an argument for why we may allow the "initial seed" BH to cover everything from a single star to the mass of the entire collection of stars.
It is a pleasure to thank the referee for very constructive suggestions which improved the paper. SHH thanks Jens Hjorth and Radek Wojtak for interesting discussions.
|
2306.14169
|
A Web-based Mpox Skin Lesion Detection System Using State-of-the-art
Deep Learning Models Considering Racial Diversity
|
The recent 'Mpox' outbreak, formerly known as 'Monkeypox', has become a
significant public health concern and has spread to over 110 countries
globally. The challenge of clinically diagnosing mpox early on is due, in part,
to its similarity to other types of rashes. Computer-aided screening tools have
been proven valuable in cases where Polymerase Chain Reaction (PCR) based
diagnosis is not immediately available. Deep learning methods are powerful in
learning complex data representations, but their efficacy largely depends on
adequate training data. To address this challenge, we present the "Mpox Skin
Lesion Dataset Version 2.0 (MSLD v2.0)" as a follow-up to the previously
released openly accessible dataset, one of the first datasets containing mpox
lesion images. This dataset contains images of patients with mpox and five
other non-mpox classes (chickenpox, measles, hand-foot-mouth disease, cowpox,
and healthy). We benchmark the performance of several state-of-the-art deep
learning models, including VGG16, ResNet50, DenseNet121, MobileNetV2,
EfficientNetB3, InceptionV3, and Xception, to classify mpox and other
infectious skin diseases. In order to reduce the impact of racial bias, we
utilize a color space data augmentation method to increase skin color
variability during training. Additionally, by leveraging transfer learning
implemented with pre-trained weights generated from the HAM10000 dataset, an
extensive collection of pigmented skin lesion images, we achieved the best
overall accuracy of $83.59\pm2.11\%$. Finally, the developed models are
incorporated within a prototype web application to analyze uploaded skin images
by a user and determine whether a subject is a suspected mpox patient.
|
Shams Nafisa Ali, Md. Tazuddin Ahmed, Tasnim Jahan, Joydip Paul, S. M. Sakeef Sani, Nawsabah Noor, Anzirun Nahar Asma, Taufiq Hasan
|
2023-06-25T08:23:44Z
|
http://arxiv.org/abs/2306.14169v1
|
A Web-based Mpox Skin Lesion Detection System Using State-of-the-art Deep Learning Models Considering Racial Diversity
###### Abstract
The recent 'Mpox' outbreak, formerly known as 'Monkeypox', has become a significant public health concern and has spread to over 110 countries globally. The challenge of clinically diagnosing mpox early on is due, in part, to its similarity to other types of rashes. Computer-aided screening tools have been proven valuable in cases where Polymerase Chain Reaction (PCR) based diagnosis is not immediately available. Deep learning methods are powerful in learning complex data representations, but their efficacy largely depends on adequate training data. To address this challenge, we present the "Mpox Skin Lesion Dataset Version 2.0 (MSLD v2.0)" as a follow-up to the previously released openly accessible dataset, one of the first datasets containing mpox lesion images. This dataset contains images of patients with mpox and five other non-mpox classes (chickenpox, measles, hand-foot-mouth disease, cowpox, and healthy). We benchmark the performance of several state-of-the-art deep learning models, including VGG16, ResNet50, DenseNet121, MobileNetV2, EfficientNetB3, InceptionV3, and Xception, to classify mpox and other infectious skin diseases. In order to reduce the impact of racial bias, we utilize a color space data augmentation method to increase skin color variability during training. Additionally, by leveraging transfer learning implemented with pre-trained weights generated from the HAM10000 dataset, an extensive collection of pigmented skin lesion images, we achieved the best overall accuracy of 83.59 ± 2.11%. Finally, the developed models are incorporated within a prototype web application to analyze uploaded skin images by a user and determine whether a subject is a suspected mpox patient.
Computer-aided diagnosis, Skin lesion detection, mpox, Deep learning.
## I Introduction
The global outbreak of the virus previously called 'Monkeypox', now referred to as mpox, has caused widespread concern over the past year and continued to be a major topic in public health news headlines. In July 2022, the World Health Organization (WHO) declared it a Public Health Emergency of International Concern (PHEIC) due to the significant risk associated with the virus [1]. The World Health Network (WHN) has also emphasized the need for coordinated global action to combat the spread of the disease, given its potential for deadly outcomes [2]. Recent epidemiological data indicate that the mpox outbreak is slowing down in the American and European regions, while the transmission is still ongoing in African regions [3]. In May 2023, WHO announced that mpox is no longer classified as a PHEIC [4]. Despite this, the study of mpox remains relevant, as there is a looming threat of another possible multi-country outbreak.
Mpox is a dsDNA virus that originates from the _Poxviridae_ family and _Orthopoxvirus_ genus [5]. Monkeys, rodents, squirrels, non-human primates, and several other animal species have been identified as primary vectors for transmission [5]. Since its first confirmed human case, in 1970, in the Democratic Republic of Congo (DRC), the human-to-human transmission of mpx has come to notice and is marked as endemic in the tropical rainforest region of Africa [5]. Since January 2022, mpx has been reported in 110 member states from six WHO regions, with 86,496 laboratory-confirmed cases and 111 deaths reported as of March 13, 2023 [3]. Most of these cases have been reported in nations with no history of mpx transmission [6]. While mpx cases peaked during July-August 2022 and have since declined, it is crucial to establish frameworks that can be readily applied to diagnosis and screening of mpx in case of its reappearance, given the recurrence of health threats by viral species responsible for communicable diseases, such as SARS-CoV, MERS-CoV, and SARS-CoV2 [7, 8].
Mpox disease symptoms closely resemble the rashes of other diseases such as chickenpox, measles, rickettsialpox, smallpox, and hand-foot-mouth disease [5]. From case reports and demographics, it also appears to be a relatively rare disease among Asians, native Hawaiians, and other Pacific Islanders [9]. These factors, along with the inadequate Polymerase Chain Reaction (PCR) testing facilities in many countries, pose significant challenges for healthcare professionals. In this scenario, AI-based automated computer-aided systems may substantially contribute to resolving the core impediments in the way of rapid and accurate initial screening of mpox.
In recent years, the multi-faceted applications of deep learning (DL), particularly the variations of Convolutional Neural Networks (CNNs), have revolutionized different fields of medical science due to their superior learning capability compared to conventional machine learning techniques [10; 11]. When trained with ample data, these networks can automatically extract salient features from images, creating optimal representations for specified tasks [12; 13]. However, the necessity for large amounts of data and time-consuming training with specialized computational resources hinders the applicability of DL-based frameworks [14]. While accelerators (e.g., GPU, TPU) can resolve time and resource-related issues, obtaining unbiased and homogeneous clinical data remains challenging. One well-known method of increasing dataset size is data augmentation [15], which generates additional samples through slight modifications of existing data. In cases of data scarcity, transfer learning [14] is commonly used, where a CNN model pre-trained on a large dataset (e.g., ImageNet [16]) transfers knowledge for context-specific learning on a smaller dataset.
Inspired by the superior performance of the DL algorithms across different domains, research groups worldwide have attempted to create datasets containing mpox skin lesion images to train effective learning algorithms [17; 18; 19; 20]. Our research group was one of the first to release a dataset with mpox lesion images, "Monkeypox Skin Lesion Dataset (MSLD)"1, containing web-scraped images of patients (validated through Google's Reverse Image Search) categorized into two classes: 'monkeypox' and 'non-monkeypox' (measles, chickenpox). However, several issues plague most of these datasets, including ours: a few images were mislabeled due to the lack of professional scrutiny by dermatologists, and some images were of poor quality with watermarks and distortions. Although the instant release of such unscrutinized datasets was essential during the initial surge of mpox cases, now that the cases have come under control, ensuring the clinical soundness of the data through expert verification and feedback incorporation is crucial for developing effective algorithms to tackle any future cases of monkeypox or diseases with similar skin lesions.
Footnote 1: [https://github.com/mHealthBuet/Monkeypox-Skin-Lesion-Dataset](https://github.com/mHealthBuet/Monkeypox-Skin-Lesion-Dataset)
In this paper, we introduce an updated version of our previously released dataset, "Mpox Skin Lesion Dataset Version 2.0 (MSLD v2.0),"2 a publicly available dataset consisting of web-scraped images of patients with mpox and non-mpox cases, including chickenpox, measles, cowpox, hand-foot-mouth disease, and healthy individuals. The images are taken from different body parts, such as the face, neck, hand, arm, and leg. Our preliminary study explores the potential of deep learning models for the early detection of mpox disease, leveraging transfer learning on various architectures, including VGG16 [21], ResNet50 [22], DenseNet121 [23], MobileNetV2 [24], EfficientNetB3 [25], InceptionV3 [26] and Xception [27]. Furthermore, we have developed a web application3 using the open-source Streamlit framework that analyzes uploaded images and predicts whether the subject is a suspected mpox patient requiring urgent consultation with a physician. Our current working pipeline is illustrated in Fig. 1.
Footnote 2: [https://github.com/mHealthBuet/Mopx-Skin-Lesion-Dataset-v2](https://github.com/mHealthBuet/Mopx-Skin-Lesion-Dataset-v2)
Footnote 3: [https://skinlesionclassiferibmyehealthlab.streamlit.app/](https://skinlesionclassiferibmyehealthlab.streamlit.app/)
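As an illustration of the transfer-learning setup described above, the following sketch fine-tunes one of the benchmarked backbones (MobileNetV2 with ImageNet weights) with a new six-class head; the directory layout, image size and hyperparameters are assumptions for the example and do not reflect the exact training configuration (in particular, the HAM10000-derived initialization used for the best result is not reproduced here).

```python
# Minimal transfer-learning sketch for the 6-class skin-lesion task, using
# MobileNetV2 (one of the benchmarked backbones) with ImageNet weights.
# Directory layout, image size and hyperparameters are assumptions for
# illustration; they are not the exact configuration used in the paper.
import tensorflow as tf

IMG_SIZE, NUM_CLASSES = (224, 224), 6

train_ds = tf.keras.utils.image_dataset_from_directory(
    "msld_v2/train", image_size=IMG_SIZE, batch_size=32, label_mode="categorical")

base = tf.keras.applications.MobileNetV2(
    input_shape=IMG_SIZE + (3,), include_top=False, weights="imagenet")
base.trainable = False                      # freeze backbone for the first stage

inputs = tf.keras.Input(shape=IMG_SIZE + (3,))
x = tf.keras.applications.mobilenet_v2.preprocess_input(inputs)
x = base(x, training=False)
x = tf.keras.layers.GlobalAveragePooling2D()(x)
x = tf.keras.layers.Dropout(0.3)(x)
outputs = tf.keras.layers.Dense(NUM_CLASSES, activation="softmax")(x)

model = tf.keras.Model(inputs, outputs)
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="categorical_crossentropy", metrics=["accuracy"])
model.fit(train_ds, epochs=10)
```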
"Skin color bias," also known as the "white lens problem", has long been an issue in AI-based diagnosis for skin lesion images [13]. Most benchmark datasets, such as HAM10000 [28], ISIC challenge dataset 2018 [29], and PH2 (Prado Hospital 2) dataset [30], consist primarily of images of individuals with lighter skin tones, leading to inaccuracies in diagnoses for individuals with darker skin tones. However, mopx is prevalent in African regions; consequently, most of the mopx data came from dark-skinned individuals, potentially introducing a bias opposite to the "white lens problem." Additionally, the non-mopx classes had a comparatively higher
Figure 1: A flow diagram of the proposed mpox detection system. A prototype web-app is developed that incorporates the best-performing deep learning model to detect mpox from skin lesions uploaded by users.
number of samples with lighter skin tones. To reduce the impact of skin tone bias in our dataset, we adopted a recently proposed skin-color agnostic color-space augmentation before classification [31]. This technique simulates the effect of a diverse range of skin tones and ethnicities, helping to mitigate any bias in our dataset.
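The exact augmentation of [31] is not reproduced here, but the general idea can be sketched as a random hue/saturation/value jitter applied in an alternative color space, so that the same lesion is presented to the network under a range of apparent skin tones; the parameter ranges and file path below are illustrative assumptions.

```python
# A hedged sketch of the general idea behind color-space augmentation for
# skin-tone variability: jitter hue/saturation/value in HSV space so that
# the same lesion is seen under a range of apparent skin tones. This is a
# generic illustration, not the exact method of reference [31].
import numpy as np
import cv2

def skin_tone_jitter(image_bgr, hue_shift=10, sat_scale=(0.7, 1.3), val_scale=(0.8, 1.2)):
    """Randomly perturb an 8-bit BGR image in HSV space."""
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV).astype(np.float32)
    hsv[..., 0] = (hsv[..., 0] + np.random.uniform(-hue_shift, hue_shift)) % 180
    hsv[..., 1] *= np.random.uniform(*sat_scale)
    hsv[..., 2] *= np.random.uniform(*val_scale)
    hsv = np.clip(hsv, 0, [179, 255, 255]).astype(np.uint8)
    return cv2.cvtColor(hsv, cv2.COLOR_HSV2BGR)

# Example: augment one training image several times (placeholder path).
img = cv2.imread("msld_v2/train/mpox/example.jpg")
if img is not None:
    augmented = [skin_tone_jitter(img) for _ in range(5)]
```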
The main contributions of this paper are summarized below:
* We introduce the Mpox Skin Lesion Dataset Version 2.0 (MSLD v2.0) containing web-scraped skin lesion images of mpox and non-mpox patients.
* We explore the potential of DL-based models, including VGG16, ResNet50, DenseNet121, MobileNetV2, EfficientNetB3, InceptionV3, and Xception architectures for early detection and screening of mpox from skin lesion images.
* We adopt a skin-color agnostic color-space augmentation method to improve the skin tone variability in the training dataset and thus increase the generalizability of the model to various patient ethnicities.
* We developed a web-app capable of predicting whether a subject is a potential mpox suspect and should consult a physician or not.
The remainder of the paper is organized as follows. In Sec. II, we present a brief background on different stages of mpox lesions. Sec. III contains a brief review of the relevant literature. Sec. IV provides a detailed description of dataset development and its acquisition procedure. Sec. V outlines the experiments performed on the dataset and the results associated with this. The web-app description is presented in Sec. VI. Finally, Sec. VII and VIII summarize this work's contributions and discuss future directions.
## II Background
Mpox is a typically self-limited disease, with symptoms lasting between 2 and 4 weeks. The severity of the disease generally depends on the extent of virus exposure, the health status of the patient, and the complications. The disease tends to affect children more severely. The virus has an incubation period that ranges from 5 to 21 days [5]. During the invasion period (which lasts from 0 to 5 days), patients commonly experience symptoms such as fever, lymphadenopathy (swollen lymph nodes), myalgia (muscle ache), asthenia (physical weakness), and severe headache. The rash begins within 1-3 days of fever onset and is usually noticed on the face, palms of the hands, and soles of the feet [5]. In the skin eruption phase (2-4 weeks), the lesions follow a four-stage progression: macules (lesions with a flat base) develop into papules (raised, firm, and painful lesions), which then become vesicles (filled with clear fluid) and finally pustules (filled with pus) before encrustation. Consequently, the lesions may appear slightly different as they progress through these stages (see Fig. 2).
## III Related Work
Classifying different types of skin lesions is a challenging problem due to high inter-class similarity and intra-class variability [32]. In addition, the lack of available skin lesion data has been the primary challenge in developing deep-learning models for mpox detection. Thus, when the mpox cases surged in 2022, researchers mainly focused on developing a reliable mpox skin image dataset with several non-mpox classes that have high similarity with mpox in terms of the appearance and characteristics of the rashes. After several such web-scrapped datasets were released, studies on bench-marking and developing deep learning architectures for mpox detection emerged. Thus, our literature survey concentrates on two primary areas: (i) the development of mpox skin lesion datasets and (ii) the evolution of classification techniques.
In terms of dataset development, as mentioned in Sec. I, we have previously introduced the first version of our mpox dataset as MSLD v1.0 [33], which contains images belonging to two classes: 102 'mpox' images and 126 'non-mpox' (chickenpox and measles) images. During the same period, Ahsan _et al._ published one of the first few datasets on mpox skin lesion images [17]. The dataset contains skin images of 4 classes: 43 mpox, 47 chickenpox, 17 measles, and 54 healthy. Subsequently, the "Monkeypox Skin Images Dataset (MSID)" was released on Kaggle consisting of images from the same four classes, i.e., mpox (279 images), chickenpox (107 images), measles (91 images), and healthy (293 images) [19]. Immediately after the release of MSID, Islam _et al._ published another dataset, which was previously available on Kaggle, containing images labeled for six classes: mpox (117 images), chickenpox (178 images), smallpox (358 images), cowpox (54 images), measles (47 images), and healthy (50 images) [18]. However, none of these datasets were verified by a dermatologist, which increases the risk of mislabeling images, leading to erroneous disease identification. Several other limitations include the presence of watermarks, poor aspect ratio, and irrelevant images due to improper filtering in the web-scraping process. It is also important to note that the 'smallpox' class in the dataset released in [18] is error-prone since smallpox has been eradicated. A few other datasets have also been created using our dataset MSLD v1.0 as the foundation and then adding to it [20].
Various classification studies were conducted using the preliminary
Fig. 2: The mpox lesion through its various stages: (a) early vesicle, (b) small pustule (⌀ 2 mm), (c) umbilicated pustule (⌀ 3-4 mm), (d) ulcerated lesion (⌀ 5 mm), (e) crusting of a mature lesion, (f) partially removed scab.
version of MSLD and other available datasets. Sitaula _et al._ performed classification using 13 DL architectures pre-trained on ImageNet weights [34]. They proposed an ensemble of Xception and DenseNet-169 based on performance. The authors also explained the performance of their best-performing model, Xception, using Gradient-weighted Class Activation Mapping (Grad-CAM) and Local Interpretable Model-Agnostic Explanations (LIME). Abdelhamid _et al._ proposed two algorithms for improved classification results [35]. The first uses the GoogleNet architecture based on transfer learning for feature extraction, and the second approach consists of a binary hybrid algorithm for feature selection and a hybrid algorithm for optimizing the neural network's parameters. The Al-Biruni Earth radius algorithm, the sine-cosine algorithm, and the particle swarm optimization algorithm are examples of meta-heuristic optimization algorithms that are used for feature selection and parameter optimization. Ahsan _et al._ proposed a modified VGG16 model for classification and explained the feature extraction of the model using LIME [36]. Islam _et al._ used 5-fold cross-validation to fine-tune seven DL models with pre-trained weights from ImageNet on their dataset [37]. Alakus _et al._ constructed a deep learning algorithm to categorize the DNA sequences of the MPV and HPV viruses that cause mpox and warts, respectively [38]. The findings revealed an average accuracy of 96.08% and an F1-score of 99.83%, demonstrating that the two diseases can be correctly identified based on their DNA sequences. Sahin _et al._ created an Android mobile app that uses deep learning to aid in detecting mpox using the DL architectures EfficientNetB0 and MobileNetV2 [39]. Haque _et al._ attempted to integrate deep transfer learning-based methods and a convolutional block attention module (CBAM) to focus on the relevant portion of feature maps to conduct an image-based classification of mpox [40]. They used the DL architectures VGG19, Xception, DenseNet121, EfficientNetB3, and MobileNetV2. Their proposed model, XceptionCBAM-Dense, was reported to achieve 83.89% accuracy on our dataset. Kumar investigated various deep CNN models with multiple machine learning classifiers for mpox disease diagnosis [41]. For this, bottleneck features of three CNN models, i.e., AlexNet, GoogleNet, and VGG16, are explored with multiple machine learning classifiers such as SVM, KNN, Naive Bayes, Decision Tree, and Random Forest. Yang _et al._ introduced an AI-based mpox detector primarily aimed at handling images taken from resource-constrained devices [20].
## IV Dataset Preparation
During the initial peak outbreak phase of mpox, there was no publicly available dataset for the detection of mpox. Therefore, for the initial feasibility analysis of the AI-based mpox screening system, images of different body parts (face, neck, hand, arm, leg) of patients with mpox and non-mpox (measles, chickenpox) cases were collected from publicly available case reports, news portals, and reliable websites via web-scraping [42] by our research group, and the first version of the "Mpox Skin Lesion Dataset (MSLD)" was released.
### Data Collection Procedure
This current study presents an improved and expanded version of our dataset, "Mpox Skin Lesion Dataset Version 2.0 (MSLD v2.0)". The MSLD v2.0 contains web-scraped images belonging to 6 classes: mpox (284 images), chickenpox (75 images), measles (55 images), cowpox (66 images), hand-foot-mouth disease or HFMD (161 images), and healthy (114 images) for multi-class classification. This retrospective data collection study was approved by our Institutional Review Board (IRB)4 where the informed consent requirement was waived. In most cases, the patient's personal identifying information (including patient history, health, epidemiological information, and co-morbidity) was not disclosed in the original source of the image and thus was confidential. We acknowledge that web-scraped images could be from various sources, including copyrighted data as well as non-copyright, freely usable, reusable, and redistributed public-domain data [43]. Since MSLD v2.0 images are intended to be used only for research purposes, our IRB approved the data collection study under the "fair use" principle provided that the sources are appropriately listed and credited [44]. Our data collection protocol also included a prospective study component for collecting images from mpox patients in our IRB-approved clinical study sites. However, since there were no known cases of mpox in Bangladesh, the prospective study component could not be conducted.
Footnote 4: This study was approved by Ethical Review Committee of Popular Medical College (Ref: PMC/Ehiciac/2023/02).
### Dataset Screening
The collected skin images were processed through a 2-stage screening process. First, the out-of-focus, low-resolution, and low-quality images were discarded, and only the unique images that satisfy the quality requirements were selected. Next, the images were cropped to their region of interest and resized to 224 × 224 pixels while maintaining the aspect ratio. Finally, an expert dermatologist verified the disease label of each skin image. Fig. 3 shows a few image samples from the
Figure 3: Sample images of each class collected from the dataset.
dataset. A detailed distribution of the dataset is provided in Table 1.
### Standard Data Augmentation
In the next stage, to assist in the classification task and improve the generalizability of the learning algorithms, several data augmentation methods, including rotation, translation, reflection, shear, hue, saturation, contrast and brightness jitter, noise, and scaling, were applied to the dataset. Post-augmentation, the number of images increased by approximately 14-fold. These augmented images are also provided in a separate folder in the dataset to ensure the reproducibility of results.
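As an illustration of such a pipeline, the following Python sketch applies one random combination of these operations per call; the parameter ranges, the 13 extra copies per image, and the helper name `augment_once` are assumptions chosen for readability rather than the exact MSLD v2.0 settings.

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)

def augment_once(img: np.ndarray) -> np.ndarray:
    """Apply one random combination of standard augmentations to an RGB image
    (H x W x 3, values in [0, 255]). Parameter ranges are illustrative."""
    out = img.astype(np.float32)
    # Random rotation and translation (reflection padding keeps the frame filled).
    angle = rng.uniform(-20, 20)
    out = ndimage.rotate(out, angle, reshape=False, mode="reflect")
    shift = rng.uniform(-10, 10, size=2)
    out = ndimage.shift(out, shift=(*shift, 0), mode="reflect")
    # Random horizontal reflection.
    if rng.random() < 0.5:
        out = out[:, ::-1, :]
    # Brightness/contrast jitter and additive Gaussian noise.
    out = out * rng.uniform(0.8, 1.2) + rng.uniform(-15, 15)
    out = out + rng.normal(0, 5, size=out.shape)
    return np.clip(out, 0, 255).astype(np.uint8)

def expand(images):
    """Roughly a 14-fold expansion: keep each original and add 13 augmented copies."""
    return [img for x in images for img in ([x] + [augment_once(x) for _ in range(13)])]
```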
### Color Space Augmentation
The prevalence of skin-tone bias in skin image datasets for automated skin detection is a well-known phenomenon, largely due to the imbalance in the distribution of training samples that tend to represent lighter skin tones [13]. This bias is also evident in our dataset, as most of the images within classes such as Chickenpox, Cowpox, Measles, and HFMD primarily depict individuals with lighter skin tones. In contrast, the mpox class contains a mixture of individuals with both dark and light skin tones. A recent report from the Centers for Disease Control and Prevention (CDC) highlights the distribution of mpox cases among different racial and ethnic groups, thereby providing further evidence of this imbalance [45]. According to the report, out of all the reported mpox cases, 30.08% are identified as Black or African American, 29.04% as Hispanic or Latino, and 27.65% as White. The distribution of reported mpox cases by race and ethnicity is visually represented in Fig. 4.
Studies have shown that the over-reliance on color information in skin disease detection techniques has imposed limitations due to skin-tone bias in the dataset. To address this issue, we also utilize color space augmentation in the HSV (hue, saturation, and value) space [31]. This augmentation aims to increase the universality of the dataset and reduce racial bias by transforming each image into 180 different versions with varying values in the HSV space. Doing so suppresses the reliance on color cues, and the training procedure is guided toward visual texture and context features. Alg. 1 describes our color space augmentation method.
```
Input : X_t = {x^i}, i = 1, ..., n   (images from the dataset)
Output: X_aug                        (augmented image dataset)

Initialize X_aug = [], X_hue = [], X_sat = [], X_val = []
for i in {1, 2, ..., n} do
    for j in {H_1, H_2, ..., H_final} do
        X_hue <- hue_shift(x^i, j)
        for k in {S_1, S_2, ..., S_final} do
            X_sat <- saturation_scaling(X_hue, k)
            for l in {V_1, V_2, ..., V_final} do
                X_val <- value_scaling(X_sat, l)
                X_aug <- X_aug ∪ X_val
            end for
            Reset X_val = []
        end for
        Reset X_sat = []
    end for
end for
return X_aug
```
**Algorithm 1** Color Space Augmentation
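To make the sweep concrete, a minimal Python/OpenCV sketch of Alg. 1 is shown below. The particular grids of hue shifts and saturation/value scales (6 × 5 × 6 = 180 combinations) are illustrative assumptions that merely match the number of variants reported above, not the exact grid used for MSLD v2.0.

```python
import cv2
import numpy as np

def hsv_augment(img_bgr: np.ndarray,
                hue_shifts=(0, 30, 60, 90, 120, 150),   # OpenCV hue lives in [0, 179]
                sat_scales=(0.6, 0.8, 1.0, 1.2, 1.4),
                val_scales=(0.7, 0.85, 1.0, 1.15, 1.3, 1.45)):
    """Return a list of HSV-jittered versions of one image (here 6*5*6 = 180 variants)."""
    hsv = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2HSV).astype(np.float32)
    augmented = []
    for dh in hue_shifts:                        # hue_shift
        h = hsv.copy()
        h[..., 0] = (h[..., 0] + dh) % 180
        for sk in sat_scales:                    # saturation_scaling
            s = h.copy()
            s[..., 1] = np.clip(s[..., 1] * sk, 0, 255)
            for vk in val_scales:                # value_scaling
                v = s.copy()
                v[..., 2] = np.clip(v[..., 2] * vk, 0, 255)
                augmented.append(cv2.cvtColor(v.astype(np.uint8), cv2.COLOR_HSV2BGR))
    return augmented
```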
Figure 4: Distribution of reported mpox cases by race and ethnicity.
Figure 5: Example of color space augmentation with varying HSV parameters on an mpox skin lesion image.
\begin{table}
\begin{tabular}{c|c|c} \hline \hline
**Class label** & **No. of Original** & **No. of Unique** \\ & **Images** & **Patients** \\ \hline Mpox & 284 & 143 \\ Chickenpox & 75 & 62 \\ Measles & 55 & 46 \\ Cowpox & 66 & 41 \\ Hand, foot and mouth disease & 161 & 144 \\ Healthy & 114 & 105 \\ \hline Total & 755 & 541 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Class distribution of the presented Mpox Skin Lesion Dataset (MSLD) v2.0
## V Experiments and Results
### Experimental Design
Our experimental evaluations are conducted using a five-fold cross-validation framework. The original images are divided into train, validation, and test sets, keeping an approximate distribution of 70:20:10 while preserving patient independence. Only images from the train and validation sets were augmented. The experiments were subdivided into two separate studies. In the first study, only the standard augmentation techniques described in the previous section were applied. In the second study, color space augmentation was performed in combination with the standard augmentation methods, and the resulting changes in performance were investigated. We employed accuracy, precision, recall/sensitivity, and F1-score as our performance metrics.
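Preserving patient independence means that all images of a patient must fall into the same split. A minimal sketch of one way to realize such a grouped 70:20:10 split with scikit-learn is shown below; the `patient_ids` array and the nested use of `GroupShuffleSplit` are assumptions about how such a split could be implemented, not a description of our exact tooling.

```python
import numpy as np
from sklearn.model_selection import GroupShuffleSplit

def patient_level_split(image_paths, labels, patient_ids, seed=0):
    """Split images roughly 70:20:10 into train/val/test without sharing any
    patient across splits. patient_ids[i] identifies the patient of image i."""
    image_paths = np.asarray(image_paths)
    labels = np.asarray(labels)
    patient_ids = np.asarray(patient_ids)

    # First carve out the ~10% test portion by patient groups.
    outer = GroupShuffleSplit(n_splits=1, test_size=0.10, random_state=seed)
    trainval_idx, test_idx = next(outer.split(image_paths, labels, groups=patient_ids))

    # Then split the remainder ~70:20 (2/9 of the remainder goes to validation).
    inner = GroupShuffleSplit(n_splits=1, test_size=2 / 9, random_state=seed)
    train_idx, val_idx = next(inner.split(image_paths[trainval_idx],
                                          labels[trainval_idx],
                                          groups=patient_ids[trainval_idx]))
    train_idx, val_idx = trainval_idx[train_idx], trainval_idx[val_idx]
    return train_idx, val_idx, test_idx
```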
### Pre-trained Networks and Transfer Learning
To evaluate the performance of DL-based classification algorithms on our MSLD v2.0 data, we have selected seven well-known CNN architectures: VGG16 [21], ResNet50 [22], DenseNet121 [23], MobileNetV2 [24], EfficientNetB3 [25], InceptionV3 [26] and Xception [27], pre-trained on the ImageNet dataset. We select these models as they have demonstrated excellent classification performance through transfer learning in various computer vision and medical image analysis tasks. Transfer learning involves training a DL model on a large dataset, known as the 'source dataset', and then utilizing the learned model parameters to initialize training on a relatively smaller 'target dataset'.
In many cases, ImageNet pre-trained models perform satisfactorily well when using transfer learning for image-based classification tasks. However, we hypothesize that model performance may further improve for mpox detection if, instead of ImageNet data, the network is pre-trained using a large skin lesion image dataset. Therefore, to test this hypothesis, we also pre-trained our model using HAM10000 [28], a large open-access skin-lesion dataset. This dataset contains 10,015 images from seven categories of skin lesions, including Melanoma, Melanocytic Nevi, Basal Cell Carcinoma, Actinic Keratosis and Intra-Epithelial Carcinoma, Benign Keratosis, Dermatofibroma, and Vascular Lesions. Experimental results are discussed in the following sections.
### Implementation Details
Input images with dimensions {224, 224, 3} were fed into the selected pre-trained models. The fully connected layers were removed, and all remaining layers were kept trainable. Next, we flattened the backbone model's output and appended three blocks of fully connected (FC) layers with dropout to the network. The FC layers had 4096, 1072, and 256 nodes, respectively, while the corresponding dropout factors were 0.3, 0.2, and 0.15. Finally, an FC layer with six nodes was employed with a softmax activation function for this multi-class classification task.
The network architectures were implemented in Keras and were accelerated using Nvidia K80 GPUs provided by Kaggle notebooks. The batch size was set to 16. The adaptive learning rate optimizer (Adam) with an initial learning rate of \(10^{-5}\) and the categorical cross-entropy loss function was employed for training.
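A minimal Keras sketch of the classifier head described above is given below, assuming DenseNet121 as the backbone and ReLU activations in the FC blocks; the activation choice and data pipeline are assumptions, while the layer widths, dropout factors, optimizer, and loss follow the description.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_classifier(num_classes: int = 6) -> tf.keras.Model:
    # Backbone pre-trained on ImageNet (or HAM10000-adapted weights), top removed,
    # with all layers kept trainable as described above.
    backbone = tf.keras.applications.DenseNet121(
        include_top=False, weights="imagenet", input_shape=(224, 224, 3))
    backbone.trainable = True

    x = layers.Flatten()(backbone.output)
    x = layers.Dense(4096, activation="relu")(x)
    x = layers.Dropout(0.3)(x)
    x = layers.Dense(1072, activation="relu")(x)
    x = layers.Dropout(0.2)(x)
    x = layers.Dense(256, activation="relu")(x)
    x = layers.Dropout(0.15)(x)
    outputs = layers.Dense(num_classes, activation="softmax")(x)

    model = models.Model(inputs=backbone.input, outputs=outputs)
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-5),
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# Usage sketch: model = build_classifier(); model.fit(x_train, y_train, batch_size=16, ...)
```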
\begin{table}
\begin{tabular}{c|c|c|c|c} \hline
**Network** & **Accuracy (\%)** & **Precision** & **Recall** & **F1 score** \\ \hline VGG16 & 75.22\(\pm\) 3.16 & 0.79 \(\pm\) 0.02 & 0.71 \(\pm\) 0.06 & 0.72 \(\pm\) 0.05 \\ ResNet50 & 77.94\(\pm\) 3.87 & 0.79 \(\pm\) 0.05 & 0.76 \(\pm\) 0.07 & 0.76 \(\pm\) 0.06 \\ DenseNet121 & **81.70\(\pm\) 5.39** & 0.83 \(\pm\) 0.04 & 0.79 \(\pm\) 0.06 & 0.80 \(\pm\) 0.06 \\ MobileNetV2 & 76.98\(\pm\) 4.65 & 0.81 \(\pm\) 0.06 & 0.74 \(\pm\) 0.05 & 0.75 \(\pm\) 0.05 \\ EfficientNetB3 & 74.61\(\pm\) 3.94 & 0.75 \(\pm\) 0.06 & 0.71 \(\pm\) 0.06 & 0.72 \(\pm\) 0.06 \\ InceptionV3 & 76.31\(\pm\) 2.66 & 0.79 \(\pm\) 0.04 & 0.72 \(\pm\) 0.03 & 0.74 \(\pm\) 0.03 \\ Xception & 75.74\(\pm\) 6.20 & 0.75 \(\pm\) 0.07 & 0.74 \(\pm\) 0.08 & 0.73 \(\pm\) 0.07 \\ \hline \end{tabular}
\end{table}
Table 2: Performance comparison of different DL models using ImageNet Pre-trained weights for transfer learning. The best two results are shown in red and blue colors.
Figure 6: Classification results with different DL models for transfer learning using ImageNet pre-trained weights.
Figure 7: Classification results with different DL models for transfer learning using HAM10000 pre-trained weights.
### Comparison of Model Initialization Methods
In these experiments, the transfer learning models were initialized using two different approaches. The first employed pre-trained weights from ImageNet data, while the second utilized the HAM10000 skin lesion dataset. The results for the ImageNet-pre-trained models are summarized in Table 2 and Fig. 6. These results show that DenseNet121 yields the best accuracy (\(81.70\pm 5.39\%\)) while ResNet50 also shows competitive performance (\(77.94\pm 3.87\%\)). In the second approach, we pre-trained our models using the HAM10000 skin image dataset and used these model parameters to initialize the transfer learning. The results are summarized in Table 3 and Fig. 7. As anticipated, the performance metrics of most of the architectures improved. The best-performing architecture in the first approach, DenseNet121, yielded \(82.26\%\) accuracy when transfer learning was performed with HAM10000 weights. Moreover, the revised initialization technique reduced the standard deviation of the accuracy metric to \(3.46\%\), indicating its consistency in performance across the five folds.
The Grad-CAM heatmaps in Fig. 9 show that the best-performing model is able to localize and focus on the skin lesions, demonstrating the interpretability of the model.
## VI Mpox Detector Web App
In this work, we have also developed an intuitive and user-friendly web application for online mpox skin disease detection to demonstrate our work. The web application is powered by the best-performing deep learning model presented in this work. The app's front end is built using HTML, CSS, and JavaScript. Upon clicking the upload button, users can easily upload a skin image using their phone's native camera app. With the user's appropriate consent, the uploaded data can be stored on our local server, which can be utilized for retraining the model for improved performance in the future. The prototype of the web application has been developed using the open-source Python Streamlit framework with a Flask core and has been hosted on the Streamlit-provided server for a better user experience. Fig. 10 shows the interface of the current web application. We plan to further improve the DL models and our web application in our future work. In the unfortunate event of another mpox outbreak, we believe such AI-assisted mpox screening tools can benefit disease surveillance and contact tracing.
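A bare-bones Streamlit sketch of such an upload-and-predict flow is shown below; the class names, model path, and preprocessing steps are placeholders for illustration and do not reproduce the deployed application exactly.

```python
import numpy as np
import streamlit as st
import tensorflow as tf
from PIL import Image

CLASS_NAMES = ["Mpox", "Chickenpox", "Measles", "Cowpox", "HFMD", "Healthy"]
model = tf.keras.models.load_model("mpox_classifier.h5")  # placeholder model path

st.title("Mpox skin lesion screening (research prototype)")
uploaded = st.file_uploader("Upload a skin lesion image", type=["jpg", "jpeg", "png"])

if uploaded is not None:
    # Resize to the network input size and run a single forward pass.
    image = Image.open(uploaded).convert("RGB").resize((224, 224))
    st.image(image, caption="Uploaded image")
    batch = np.expand_dims(np.asarray(image, dtype=np.float32), axis=0)
    probs = model.predict(batch)[0]
    top = int(np.argmax(probs))
    st.write(f"Predicted class: **{CLASS_NAMES[top]}** ({probs[top]:.1%} confidence)")
```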
## VII Discussion
In this work, we present our efforts on mpox skin lesion data collection and experimental evaluations of DL-based classification methods with an aim to overcome some of the shortcomings of the previous works in this area. First, we developed an updated version of our dataset, MSLD v2.0, including a few additional classes of infectious skin diseases and additional images for the existing classes of measles, chickenpox, and mpox. The dataset includes 755 images of skin lesions of 541 distinct patients. These additional images improve the models' capabilities to generalize to new cases. Second, in the previous version of our dataset, there was an under-representation of dark skin tone images, possibly affecting the classification algorithm. We used color space augmentation, which reduces racial and regional biases in the dataset, to address this problem. Finally, in our previous works, the pre-trained weights from the ImageNet dataset, which essentially comprises all types of images, were utilized for transfer learning. In our current work, we used pre-trained weights from the HAM10000 dataset containing 10,015 dermoscopic pictures for transfer learning. This strategy improved the model's accuracy, as discussed in the previous section. However, large-scale mpox data collection efforts are still required to develop more generalizable models for mpox disease screening. Our dataset was primarily compiled via web-scraping and thus lacks crucial meta-data, such as the patient's clinical history, the duration of the sickness, and the stage of the disease, which are essential for diagnosis. A more coordinated effort and international collaboration are required to build a larger dataset that can provide generalized results across different demographic regions.
## VIII Conclusion
This study has presented an open-access dataset and a promising approach for automatically detecting mpox from skin lesions using state-of-the-art deep learning architectures. The "Mpox Skin Lesion Dataset (MSLD) v2.0" presented in this study can help researchers advance this field further. The experimental results demonstrate the potential and effectiveness of deep learning-based AI systems on this dataset for early diagnosis of mpox and other infectious diseases. In addition, we have also developed a web application that can play a significant role in public health by allowing people to conduct preliminary screening during the early phases of infection. Future works can focus on expanding the dataset by incorporating data from diverse geographical locations worldwide, reducing the under-representation of particular ethnic groups, and improving the generalizability of the models. In addition, further research on lightweight DL models will enhance the efficiency and ease of deployment in edge devices, improving public access to such AI-assisted
Figure 10: The user interface of the online mpox screening tool.
Figure 9: Example skin lesion images and corresponding heatmaps produced via Grad-CAM using the best performing model.
screening tools. We hope our current efforts will contribute to developing effective AI-powered infectious disease detection and screening systems.
|
2304.02172
|
Dynamic Adversarial Resource Allocation: the dDAB Game
|
This work proposes a dynamic and adversarial resource allocation problem in a
graph environment, which is referred to as the dynamic Defender-Attacker Blotto
(dDAB) game. A team of defender robots is tasked to ensure numerical advantage
at every node in the graph against a team of attacker robots. The engagement is
formulated as a discrete-time dynamic game, where the two teams reallocate
their robots in sequence and each robot can move at most one hop at each time
step. The game terminates with the attacker's victory if any node has more
attacker robots than defender robots. Our goal is to identify the necessary and
sufficient number of defender robots to guarantee defense. Through a
reachability analysis, we first solve the problem for the case where the
attacker team stays as a single group. The results are then generalized to the
case where the attacker team can freely split and merge into subteams.
Crucially, our analysis indicates that there is no incentive for the attacker
team to split, which significantly reduces the search space for the attacker's
winning strategies and also enables us to design defender counter-strategies
using superposition. We also present an efficient numerical algorithm to
identify the necessary and sufficient number of defender robots to defend a
given graph. Finally, we present illustrative examples to verify the efficacy
of the proposed framework.
|
Daigo Shishika, Yue Guan, Jason R. Marden, Michael Dorothy, Panagiotis Tsiotras, Vijay Kumar
|
2023-04-05T00:17:03Z
|
http://arxiv.org/abs/2304.02172v1
|
# Dynamic Adversarial Resource Allocation: the dDAB Game
###### Abstract
This work proposes a dynamic and adversarial resource allocation problem in a graph environment, which is referred to as the dynamic Defender-Attacker Blotto (dDAB) game. A team of defender robots is tasked to ensure numerical advantage at every node in the graph against a team of attacker robots. The engagement is formulated as a discrete-time dynamic game, where the two teams reallocate their robots in sequence and each robot can move at most one hop at each time step. The game terminates with the attacker's victory if any node has more attacker robots than defender robots. Our goal is to identify the necessary and sufficient number of defender robots to guarantee defense. Through a reachability analysis, we first solve the problem for the case where the attacker team stays as a single group. The results are then generalized to the case where the attacker team can freely split and merge into subteams. Crucially, our analysis indicates that there is no incentive for the attacker team to split, which significantly reduces the search space for the attacker's winning strategies and also enables us to design defender counter-strategies using superposition. We also present an efficient numerical algorithm to identify the necessary and sufficient number of defender robots to defend a given graph. Finally, we present illustrative examples to verify the efficacy of the proposed framework.
## 1 Introduction
Deploying resources (robots, sensors, or supplies) to appropriate locations at the appropriate time is a fundamental problem in multi-agent systems, often studied as the multi-robot task allocation (MRTA) problem [1, 2]. In real world settings, resource allocation or MRTA are performed in a dynamically changing environment. Time-varying demand is one of the major sources of dynamics, exemplified by the applications in wireless network [3], ride-sharing [4], power-grid [5], and cloud computing [6].
In this work, we study the dynamic resource allocation problem on a graph, where nodes represent physical locations and edges represent the traversability between those locations. The focus is on transporting the resources effectively in the environment to satisfy demands that change dynamically. Instead of achieving the desired allocation instantly, we require the resources1 to _traverse_ through the environment. Such consideration arises naturally when dealing with embodied agents and resources, such as robots, or autonomous vehicles.
Footnote 1: We use the terms robots and resources interchangeably. The term “player”, however, is reserved for the entity (the defender or the attacker) that determines the allocation of these robots / resources.
To stress the dynamic aspect of the problem, we consider demands that are generated by an adversary. Specifically, we formulate the problem as a dynamic (turn-based) game played between a blue team of defender robots and a red team of attacker robots. The defender team must ensure numerical advantage at every node where the attacker robots are present. Whenever the attacker team has more robots at any node, the attacker team wins the game. In that sense, the demand imposed by the attacker team is a hard constraint that the defender team must continuously satisfy throughout
the game. Note that many other safety-critical applications with dynamic demands (e.g., resilient power grid [7], wildfire surveillance [8], etc.) can be formulated as such a hard-constrained resource allocation problem.
In this work, we consider centralized strategies for both teams. Namely, a coordinator decides the next allocation for its own team and sends instructions to the robots within its team to follow. Consequently, the only intelligent agents are the defender coordinator and the attacker coordinator, which we refer to as the defender and the attacker for simplicity. Our formulation also leads to feedback strategies that re-allocate resources based on the system state (the current allocation of the attacker team and the defender team). The re-allocation is done with all possible next actions of the opposing team in mind. This is a major difference from many prior works on resource allocation in the robotics community, where the focus has been either on achieving a desired terminal allocation that is fixed [9, 10], or on scheduling to satisfy a time-varying but known demand (e.g., multiple traveling salesman problem) [2].
The main contributions of this work are: (i) formulation of a novel resource allocation problem that has high relevance to safety-critical applications; (ii) identification of the critical amount of resources that are necessary and sufficient to guarantee successful defense; (iii) derivation of the corresponding strategies that guarantee the successful defense; and (iv) development of efficient algorithms to construct these strategies.
### Related Work
Population model on graphs:The distributed resource allocation problem over a graph environment was proposed in [9], where the authors developed stochastic control laws that drive the population of robots to a desired distribution to meet a _static_ demand. The theory was later extended to accommodate heterogeneous robots and tasks with more diverse needs [10, 11]. However, the theoretical analysis in these works focused on the steady-state performance of the system, and a more delicate transient response to dynamically changing conditions was ignored. In contrast, our work focuses on the feedback mechanisms for a team to react to external inputs, but with the simplification of being centralized. Our work can be viewed as an "outer loop" that updates the desired allocation in response to adversarial actions, which the distributed control laws in [9] can track as an "inner loop" at a faster time scale.
Dynamic resource/task allocation:The dynamical aspect of the resource allocation problem has been studied in different ways. Scheduling is one such formulation that considers tasks that must be completed in sequence [12]. On top of an efficient allocation algorithm, an adaptation mechanism is proposed in [12] which reacts to robot failures through a "market-based" optimizer to re-allocate the leftover tasks. A distributed resource allocation on a graph environment has also been studied with an adaptation mechanism [13], where the population dynamics are controlled through the adaptation of individual behaviors based on local sensing. These works provide scalable within-team interactions, but the adaptation schemes are purely reactive and do not contain any anticipation of the failures or changes that may occur in the future. In contrast, this paper emphasizes the between-team (defender vs. attacker) strategic interactions, where each team selects its action based on the anticipated optimal reactions from the opposing team.2
Footnote 2: Note that in safety-critical systems, one can model the environment as an adversarial agent/team that seeks to undermine the performance of the deployed system.
Colonel Blotto Games:The static version of the adversarial resource allocation problem is commonly formulated as Colonel Blotto game [14, 15, 16, 17]. In the most standard version [18] of the game, two colonels allocate their resources to multiple locations. Whoever allocated more resource wins that location, and each colonel seeks to maximize the number of locations s/he wins. Many variants of the Colonel Blotto game have been studied, including asymmetric budget [14], asymmetric information [19], etc. However, most of the formulations in the existing literature consider static games, which assume that the desired allocation is achieved instantly and thus ignore the dynamics that are
Figure 1: Illustration of the adversarial resource allocation problem.
involved in the resource transportation. Although more recent works have considered dynamical extensions of Colonel Blotto games [20, 21, 22], their formulation does not capture the transportation of the resources in the environment.
**Preliminary work:** The conference version of this work [23] introduced the _dynamic Defender Attacker Blotto (dDAB) game_ that combines the ideas from Colonel Blotto games [18] and the population dynamics over graphs [10]. The conference version has identified the critical resource ratios (CRR) for a special class of graphs (ring graphs) and proposed a sampling-based algorithm that only provides certificates for the attacker's victory when the algorithm returns a solution. The analysis on the defender side (e.g., necessary and sufficient conditions for the defender's victory, the defender's strategies, etc.) was not fully conducted in [23]. This paper provides a complete characterization of the dDAB game on any given graph.
## 2 Problem Formulation
The dynamic Defender-Attacker Blotto (dDAB) game is played between two players: the defender and the attacker. The environment is represented as a directed graph \(\mathcal{G}=(\mathcal{V},\mathcal{E})\), where the \(N\) nodes represent locations, and the directed edges represent the traversability among those locations. We assume that \(\mathcal{G}\) is strongly connected [9], i.e., every node is reachable from any other node.3 For notational simplicity, we assume that the two players share the same graph, but the present analysis easily extends to the case where the two players have different edge sets.
Footnote 3: The assumption of strongly connected graph is used to avoid the degenerate cases with “sinks” in the graph, which the defender resource cannot get out from once reached. See Figure 13 in Appendix A for an example.
To capture the connectivity among the nodes, we define the graph adjacency matrix \(A\in\mathbb{R}^{N\times N}\) as follows:
\[\left[A\right]_{ij}=\left\{\begin{array}{ll}1&\text{if }(j,i)\in \mathcal{E},\\ 0&\text{otherwise}.\end{array}\right.\]
The _out-degree_ of node \(i\) is denoted as \(d_{i}=\sum_{j}[A]_{ji}\), and its _out-neighbors_ is denoted as \(\mathcal{N}_{i}=\{j\in\mathcal{V}|(i,j)\in\mathcal{E}\}\).
The total amount of resources for the defender and the attacker are denoted by \(X\in\mathbb{R}_{>0}\) and \(Y\in\mathbb{R}_{>0}\), respectively. For some time horizon \(T\), the allocation of the defender's resources over the graph at time \(t=0,1,\ldots,T\) is denoted by the state vector (allocation vector) \(\mathbf{x}_{t}\in\mathbb{R}^{N}\), which lies on a scaled simplex, such that \([\mathbf{x}_{t}]_{i}\geq 0\) and \(\sum_{i}[\mathbf{x}_{t}]_{i}=X\). The state vector (allocation vector) \(\mathbf{y}_{t}\in\mathbb{R}^{N}\) for the attacker also satisfies the same conditions with \(X\) replaced by \(Y\). We use \(\Delta_{X}\) and \(\Delta_{Y}\) to denote the state space of the defender and the attacker. Note that continuous resources (\(\mathbf{x}_{t}\) and \(\mathbf{y}_{t}\) are continuous variables) are considered in this work.4
Footnote 4: Such an assumption on the state vector simplifies the analysis in [9, 10], however, we will later show that our algorithms accommodate states that take discrete values.
The major difference from the original Colonel Blotto game is that the dDAB game is played over multiple time steps, and that the states evolve according to the following discrete-time dynamics:
\[\mathbf{x}_{t+1}=K_{t}\mathbf{x}_{t}\quad\text{and}\quad\mathbf{y}_{t+1}=F_{t }\mathbf{y}_{t}, \tag{1}\]
where \(K_{t}\) and \(F_{t}\) represent the _transition matrices_ for the defender and the attacker, respectively. These matrices are left stochastic (column sum is unity), and their \(ij\)-th entry can take nonzero values only when \([A]_{ij}=1\). These matrices represent the action/control executed by the players. For example, an action \(K_{t}\) of the defender is admissible if and only if it satisfies the following _linear_ constraints:
\[K_{t}^{\top}\mathbf{1}=\mathbf{1}, \tag{2}\] \[[K_{t}]_{ij}\geq 0,\qquad\forall\ i,j\in\mathcal{V}, \tag{3}\] \[[K_{t}]_{ij}=0,\qquad\text{if }[A]_{ij}=0. \tag{4}\]
The entry \([K_{t}]_{ij}\) denotes the fraction of resource on node \(j\) to be transferred to node \(i\) at the next time step. We denote the admissible set for the matrices \(K_{t}\) as \(\mathcal{K}\), which depends only on the underlying graph \(\mathcal{G}\) and is time-invariant. The matrix \(F_{t}\) for the attacker also satisfies similar constraints, and we denote the set of all admissible matrices \(F_{t}\) as \(\mathcal{F}\).5
Footnote 5: Under the assumption that the two players have the same graph, we have \(\mathcal{F}=\mathcal{K}\). For consistency, we still use the notations of \(\mathcal{K}\) and \(\mathcal{F}\) to denote the two action spaces.
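As a quick sanity check of constraints (2)-(4), the following sketch verifies the admissibility of a candidate transition matrix for a given adjacency matrix; the 3-node graph is an assumed toy instance, not one taken from the paper's figures.

```python
import numpy as np

def is_admissible(K: np.ndarray, A: np.ndarray, tol: float = 1e-9) -> bool:
    """Check constraints (2)-(4): columns sum to one, entries are nonnegative,
    and K_ij may be nonzero only where the edge (j, i) exists, i.e. A_ij = 1."""
    column_stochastic = np.allclose(K.sum(axis=0), 1.0, atol=tol)   # (2)
    nonnegative = np.all(K >= -tol)                                 # (3)
    supported = np.all(K[A == 0] <= tol)                            # (4)
    return bool(column_stochastic and nonnegative and supported)

# Assumed toy graph: self-loops plus the directed cycle 1 -> 2 -> 3 -> 1.
A = np.array([[1, 0, 1],
              [1, 1, 0],
              [0, 1, 1]])
K = np.array([[0.5, 0.0, 1.0],
              [0.5, 1.0, 0.0],
              [0.0, 0.0, 0.0]])
print(is_admissible(K, A))  # True: resources stay put or move along existing edges
```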
Similar to the Colonel Blotto games [18], the engagement at each location is modeled solely based on the amount of resources. Specifically, the defender successfully _guards_ a location by allocating at least as many resources as the attacker does, whereas the attacker _breaches_ a location by allocating more than what the defender does. For the dDAB game, the defender wants to prevent the attacker from breaching any location. In this work, we mainly focus on a finite horizon \(T\). The game terminates with the attacker's victory at the earliest time instance \(t\in\{0,\ldots,T\}\) such that
\[[\mathbf{y}_{t}]_{i}>[\mathbf{x}_{t}]_{i}\text{ for some }i\in\mathcal{V}. \tag{5}\]
The defender wins the game if it can prevent the attacker from achieving condition (5) for all \(t\in\{0,\ldots,T\}\). If the defender can prevent (5) for all time horizons \(T\geq 1\), we say that the defender can defend indefinitely.
For the information structure, we assume that the players make decisions in sequence. Specifically, the defender acts first then the attacker acts next, i.e., the attacker selects its action after observing how the defender allocated its resources. The game outcome is evaluated after the attacker's move. To avoid the degenerate scenario where the attacker wins immediately in the first time step, we let the attacker specify its initial allocation \(\mathbf{y}_{-1}\), followed by the defender freely picking its distribution \(\mathbf{x}_{0}\) after observing \(\mathbf{y}_{-1}\). The timeline of the dDAB game is presented in Figure 2. In a realistic scenario where the two players make simultaneous actions, our problem formulation corresponds to a worst-case scenario for the defender. Importantly, our setting accommodates state feedback strategies in contrast to previous results with constant action (transition) matrices [9, 10].
In summary, an instance of dDAB game is defined by: (i) the available resources \(X\) and \(Y\), and (ii) the underlying graph \(\mathcal{G}\). Given a graph, our goal is to identify the necessary and sufficient amount of resource for the defender to win the game. To formalize the goal above, we introduce the following multiplicative factor.
**Definition 1** (Critical Resource Ratio).: _For a given (strongly-connected) graph \(\mathcal{G}\) and a time horizon \(T\), the CRR, \(\alpha_{T}\geq 1\), is the smallest positive number such that, if_
\[X\geq\alpha_{T}Y, \tag{6}\]
_then the defender has a strategy to defend up to time step \(T\) against any admissible attacker strategy that starts at any initial state \(\mathbf{y}_{-1}\in\Delta_{Y}\). We use \(\alpha_{\infty}\) to denote the CRR that enables the defender to defend indefinitely._
The two main questions we address in this work are:
**Problem 1**.: _Given a (strongly-connected) graph and a finite horizon \(T\), what is the CRR \(\alpha_{T}\)?_
**Problem 2**.: _When \(X\geq\alpha_{T}Y\), what is the corresponding defender strategy that guarantees defense over \(T\) time steps? When the defender does not have enough resources, what is the attacker strategy that ensures breaching?_
## 3 Reachable Sets and Required Sets
This section introduces several concepts that are useful for the reachability analysis in the sequel. Since the dynamics of the two players are symmetric, we focus on the analysis of the defender's reachable sets and its action space \(\mathcal{K}\).
### Reachable Sets
There are two major disadvantages of working directly with the action space \(\mathcal{K}\): (i) its higher dimensionality compared to the state space, i.e., \(|\mathcal{E}|\gg|\mathcal{V}|\), and (ii) the nonuniqueness of the action that achieves a transition from \(\mathbf{x}_{t}\) to \(\mathbf{x}_{t+1}\). To avoid these issues, we consider the possible states the defender can reach at the next time step.
**Definition 2** (Reachable Set from a Point).: _The reachable set from a single point \(\mathbf{x}_{t}\), denoted as \(\mathcal{R}(\mathbf{x}_{t})\), is the set of all states that the defender can reach at the next time step with an admissible action. Formally,_
\[\mathcal{R}(\mathbf{x}_{t})=\{\mathbf{x}\mid\exists K\in\mathcal{K}\ \ \text{s.t.}\ \ \mathbf{x}=K\mathbf{x}_{t}\}. \tag{7}\]
**Remark 1**.: _All points in the reachable set satisfy the conservation of resource. That is, for all \(\mathbf{x}_{t+1}\in\mathcal{R}(\mathbf{x}_{t})\), we have that \(\mathbf{1}^{\top}\mathbf{x}_{t+1}=\mathbf{1}^{\top}\mathbf{x}_{t}\)._
To better understand the properties of the reachable sets, we first examine the structure of the action space. Under the linear constraints in (2)-(4), the set of admissible actions \(\mathcal{K}\) is a bounded polytope in the \(|\mathcal{E}|\)-dimensional space. We use the extreme points (vertices) of this polytope to characterize \(\mathcal{K}\).
Given the admissible action space \(\mathcal{K}\), we define the set of _extreme actions_ as
\[\hat{\mathcal{K}}=\big{\{}K\in\mathcal{K}\mid[K]_{ij}\in\{0,1\}\big{\}}. \tag{8}\]
Figure 2: Sequence of events at every time step of the dDAB game. The defender first moves its resources based on the observation of the current attacker allocation. The attacker then observes and reallocates. Finally, the game outcome at this time step is evaluated after the attacker’s move.
In words, \(\hat{\mathcal{K}}\) contains all admissible actions \(K\) whose entries are either 0 or 1. The cardinality of \(\hat{\mathcal{K}}\) is given by \(|\hat{\mathcal{K}}|=\prod_{j\in\mathcal{V}}d_{j}\), where \(d_{j}\) is the out-degree of node \(j\). We use \(\ell\) to index the extreme actions in \(\hat{\mathcal{K}}\), i.e. \(\hat{\mathcal{K}}=\{\hat{K}^{(\ell)}\}_{\ell=1}^{|\hat{\mathcal{K}}|}\). The following theorem reveals the connection between the extreme actions and the admissible action set.
**Theorem 1**.: _The extreme actions defined in (8) are the vertices of the polytope \(\mathcal{K}\). Formally,_
\[\mathcal{K}=\operatorname{Conv}\big{(}\hat{\mathcal{K}}\big{)}. \tag{9}\]
_Consequently, for any admissible action \(K\in\mathcal{K}\), there is a set of non-negative coefficients \(\boldsymbol{\lambda}=\{\lambda^{(\ell)}\}_{\ell=1}^{|\hat{\mathcal{K}}|}\) such that \(\sum_{\ell=1}^{|\hat{\mathcal{K}}|}\lambda^{(\ell)}=1\) and_
\[K=\sum_{\ell=1}^{|\hat{\mathcal{K}}|}\lambda^{(\ell)}\hat{K}^{(\ell)}. \tag{10}\]
Proof.: See Appendix B.
**Remark 2**.: _The extreme action set \(\hat{\mathcal{K}}\) depends only on the graph \(\mathcal{G}\), and it only needs to be constructed once._
The extreme action set for the attacker is denoted as \(\hat{\mathcal{F}}\) and is defined similarly; we use \(\{\hat{F}^{(r)}\}_{r=1}^{|\hat{\mathcal{F}}|}\) to index the elements of \(\hat{\mathcal{F}}\).
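The extreme actions in (8) are straightforward to enumerate: each column \(j\) independently sends all of node \(j\)'s resource to one of its out-neighbors. A short sketch of this enumeration is given below; for the assumed toy graph above it returns \(2\cdot 2\cdot 2=8\) matrices, consistent with \(|\hat{\mathcal{K}}|=\prod_{j}d_{j}\).

```python
import itertools
import numpy as np

def extreme_actions(A: np.ndarray):
    """Enumerate the 0/1 column-stochastic matrices supported on A.
    Their number equals the product of the out-degrees, as noted above."""
    n = A.shape[0]
    # Out-neighbors of node j are the rows i with A[i, j] = 1 (edge j -> i).
    out_neighbors = [np.flatnonzero(A[:, j]) for j in range(n)]
    actions = []
    for choice in itertools.product(*out_neighbors):  # one destination per column
        K = np.zeros((n, n))
        for j, i in enumerate(choice):
            K[i, j] = 1.0
        actions.append(K)
    return actions

# Example: print(len(extreme_actions(A))) prints 8 for the assumed 3-node graph.
```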
#### 3.1.1 Reachable Sets as Polytopes
The reachable set \(\mathcal{R}(\mathbf{x}_{t})\) is, in fact, a polytope in \(\Delta_{X}\), and it can be viewed as a transformation performed on the action space \(\mathcal{K}\). Formally, we have the following lemma, which is a direct result of Theorem 1.
**Lemma 1**.: _Given a point \(\mathbf{x}_{t}\), the reachable set \(\mathcal{R}(\mathbf{x}_{t})\) is a polytope given by \(\mathcal{R}(\mathbf{x}_{t})=\operatorname{Conv}\big{(}\{\hat{K}^{(\ell)} \mathbf{x}_{t}\}_{\ell=1}^{|\hat{\mathcal{K}}|}\big{)}\)._
Proof.: For any \(\mathbf{x}\in\mathcal{R}(\mathbf{x}_{t})\), by definition, there is an action \(K_{t}\in\mathcal{K}\), such that \(\mathbf{x}=K_{t}\mathbf{x}_{t}\). Based on the characterization of \(\mathcal{K}\) in (10), this \(\mathbf{x}\) can be represented as the following convex combination for some \(\boldsymbol{\lambda}\):
\[\mathbf{x}=K\mathbf{x}_{t}=\bigg{(}\sum_{\ell=1}^{|\hat{\mathcal{K}}|}\lambda ^{(\ell)}\hat{K}^{(\ell)}\bigg{)}\mathbf{x}_{t}=\sum_{\ell=1}^{|\hat{\mathcal{ K}}|}\lambda^{(\ell)}\left(\hat{K}^{(\ell)}\mathbf{x}_{t}\right). \tag{11}\]
Define \(\mathbf{v}_{t+1}^{(\ell)}=\hat{K}^{(\ell)}\mathbf{x}_{t}\) to be the state achieved by propagating \(\mathbf{x}_{t}\) with the extreme action \(\hat{K}^{(\ell)}\). Then, the convex hull of these vertices gives us the polytope \(\mathcal{R}(\mathbf{x}_{t})=\operatorname{Conv}\big{(}\{\mathbf{v}_{t+1}^{( \ell)}\}_{\ell=1}^{|\hat{\mathcal{K}}|}\big{)}\), which describes the set of states that the defender at \(\mathbf{x}_{t}\) can achieve at the next time step.
Figure 3 presents an example of the reachable set for a three-node graph. For discrete resources (robots) as illustrated in Figure 3(a), the defender is able to achieve any discrete state (black dots) contained in the reachable set.
Using the same argument, we can compute the attacker reachable set via \(\mathcal{R}(\mathbf{y}_{t})=\operatorname{Conv}\big{(}\{\mathbf{w}_{t+1}^{(r) }\}_{r}\big{)}\), where the vertices are given by \(\mathbf{w}_{t+1}^{(r)}=\hat{F}^{(r)}\mathbf{y}_{t}\) for \(r=1,2,...,|\hat{F}|\).
Figure 3: Illustration of a defender reachable set. (a) A directed graph with three nodes. (b) The defender’s reachable set. The small dots indicate the discrete states if the defender’s resource consists of undividable units/robots as depicted in (a).
Since any state in \(\mathcal{R}(\mathbf{x}_{t})\) can be reached at the next time step from \(\mathbf{x}_{t}\), we view this polytope as the action space for the defender at state \(\mathbf{x}_{t}\). This definition of the action space resolves the two issues raised at the beginning of this section: dimensionality and nonuniqueness.
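Given Lemma 1, the reachable set is easy to compute: its vertices are the finitely many products \(\hat{K}^{(\ell)}\mathbf{x}_{t}\). A self-contained sketch that collects these vertices (without forming the extreme matrices explicitly) is shown below.

```python
import itertools
import numpy as np

def reachable_vertices(A: np.ndarray, x_t: np.ndarray) -> np.ndarray:
    """Vertices of R(x_t) = Conv({K_hat @ x_t}), cf. Lemma 1, returned as rows."""
    n = len(x_t)
    out_neighbors = [np.flatnonzero(A[:, j]) for j in range(n)]
    verts = []
    for choice in itertools.product(*out_neighbors):  # one extreme action per choice
        v = np.zeros(n)
        for j, i in enumerate(choice):
            v[i] += x_t[j]            # equivalent to K_hat @ x_t for this choice
        verts.append(v)
    # Different extreme actions may map x_t to the same point; drop duplicates.
    return np.unique(np.round(np.array(verts), 12), axis=0)

# Example (assumed graph from above): reachable_vertices(A, np.array([3.0, 0.0, 0.0]))
```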
#### 3.1.2 Reachable Sets of Polytopes
We extend the definition of the reachable set of a single point to the reachable set of a (potentially unbounded) set, which will play a significant role in our later analysis of the optimal strategies.
**Definition 3** (Reachable Set from a Set).: _Given a set \(P\subseteq\mathbb{R}_{\geq 0}^{n}\), the reachable set from this set, denoted as \(\mathcal{R}(P)\), is the set of all states that the player can reach at the next time step with an admissible action starting from a state within \(P\). Formally,_
\[\mathcal{R}(P)=\{\mathbf{x}=K\mathbf{x}_{t}\mid K\in\mathcal{K},\;\mathbf{x} _{t}\in P\}. \tag{12}\]
**Lemma 2**.: _Given a polytope \(P\), the reachable set \(\mathcal{R}(P)\) is also a polytope._
Proof.: Due to the resolution theorem [24], any point \(\mathbf{x}_{t}\in P\) can be expressed as
\[\mathbf{x}_{t}=\sum_{r=1}^{R}\theta^{[r]}\mathbf{x}^{[r]}+\sum_{m=1}^{M} \phi^{[m]}\mathbf{h}^{[m]},\]
where \(\{\mathbf{x}^{[r]}\}_{r}\) is the set of vertices of \(P\) and \(\{\mathbf{h}^{[m]}\}_{m}\) is the set of extreme rays. Then, it is straightforward to show that
\[\mathcal{R}(P)=\mathrm{Conv}\big{(}\{\hat{K}^{[\ell]}\mathbf{x}^{[r]}\}_{\ell, r}\big{)}+\mathrm{Cone}\big{(}\{\hat{K}^{[\ell]}\mathbf{h}^{[m]}\}_{\ell,m} \big{)},\]
where \(\mathrm{Cone}\) represents the conic hull of the rays and the summation is a Minkowski sum.
### Required Set
In this subsection, we identify the set of defender states \(\mathbf{x}_{t}\) that leads to immediate termination given the attacker's previous allocation \(\mathbf{y}_{t-1}\). That is, the attacker has an action \(\mathbf{y}_{t}\in\mathcal{R}(\mathbf{y}_{t-1})\) to win the game by breaching at least one node, after observing \(\mathbf{x}_{t}\).
For the defender to defend every location at time \(t+1\), it is necessary and sufficient that the allocation vector \(\mathbf{x}_{t+1}\) matches or outnumbers \(\mathbf{y}_{t+1}\) at every node \(i\):
\[[\mathbf{x}_{t+1}]_{i}\geq[\mathbf{y}_{t+1}]_{i}\quad\forall i\in\mathcal{V}. \tag{13}\]
Since the attacker takes its action after observing the defender's allocation \(\mathbf{x}_{t+1}\), the question is whether there exists \(\mathbf{x}_{t+1}\in\mathcal{R}(\mathbf{x}_{t})\) such that (13) is true for all \(\mathbf{y}_{t+1}\in\mathcal{R}(\mathbf{y}_{t})\). This observation leads to the following condition for selecting \(\mathbf{x}_{t+1}\) to guarantee defense at time \(t+1\):
\[[\mathbf{x}_{t+1}]_{i}\geq\max_{\mathbf{y}_{t+1}\in\mathcal{R}(\mathbf{y}_{t}) }[\mathbf{y}_{t+1}]_{i}\quad\forall i\in\mathcal{V}. \tag{14}\]
Since \(\mathcal{R}(\mathbf{y}_{t})\) is a bounded polytope, for each node \(i\) the optimization \(\max_{\mathbf{y}_{t+1}\in\mathcal{R}(\mathbf{y}_{t})}[\mathbf{y}_{t+1}]_{i}\) can be viewed as a linear program, whose optimum is attained at one of the vertices of \(\mathcal{R}(\mathbf{y}_{t})\). Consequently, we define the minimum required resource at \(t+1\) as \(\mathbf{x}_{t+1}^{\text{req}}\), whose elements are given by
\[[\mathbf{x}_{t+1}^{\text{req}}]_{i}=\max_{r}\left[\mathbf{w}_{t+1}^{(r)} \right]_{i}, \tag{15}\]
where \(\left\{\mathbf{w}_{t+1}^{(r)}\right\}_{r}=\left\{\hat{F}^{(r)}\mathbf{y}_{t} \right\}_{r}\) are the vertices of \(\mathcal{R}(\mathbf{y}_{t})\). Then, the condition in (14) can be expressed in the following (component-wise) vector inequality form
\[\mathbf{x}_{t+1}\geq\mathbf{x}_{t+1}^{\text{req}}. \tag{16}\]
**Remark 3**.: _The defender's minimum required resource at the next time step, \(\mathbf{x}_{t+1}^{\text{req}}=\mathbf{x}_{t+1}^{\text{req}}(\mathbf{y}_{t})\), is a function of the attacker's current state, \(\mathbf{y}_{t}\)._
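Computationally, (15)-(16) reduce to a node-wise maximum over the vertices of \(\mathcal{R}(\mathbf{y}_{t})\) followed by a component-wise comparison. A small self-contained sketch is given below.

```python
import itertools
import numpy as np

def required_resource(A: np.ndarray, y_t: np.ndarray) -> np.ndarray:
    """x^req_{t+1} from (15): node-wise maximum over the vertices F_hat @ y_t."""
    n = len(y_t)
    out_neighbors = [np.flatnonzero(A[:, j]) for j in range(n)]
    x_req = np.zeros(n)
    for choice in itertools.product(*out_neighbors):
        v = np.zeros(n)
        for j, i in enumerate(choice):
            v[i] += y_t[j]            # one attacker extreme action applied to y_t
        x_req = np.maximum(x_req, v)
    return x_req

def defends_next_step(A: np.ndarray, x_next: np.ndarray, y_t: np.ndarray) -> bool:
    """Condition (16): x_{t+1} >= x^req_{t+1} component-wise."""
    return bool(np.all(x_next >= required_resource(A, y_t) - 1e-9))
```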
We claim that the defender can guarantee defense at time \(t+1\) by selecting \(\mathbf{x}_{t+1}\) inside the polytope \(\mathcal{P}_{\text{req}}(\mathbf{y}_{t})\) as follows.
**Definition 4** (Required Set).: _Given the attacker's allocation \(\mathbf{y}_{t}\) at time \(t\), the required set for the defender at time \(t+1\) is defined as:_
\[\mathcal{P}_{\text{req}}(\mathbf{y}_{t})\triangleq\{\mathbf{x}_{t+1}\mid[ \mathbf{x}_{t+1}]_{i}\geq[\mathbf{x}_{t+1}^{\text{req}}(\mathbf{y}_{t})]_{i}, \;\forall\;i\in\mathcal{V}\}. \tag{17}\]
**Proposition 1**.: _The condition \(\mathbf{x}_{t}\in\mathcal{P}_{\text{req}}(\mathbf{y}_{t-1})\) is necessary and sufficient for the defender to defend time step \(t\)._
Proof.: By allocating at least \([\mathbf{x}_{t+1}^{\text{req}}]_{i}\) to node \(i\), the defender ensures that this node is defended against all feasible attacker actions at time step \(t+1\). If the defender allocates \([\mathbf{x}_{t+1}]_{i}<[\mathbf{x}_{t+1}^{\text{req}}]_{i}\), then, after observing the defender's state (allocation), the attacker has a strategy \(\mathbf{y}_{t+1}\in\mathcal{R}(\mathbf{y}_{t})\) to win location \(i\). Thus, \(\mathbf{x}_{t+1}^{\text{req}}\) is the necessary and sufficient amount of resources for the defender to defend all locations at time \(t+1\), given the current attacker resource distribution \(\mathbf{y}_{t}\).
**Remark 4**.: _The required set \(\mathcal{P}_{\text{req}}(\mathbf{y}_{t})\) can be equivalently expressed as_
\[\mathcal{P}_{\text{req}}(\mathbf{y}_{t}) =\{\mathbf{x}_{t+1}\big{|}[\mathbf{x}_{t+1}]_{i}\geq\max_{ \mathbf{y}_{t+1}\in\mathcal{R}(\mathbf{y}_{t})}[\mathbf{y}_{t+1}]_{i},\ \forall i\in\mathcal{V}\}\] \[=\{\mathbf{x}_{t+1}\big{|}[\mathbf{x}_{t+1}]_{i}\geq\max_{F_{t} \in\mathcal{F}}\left[F_{t}\mathbf{y}_{t}\right]_{i},\ \forall i\in\mathcal{V}\}.\]
Given an attacker state \(\mathbf{y}_{t}\), Figure 4 illustrates the intersection between \(\mathcal{P}_{\text{req}}(\mathbf{y}_{t})\) and the defender's state space \(\Delta_{X}\). It is easy to see how the intersection \(\mathcal{P}_{\text{req}}(\mathbf{y}_{t})\cap\Delta_{X}\) can change as a function of the defender's resource \(X\).
Notice that \(X_{t+1}^{\text{req}}=\mathbf{1}^{\top}\mathbf{x}_{t+1}^{\text{req}}\) depends on \(\mathcal{G}\) and \(\mathbf{y}_{t}\). Clearly, the defender does not have a strategy to guarantee defense if \(X_{t+1}^{\text{req}}>X\). This immediately leads to the following result.
**Theorem 2** (Degenerate Parameter Regime [23]).: _Let \(d_{\max}\triangleq\max_{i\in\mathcal{V}}d_{i}\) denote the maximum outdegree of \(\mathcal{G}\). If the total resources satisfy_
\[X<d_{\max}Y, \tag{18}\]
_then the attacker can win the game at time step \(t=0\)._
Proof.: Consider the scenario where the attacker initializes the game with \(\mathbf{y}_{-1}\) that concentrates all its resources at the node with the maximum out-degree. If the defender does not allocate an equal amount or more to every neighboring node at time \(0\), the attacker can immediately win the game by moving all its resources to the neighboring node where the defender's allocation \(\mathbf{x}_{0}\) has less than \(Y\) units of resources, after observing the defender's move.
Based on Theorem 2, the rest of the paper focuses on the case where
\[X\geq d_{\max}Y.\]
## 4 No-Splitting Attacker
This section develops the tools to construct optimal feedback strategies for the defender and the attacker. We first focus on the case where the attacker resources move as a single concentrated group (a blob). In Section 5, we generalize the results to scenarios where the attacker splits its resource into multiple subgroups. Note that throughout this paper, we do not restrict the defender's allocation strategies.
Let \(\mathbf{e}_{i}\in\mathbb{R}^{n}\) be the unit vector with its \(i\)-th element equal to one. In the sequel, we use the shorthand \(\mathbf{y}^{(i)}=Y\mathbf{e}_{i}\) to denote the attacker allocation that is fully concentrated on node \(i\).
**Definition 5** (No-splitting Attacker Strategy).: _A no-splitting attacker strategy concentrates all attacker resources at one node at each time \(t\). That is, for all \(t\), \(\mathbf{y}_{t}=\mathbf{y}^{(i_{t})}\triangleq Y\mathbf{e}_{i_{t}}\) for some \(i_{t}\in\mathcal{V}\).6_
Footnote 6: The no-splitting strategy can be achieved by ensuring that \(\mathbf{y}_{0}=\mathbf{y}^{(i)}\) for some \(i\) and \(F_{t}\in\hat{\mathcal{F}}\) for all time step \(t\), i.e., the attacker always selects its action from the set of _extreme actions_ defined in (8).
### K-step Safe Sets
To generate the defender strategy against _no-splitting_ attacker strategies, we define the following \(k\)-step safe set, later referred to as the Q-set.
**Definition 6** (\(k\)-step Safe Set).: _We define \(\mathcal{Q}_{k}^{(i)}=\mathcal{Q}_{k}^{(i)}(\mathcal{G})\) to be the region in \(\mathbb{R}_{\geq 0}^{n}\) such that for a no-splitting attacker state \(\mathbf{y}_{t-1}=\mathbf{y}^{(i)}\), the defender's state \(\mathbf{x}_{t}\in\mathcal{Q}_{k}^{(i)}\) is necessary and sufficient to defend against any no-splitting attacker strategy until time step \(t+k\) (included).7_
Footnote 7: Specifically, there may exist a no-splitting attacker strategy that wins at time step \(t+k+1\) but not before that.
**Theorem 3**.: _The following recursive expression provides the \(k\)-step safe set:_
\[\mathcal{Q}_{0}^{(i)} =\mathcal{P}_{\text{req}}(\mathbf{y}^{(i)}), \tag{19a}\] \[\mathcal{Q}_{k}^{(i)} =\left\{\mathbf{x}\;\big{|}\;\mathbf{x}\in\mathcal{P}_{\text{req }}(\mathbf{y}^{(i)})\text{ and }\mathcal{R}(\mathbf{x})\cap\mathcal{Q}_{k-1}^{(j)}\neq\varnothing\; \forall j\in\mathcal{N}_{i}\right\}\quad\forall\;k\geq 1, \tag{19b}\]
_where \(\mathcal{N}_{i}\) is the set of out-neighbors of node \(i\)._
Proof.: We break the proof into the following two lemmas, where Lemma 3 is for sufficiency and Lemma 4 is for necessity. Although the proofs are elementary, they provide insights into the mechanisms behind the Q-sets and are also helpful for the construction of the strategies.
**Lemma 3** (Sufficiency of Q-sets).: _Let the Q-sets be defined in (19), and suppose that the attacker starts with \(\mathbf{y}_{t-1}=\mathbf{y}^{(i)}\). Then, by having \(\mathbf{x}_{t}\in\mathcal{Q}_{k}^{(i)}\), the defender can defend at least until time step \(t+k\)._
Proof.: We provide a proof by induction.
_Base Case:_ When \(k=0\), we have \(\mathbf{x}_{t}\in\mathcal{Q}_{0}^{(i)}=\mathcal{P}_{\text{req}}(\mathbf{y}^{(i )})\). From Proposition 1, the defense is guaranteed at time \(t\).
_Inductive hypothesis:_ Suppose that for some \(k\geq 1\), and for all \(i\in\mathcal{V}\), the condition \(\mathbf{x}_{t}\in\mathcal{Q}_{k}^{(i)}\) guarantees defense until time \(t+k\) given that \(\mathbf{y}_{t-1}=\mathbf{y}^{(i)}\).
_Induction:_ Given \(\mathbf{y}_{t-1}=\mathbf{y}^{(i)}\), we let \(\mathbf{x}_{t}\in\mathcal{Q}_{k+1}^{(i)}\). Under the no-splitting strategy, suppose that the attacker selects \(\mathbf{y}_{t}=\mathbf{y}^{(j)}\), for some arbitrary \(j\in\mathcal{N}_{i}\). The attacker cannot immediately win with this (or any other) action since the defender state \(\mathbf{x}_{t}\in\mathcal{Q}_{k+1}^{(i)}\subseteq\mathcal{P}_{\text{req}}( \mathbf{y}_{t-1})\) guarantees defense at time step \(t\). After observing \(\mathbf{y}_{t}=\mathbf{y}^{(j)}\), we let the defender select its next state so that \(\mathbf{x}_{t+1}\in\mathcal{R}(\mathbf{x}_{t})\cap\mathcal{Q}_{k}^{(j)}\). This new selection is reachable since \(\mathbf{x}_{t}\in\mathcal{Q}_{k+1}^{(i)}\) ensures that \(\mathcal{R}(\mathbf{x}_{t})\cap\mathcal{Q}_{k}^{(j)}\neq\varnothing\) (from (19b)). After the defender's action, we are at a situation where \(\mathbf{y}_{t}=\mathbf{y}^{(j)}\) and \(\mathbf{x}_{t+1}\in\mathcal{Q}_{k}^{(j)}\). From the inductive hypothesis, the defender can defend another \(k\) steps from this time on. The defender can thus defend until time step \(t+k+1\).
**Lemma 4** (Necessity of Q-sets).: _Let the Q-sets be defined in (19), and suppose that the attacker starts with \(\mathbf{y}_{t-1}=\mathbf{y}^{(i)}\). If \(\mathbf{x}_{t}\notin\mathcal{Q}_{k}^{(i)}\), the attacker can win the game before or at time step \(t+k\)._
Proof.: We prove this lemma via an inductive argument.
_Base case:_ Suppose \(\mathbf{x}_{t}\notin\mathcal{Q}_{0}^{(i)}=\mathcal{P}_{\text{req}}(\mathbf{y}^ {(i)})\). Then, by the construction of \(\mathcal{P}_{\text{req}}(\mathbf{y}^{(i)})\), there exists \(j\in\mathcal{N}_{i}\) such that \(\mathbf{y}_{t}=\mathbf{y}^{(j)}\) defeats \(\mathbf{x}_{t}\) on node \(j\).8 This corresponds to a defender defeat at time \(t\).
Footnote 8: If \(\mathbf{x}_{t}\notin\mathcal{P}_{\text{req}}(\mathbf{y}^{(i)})\), we know that there exists at least one \(\mathbf{y}_{t}\in\mathcal{R}(\mathbf{y}^{(i)})\) that breaches \(\mathbf{x}_{t}\), and this \(\mathbf{y}_{t}\) is not necessarily a concentrated configuration. Suppose this (potentially split) \(\mathbf{y}_{t}\) defeats \(\mathbf{x}_{t}\) on node \(j\). Since we are starting from a concentrated state \(\mathbf{y}_{t-1}=\mathbf{y}^{(i)}\), the attacker can move all its resource to the same node \(j\), and this concentrated state would also breach node \(j\).
_Inductive hypothesis:_ Suppose that, for all \(i\), \(\mathbf{x}_{t}\notin\mathcal{Q}_{k}^{(i)}\) implies that the attacker with state \(\mathbf{y}_{t-1}=\mathbf{y}^{(i)}\) can win the game before or at time step \(t+k\).
_Induction:_ Let the attacker start with \(\mathbf{y}_{t-1}=\mathbf{y}^{(i)}\) and the defender select \(\mathbf{x}_{t}\notin\mathcal{Q}_{k+1}^{(i)}\). From the definition of \(\mathcal{Q}_{k+1}^{(i)}\), we have either of the following two cases: (i) \(\mathbf{x}_{t}\notin\mathcal{P}_{\text{req}}(\mathbf{y}^{(i)})\), which leads to an immediate defeat at \(t\); or (ii) there exists \(j\in\mathcal{N}_{i}\), such that \(\mathcal{R}(\mathbf{x}_{t})\cap\mathcal{Q}_{k}^{(j)}=\varnothing\). In the latter case, the attacker can move to \(\mathbf{y}_{t}=\mathbf{y}^{(j)}\). Then, for all possible next defender allocations \(\mathbf{x}_{t+1}\in\mathcal{R}(\mathbf{x}_{t})\), we have that \(\mathbf{x}_{t+1}\notin\mathcal{Q}_{k}^{(j)}\). From the inductive hypothesis, the defender will be defeated within \(k\) steps from this time \(t+1\). Thus, the attacker can win the game before or at time step \(t+k+1\).
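The key primitive in the recursion (19b), namely deciding whether \(\mathcal{R}(\mathbf{x})\) intersects a given polytope, reduces to a linear feasibility problem in the entries of \(K\), because \(K\mathbf{x}\) is linear in \(K\). The sketch below illustrates one way to perform this check with scipy, assuming the target set is supplied in H-representation \(\{\mathbf{z}:H\mathbf{z}\leq\mathbf{h}\}\); it is only an illustration of the idea, not the paper's algorithm.

```python
import numpy as np
from scipy.optimize import linprog

def reachable_intersects(A, x, H, h):
    """Decide whether R(x) ∩ {z : H z <= h} is nonempty, i.e. whether some
    admissible K (constraints (2)-(4)) gives K @ x inside the target polytope.
    Decision variables are the N*N entries of K, flattened row-major."""
    n = len(x)
    # Equality constraints: each column of K sums to one.
    A_eq = np.zeros((n, n * n))
    for j in range(n):
        A_eq[j, j::n] = 1.0            # entries K[0, j], K[1, j], ..., K[n-1, j]
    b_eq = np.ones(n)
    # Bounds: 0 <= K_ij <= 1, and K_ij fixed to 0 where A_ij = 0.
    bounds = [(0.0, 1.0 if A[i, j] else 0.0) for i in range(n) for j in range(n)]
    # Target constraints: H (K x) <= h, with (K x)_i = sum_j K_ij x_j.
    A_ub = np.zeros((H.shape[0], n * n))
    for m in range(H.shape[0]):
        for i in range(n):
            A_ub[m, i * n:(i + 1) * n] = H[m, i] * x
    res = linprog(c=np.zeros(n * n), A_ub=A_ub, b_ub=h,
                  A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
    return res.success

# Example: does R(x) contain a state dominating x_req component-wise (cf. (17))?
# reachable_intersects(A, x, H=-np.eye(len(x)), h=-x_req)
```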
Next, we present two important properties of the Q-sets.
**Remark 5**.: _For a fixed node \(i\in\mathcal{V}\), the sequence \(\left\{\mathcal{Q}_{k}^{(i)}\right\}_{k}\) is a decreasing sequence of sets. Formally, for all \(i\in\mathcal{V}\) and \(k\geq 0\),_
\[\mathcal{Q}_{k+1}^{(i)}\subseteq\mathcal{Q}_{k}^{(i)}. \tag{20}\]
The above remark follows directly from the definition of the Q-sets. That is, if the defender can defend \(k+1\) steps from some state, then it can clearly defend \(k\) steps.
**Theorem 4**.: _All Q-sets are polytopes._
We delay the proof of Theorem 4 to the algorithmic section, where we introduce additional tools to characterize and efficiently construct the Q-sets.
### Indefinite Defense
The recursive definition of the Q-sets in (19) can be viewed as an operator mapping from \(\left(2^{\Delta_{X}}\right)^{|\mathcal{V}|}\) to itself, where \(2^{S}\) denotes the power set of set \(S\). Consequently, (19) can be viewed as an iterative algorithm, and its fixed point(s) is therefore of great interest to study. Note that a fixed point of (19) is an element in \(\left(2^{\Delta_{X}}\right)^{|\mathcal{V}|}\).
**Definition 7** (Indefinite Safe Set).: _We define the indefinite safe sets \(\mathcal{Q}_{\infty}^{(i)}\subseteq\Delta_{X}\) for \(i\in\mathcal{V}\) as follows:_
\[\mathcal{Q}_{\infty}^{(i)}=\bigcap_{k\geq 0}\mathcal{Q}_{k}^{(i)}. \tag{21}\]
**Remark 6**.: _Since the Q-sets are nested (descending), the above definition is equivalent to \(\mathcal{Q}_{\infty}^{(i)}=\lim_{k\rightarrow\infty}\mathcal{Q}_{k}^{(i)}\)._
**Remark 7**.: _The indefinite safe sets are either all empty or all nonempty. In the first case, the defender cannot defend indefinitely with a finite amount of resource._
The first natural question is whether the collection of indefinite safe sets defined in (21) is a fixed point of the recursive formula in (19).
**Theorem 5**.: _If the indefinite safe sets defined in (21) are nonempty, they satisfy the following fixed point relation for all nodes \(i\in\mathcal{V}\):_
\[\mathcal{Q}_{\infty}^{(i)}=\Bigl{\{}\mathbf{x}\big{|}\mathbf{x}\in\mathcal{P} _{\text{req}}(\mathbf{y}^{(i)})\text{ and }\mathcal{R}(\mathbf{x})\cap\mathcal{Q}_{\infty}^{(j)} \neq\varnothing\ \forall j\in\mathcal{N}_{i}\Bigr{\}}. \tag{22}\]
Proof.: See Appendix C.
In the following theorem, we formalize the natural conjecture that indefinite safe sets guarantee an indefinite defense for the defender.
**Theorem 6**.: _If \(\mathcal{Q}_{\infty}^{(i)}\neq\varnothing\), then \(\mathbf{x}_{t}\in\mathcal{Q}_{\infty}^{(i)}\) is necessary and sufficient for indefinite defense given that the attacker is at \(\mathbf{y}_{t-1}=\mathbf{y}^{(i)}\)._
Proof.: The necessity is straightforward. If \(\mathbf{x}_{t}\notin\mathcal{Q}_{\infty}^{(i)}\), then \(\mathbf{x}_{t}\notin\mathcal{Q}_{k}^{(i)}\) for some finite \(k\). From the necessity of the \(k\)-step safe sets, we know that the defender will be defeated within \(k\) steps.
For the sufficiency, suppose at time step \(t\), the system is at the state \(\mathbf{y}_{t-1}=\mathbf{y}^{(i)}\) and \(\mathbf{x}_{t}\in\mathcal{Q}_{\infty}^{(i)}\). Since \(\mathcal{Q}_{\infty}^{(i)}\subseteq\mathcal{P}_{\text{req}}(\mathbf{y}^{(i)})\), the defender can defend at least the current time step \(t\). Next, suppose the attacker moves to \(\mathbf{y}_{t}=\mathbf{y}^{(j)}\)
where \(j\in\mathcal{N}_{i}\). From (22), there is a state \(\mathbf{x}_{t+1}\in\mathcal{Q}_{\infty}^{(j)}\) that is reachable from \(\mathbf{x}_{t}\). Since \(\mathcal{Q}_{\infty}^{(j)}\subseteq\mathcal{P}_{\text{req}}(\mathbf{y}^{(j)})\), the defender can also defend the time step \(t+1\). Through mathematical induction, one can easily argue that being in \(\mathcal{Q}_{\infty}^{(i)}\) when \(\mathbf{y}_{t-1}=\mathbf{y}^{(i)}\) guarantees indefinite defense for all _no-splitting_ attacker strategies.
The conditions on the graph that guarantee convergence of the iterative algorithm in (19), as well as the conditions for the existence of such fixed point(s), are the subject of ongoing research. Note that not all graphs admit such a fixed point; for example, a sink graph (see Figure 13 in Appendix A) does not have one, since it requires infinite defender resources to guard indefinitely. Empirically, we found that for all strongly-connected and undirected graphs, the iterative algorithm in (19) converges within \(N\) iterations, where \(N=|\mathcal{V}|\) is the number of nodes. Establishing such convergence guarantees is left for follow-up work.
### Q-Set Propagation
The Q-set propagation process is described in Algorithm 1, which takes two inputs: the graph environment \(\mathcal{G}\) and the horizon of the game \(T\). We assume that the players do not care about their performance beyond \(T\), and therefore, Q-sets need not be computed beyond this horizon. The algorithm then utilizes (19) to generate the Q-sets. The actual numerical implementation of the algorithm uses an equivalent but more computationally efficient formula (36) introduced later in Section 6. The iterative Q-set construction process stops if the Q-sets converge, as described in line 4. In this case, we can conclude that the defender has a strategy to defend _indefinitely_ against all no-splitting attacker strategies. The output \(k_{\infty}\) gives the smallest finite number such that \(\mathcal{Q}_{k_{\infty}}^{(i)}=\mathcal{Q}_{\infty}^{(i)}\) for all \(i\in\mathcal{V}\). This conclusion is formalized in Corollary 1.
```
Inputs: Graph \(\mathcal{G}\), attacker total resource \(Y\), game horizon \(T\);
1  Set \(k_{\infty}=\infty\) and set \(\mathcal{Q}_{0}^{(i)}=\mathcal{P}_{\text{req}}(\mathbf{y}^{(i)})\) for all \(i\in\mathcal{V}\);
2  for \(k=1\) to \(T\) do
3      Construct \(\mathcal{Q}_{k}^{(i)}\) using (36) for all \(i\in\mathcal{V}\);
4      if \(\mathcal{Q}_{k}^{(i)}=\mathcal{Q}_{k-1}^{(i)}\) for all \(i\in\mathcal{V}\) then
5          \(k_{\infty}=k-1\);
6          Break;
7      end if
8  end for
Return: \(\{\mathcal{Q}_{k}^{(i)}\}_{i,k}\), \(k_{\infty}\)
```
**Algorithm 1** Q-Prop
**Corollary 1**.: _Suppose Algorithm 1 returns \(k_{\infty}<\infty\). Then, for any no-splitting attacker strategy, there is a defender strategy that can defend indefinitely._
Proof.: If the algorithm terminates at some \(k_{\infty}<\infty\), we have \(\mathcal{Q}_{k_{\infty}}^{(i)}=\mathcal{Q}_{k_{\infty}+1}^{(i)}\) for all nodes \(i\in\mathcal{V}\), which implies that \(\{\mathcal{Q}_{k_{\infty}}^{(i)}\}_{i\in\mathcal{V}}\) is the limit in (21). The corollary then follows directly from Theorem 6.
For the rest of the paper, if Algorithm 1 converges, we denote the converged Q-sets \(\{\mathcal{Q}_{k_{\infty}}^{(i)}\}_{i}\) as \(\{\mathcal{Q}_{\infty}^{(i)}\}_{i}\) for simplicity.
### K-step Strategies
The proof of Theorem 3 provides a guideline for the strategies that the defender and the attacker would deploy under the no-splitting assumption. We first summarize the defender strategy in the following two algorithms.
Algorithm 3 presents the feedback strategy for the defender. If \(k_{\max,0}=k_{\infty}\) in Algorithm 2, then the defender can defend indefinitely regardless of the attacker's no-splitting strategy. In this case, the defender observes \(\mathbf{y}_{t-1}=\mathbf{y}^{(i_{t-1})}\) and reallocates its resources to the corresponding Q-set: \(\mathcal{Q}_{k_{\infty}}^{(i_{t-1})}=\mathcal{Q}_{\infty}^{(i_{t-1})}\).
On the other hand, if Algorithm 2 outputs \(k_{\max,0}<k_{\infty}\), then either Algorithm 1 did not converge, or the defender does not have enough resource to achieve indefinite defense. By the construction of Q-sets, the defender has a guarantee to defend up to time step \(t=k_{\max,0}\). If we also have \(k_{\max,0}<T\), then the attacker will identify a strategy to win at
\(t=k_{\max,0}+1\) (shown later in Algorithms 4 and 5). Under rational play by both players, \(k_{\max,t}\) will decrease by 1 at each time step, and the game terminates with the attacker's win at \(t=k_{\max,0}+1\leq T\). However, if the attacker does not play rationally, the defender may be able to delay the breach. The search / optimization performed in line 1 of Algorithm 3 ensures that the defender exploits such an opportunity.9

Footnote 9: Note that if \(k_{\max,0}=T<k_{\infty}\), we do not have an estimate of when the attacker will be able to breach, even if the game continued beyond \(t=T\). However, the defender still has a guarantee to defend up to time step \(T\), and that is sufficient to identify the outcome of the finite-horizon game.
The following two algorithms describe the attacker strategy under the restriction of no-splitting. In particular, Algorithm 4 presents the initial allocation for the attacker, and Algorithm 5 provides the feedback attacker strategy at time steps \(t\geq 0\). As we will show later in Corollary 2, the attacker has no incentive to split, i.e., if the attacker can win a dDAB game by splitting, it can also win the game without splitting. Consequently, the algorithms presented here are sufficient for the attacker to play the dDAB game.
```
Inputs: Graph \(\mathcal{G}\), defender total resource \(X\), attacker total resource \(Y\), attacker initial allocation \(\mathbf{y}_{-1}=\mathbf{y}^{(i_{-1})}\), game horizon \(T\);
1  Construct Q-sets via Algorithm 1;
2  \(k_{\max,0}\leftarrow\arg\max_{k}\left\{k\leq\min\{T,k_{\infty}\}\mid\Delta_{X}\cap\mathcal{Q}_{k}^{(i_{-1})}\neq\varnothing\right\}\);  \(\triangleright\) find the longest defense time
3  \(\mathbf{x}_{0}\leftarrow\) any element in \(\mathcal{Q}_{k_{\max,0}}^{(i_{-1})}\);
Return: Initial allocation \(\mathbf{x}_{0}\), guaranteed defense time \(k_{\max,0}\)
```
**Algorithm 2** Initial Defender Allocation (against No-Splitting Attacker)
The attacker can defeat the defender only when the defender allocates resources outside the Q-sets. Since we formulated the dDAB game as a game of kind without any performance metric, when the defender allocates resources within \(\mathcal{Q}_{k}^{(i)}\), the defender is guaranteed to defend the next \(k\) steps, and thus the attacker does not have preference over which node to move to next. Therefore, we have arbitrary selections in line 7 of Algorithm 4 and line 6 of Algorithm 5. Introducing a cost for the defender's reallocation is a potential extension of this work. Our recent work [25] explored this idea and developed a more general framework based on convex body chasing [26], where the Q-sets are the convex bodies to be chased.
```
Inputs: Q-sets, observed defender allocation \(\mathbf{x}_{t}\), planning horizon \(T\);
1  if \(\exists k\leq\min\{T,k_{\infty}\}\) and \(i\in\mathcal{N}_{i_{t-1}}\) such that \(\mathbf{x}_{t}\notin\mathcal{Q}_{k}^{(i)}\) then
2      \(k_{\min,t}\leftarrow\arg\min_{k}\big\{k\leq\min\{T,k_{\infty}\}\mid\mathbf{x}_{t}\notin\mathcal{Q}_{k}^{(i)},\,i\in\mathcal{N}_{i_{t-1}}\big\}\);  \(\triangleright\) exploit the defender's mistake
3      \(i_{t}^{*}\leftarrow\) any element in \(\{i\in\mathcal{N}_{i_{t-1}}\mid\mathbf{x}_{t}\notin\mathcal{Q}_{k_{\min,t}}^{(i)}\}\);
4  else
5      \(k_{\min,t}\leftarrow\infty\);
6      \(i_{t}^{*}\leftarrow\) any element in \(\mathcal{N}_{i_{t-1}}\);
7  end if
Return: Next allocation \(\mathbf{y}_{t}=\mathbf{y}^{(i_{t}^{*})}\), guaranteed breach time \(k_{\min,t}\).
```
**Algorithm 5** Feedback Attack Strategy
## 5 Main Results
This section generalizes the defender strategy in the previous section to scenarios where the attacker can split its resources across multiple nodes. In particular, we show that if the defender has a sufficient amount of resources to defend against any no-splitting attacker strategy, then it can defend against any attacker strategy, including splitting ones. This result implies that the attacker can win the game if and only if it can win using a no-splitting strategy; consequently, the attacker has no incentive to split its resources in order to win the game. Finally, we obtain the critical resource ratio (CRR), which describes the necessary and sufficient amount of defender resource required to guarantee defense against any attacker strategy.
### Generalization to Splitting Attacker
To extend the analysis from no-splitting strategies to more general strategies, we introduce the notion of subteams.
**Definition 8** (Attacker Subteam).: _We refer to the attacker resource allocated to each node as an attacker subteam. The size of the \(i\)-th attacker subteam (on node \(i\)) at time \(t\) is \([\mathbf{y}_{t}]_{i}\)._
In general, any attacker action can be viewed as a superposition of the subteam actions, which results in the splitting and merging of subteams into a new set of subteams. Figure 5 illustrates an example where two attacker subteams split and merge into a new set of three subteams. Note that the attacker's action to achieve the allocation in Figure 5(c) from Figure 5(a) is non-unique.
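To make the superposition view concrete, the minimal sketch below (Python with NumPy) encodes each subteam's splitting action as a column of a left-stochastic matrix \(F\), so that the next attacker state is simply \(\mathbf{y}_{t}=F\mathbf{y}_{t-1}\). The subteam sizes and splitting fractions are illustrative assumptions and are not taken from Figure 5; only the node pattern (subteams on nodes 2 and 3 splitting into subteams on nodes 1, 3, and 4) mirrors that example.

```python
import numpy as np

# Hypothetical 4-node example (nodes are 1-indexed in the text, 0-indexed here):
# a 3-unit subteam on node 2 and a 4-unit subteam on node 3.
y_prev = np.array([0.0, 3.0, 4.0, 0.0])

# Column i is the splitting action f_i of the subteam on node i; each column sums to 1.
F = np.array([[1.0, 1/3, 0.0, 0.0],
              [0.0, 0.0, 0.0, 0.0],
              [0.0, 2/3, 0.5, 0.0],
              [0.0, 0.0, 0.5, 1.0]])
assert np.allclose(F.sum(axis=0), 1.0)    # left stochastic

y_next = F @ y_prev                        # superposition of the subteam actions
print(y_next)                              # [1. 0. 4. 2.]: new subteams on nodes 1, 3 and 4
```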
Based on the necessity and sufficiency of Q-sets, we define the \(i\)-th defender subteam as the subset of the defender resource that can defend against the \(i\)-th attacker subteam, assuming that the attacker subteam does _not_ further split in the future.
**Definition 9** (Defender Subteam).: _The \(i\)-th defender subteam is defined as_
\[\mathbf{x}_{t,T}^{(i)}\triangleq\frac{[\mathbf{y}_{t-1}]_{i}}{Y}\hat{\mathbf{ x}}_{t}^{(i)},\text{ where }\hat{\mathbf{x}}_{t}^{(i)}\in\mathcal{Q}_{T-t}^{(i)}, \tag{23}\]
_where the first subscript \(t\) tracks the current time step, and the second subscript \(T\) tracks the expected terminal time._
Figure 5: Splitting and merging of an attacker subteam. Self-loops on each node are omitted for clarity. (a) Two attacker subteams at \(t\): a 3-unit subteam on node 2 and a 4-unit subteam on node 3. (b) Each subteam splits into two: magenta-purple and yellow-red. (Note that the game is still in time step \(t\).) (c) The re-allocation (action) of each subteam at \(t\). (d) The resultant three subteams on nodes 1, 3, and 4 at \(t+1\).
Note that the Q-set in (19) is defined based on the full attacker team size \(Y\), and \(\hat{\mathbf{x}}\) is used to denote an element of the original Q-sets. On the other hand, a defender subteam is its scaled version according to the size of the corresponding attacker subteam, and hence the scaling factor in (23).
Figure 6(a) shows an example of a valid defender subteam \(\mathbf{x}_{t,\infty}^{(2)}\) placed against an attacker subteam at node 2. Figure 6(b) shows how the attacker team may split into multiple subteams: magenta with size 1 and red with size 2. Based on the observed attacker action, the defender resources also split into two: cyan and blue, which react against magenta and red, respectively. Finally, Figure 6(c) shows how the defender subteam may react to the observed attacker action.
In the following, we formally present the guarantees on the defense time (Theorem 7) as well as the defender's feedback strategy (Algorithms 6 and 7). We first present the main result of this section and provide the supporting lemmas later on.
**Theorem 7**.: _Suppose that, for a given terminal time \(T\), and for some \(t\in\{0,1,...,T\}\), the defender's state can be described as a superposition of the subteams:10_
Footnote 10: This condition is implicitly dependent on \(\mathbf{y}_{t-1}\) through the definition of the subteams in (23).
\[\mathbf{x}_{t}=\sum_{i=1}^{N}\mathbf{x}_{t,T}^{(i)}. \tag{24}\]
_Then, the defender has a strategy to guarantee defense until time step \(T\) against any admissible attacker strategy._
Proof.: We break the proof into three steps. **Step I:** In Lemma 5, we show that (24) is a sufficient condition for the defender to defend during the current time step. **Step II:** Lemma 6 provides a strategy to maintain condition (24) at the next time step against any admissible attacker strategy. In other words, for \(t\in\{0,\ldots,T-1\}\), if \(\mathbf{x}_{t}\) satisfies (24) for a given \(\mathbf{y}_{t-1}\), then for any \(\mathbf{y}_{t}\in\mathcal{R}(\mathbf{y}_{t-1})\), there is an admissible action \(K_{t}\) such that \(\mathbf{x}_{t+1}=K_{t}\mathbf{x}_{t}\) satisfies (24) at \(t+1\). **Step III:** Based on mathematical induction, condition (24) is satisfied for all time steps. Therefore, the defense is guaranteed until time \(T\).
**Remark 8**.: _Theorem 7 states the sufficiency for the defender to defend a certain number of time steps against all admissible attacker strategies. The tightness (necessity) of this condition will be discussed in terms of the required amount of resources (i.e., CRR) in Lemma 7 and Corollary 2._
**Lemma 5** (One-step Safety Guarantee).: _If the defender state \(\mathbf{x}_{t}\) satisfies (24), then we have \(\mathbf{x}_{t}\in\mathcal{P}_{\text{req}}(\mathbf{y}_{t-1})\). In other words, (24) provides sufficiency for the defender to defend the current time step \(t\)._
Proof.: Recalling the definition of defender subteams in (23), the condition (24) can be written as
\[\mathbf{x}_{t}=\sum_{i=1}^{N}\frac{[\mathbf{y}_{t-1}]_{i}}{Y}\hat{\mathbf{x} }_{t}^{(i)}.\]
By definition \(\hat{\mathbf{x}}_{t}^{(i)}\in\mathcal{Q}_{T-t}^{(i)}\), which implies \(\hat{\mathbf{x}}_{t}^{(i)}\in\mathcal{P}_{\text{req}}(\mathbf{y}^{(i)})\). Therefore for any \(F_{t-1}\in\mathcal{F}\) we have
\[[\hat{\mathbf{x}}_{t}^{(i)}]_{j}\geq[F_{t-1}\mathbf{y}^{(i)}]_{j}=Y[F_{t-1} \mathbf{e}_{i}]_{j},\quad\forall\ j\in\mathcal{V}.\]
Figure 6: Splitting of an attacker subteam and the response of the corresponding defender subteam. Self-loops on each node are omitted for clarity. (a) Initial state with a valid defender subteam placed against an attacker subteam at node 2. (b) The attacker resource splits into two: magenta with size 1 and red with size 2. Based on the observed attacker action, the original defender subteam also splits into two: cyan and blue, which react against magenta and red, respectively. (c) The reaction of each defender subteam (cyan and blue) against the corresponding attacker subteam.
Multiplying both sides with \(\frac{1}{Y}[\mathbf{y}_{t-1}]_{i}\), it follows that
\[\frac{1}{Y}[\mathbf{y}_{t-1}]_{i}[\hat{\mathbf{x}}_{t}^{(i)}]_{j}\geq[\mathbf{y }_{t-1}]_{i}[F_{t-1}\mathbf{e}_{i}]_{j}.\]
By taking the sum over \(i\), we obtain
\[[\mathbf{x}_{t}]_{j} =\frac{1}{Y}\sum_{i\in\mathcal{V}}[\mathbf{y}_{t-1}]_{i}[\hat{ \mathbf{x}}_{t}^{(i)}]_{j}\geq\sum_{i\in\mathcal{V}}[\mathbf{y}_{t-1}]_{i}[F_{ t-1}\mathbf{e}_{i}]_{j}\] \[=\sum_{i\in\mathcal{V}}\left[[\mathbf{y}_{t-1}]_{i}F_{t-1} \mathbf{e}_{i}\right]_{j}=\left[F_{t-1}\sum_{i\in\mathcal{V}}[\mathbf{y}_{t-1 }]_{i}\mathbf{e}_{i}\right]_{j}\] \[=\left[F_{t-1}\mathbf{y}_{t-1}\right]_{j}.\]
Since the above inequality holds for all \(F_{t-1}\in\mathcal{F}\), it follows that \(\mathbf{x}_{t}\in\mathcal{P}_{\text{req}}(\mathbf{y}_{t-1})\) (see Remark 4). Consequently, \(\mathbf{x}_{t}\) can defend the current time step \(t\).
The next lemma shows that the defender can preserve the condition in (24) against any attacker strategy.
**Lemma 6** (Inductive Condition).: _Suppose the defender's state at time \(t\) satisfies_
\[\mathbf{x}_{t}=\sum_{i=1}^{N}\mathbf{x}_{t,T}^{(i)}. \tag{25}\]
_Then, for any attacker action \(\mathbf{y}_{t}\in\mathcal{R}(\mathbf{y}_{t-1})\), there exists a defender's reaction \(\mathbf{x}_{t+1}\in\mathcal{R}(\mathbf{x}_{t})\) such that_
\[\mathbf{x}_{t+1}=\sum_{i=1}^{N}\mathbf{x}_{t+1,T}^{(i)}, \tag{26}\]
_i.e., the defender's state at the next time step can also be written as a combination of valid subteams defined in (23)._
Proof.: Denote an attacker's action that takes \(\mathbf{y}_{t-1}\) to \(\mathbf{y}_{t}\) as \(F_{t-1}\).11 Let \(\mathbf{f}_{i}\) be the \(i\)-th column of \(F_{t-1}\), i.e., \(F_{t-1}=[\mathbf{f}_{1},\mathbf{f}_{2},...,\mathbf{f}_{N}]\), where \(\mathbf{f}_{i}^{\top}\mathbf{1}=1\) (since \(F_{t-1}\) is left stochastic). We can interpret \(\mathbf{f}_{i}\) to be the splitting action of the attacker subteam on node \(i\) at time \(t-1\), where the fraction of a (possibly empty) subteam on node \(i\) relocating to node \(j\) is given by \([\mathbf{f}_{i}]_{j}\).
Footnote 11: This action may be non-unique as discussed in Section 3.1, but its existence suffices for the purpose of this proof.
For notational convenience, we drop the second subscript \(T\), when denoting the defender subteams, \(\mathbf{x}_{t,T}^{(i)}\). From Definition 9, we have that the re-scaled \(i\)-th defender subteam satisfies \(\hat{\mathbf{x}}_{t}^{(i)}=(Y\mathbf{x}_{t}^{(i)})/[\mathbf{y}_{t-1}]_{i}\in \mathcal{Q}_{T-t}^{(i)}\). From the Q-set definition, we can always construct a satisficing defender action \(K^{(i\to j)}\) against a no-splitting attacker moving from node \(i\) to \(j\), which guarantees that \(\hat{\mathbf{x}}_{t+1}^{(i\to j)}=K^{(i\to j)}\hat{\mathbf{x}}_{t}^{(i)}\in \mathcal{Q}_{T-t-1}^{(j)}\).
Intuitively, the \(i\)-th defender subteam should react to the splitting of the \(i\)-th attacker subteam in the following manner. First, the \(i\)-th defender subteam is divided into "sub-subteams", according to the \(i\)-th attacker subteam's splitting action \(\mathbf{f}_{i}\) from the previous time step (see Figure 6). The \(j\)-th defender sub-subteam of its \(i\)-th subteam then counteracts the \(j\)-th attacker sub-subteam that moves from node \(i\) to node \(j\). This counteraction is achieved by the defender sub-subteam applying the action \(K^{(i\to j)}\).
Following the intuition above, the \(j\)-th sub-subteam of the \(i\)-th defender subteam at time step \(t\) has the configuration \([\mathbf{f}_{i}]_{j}\mathbf{x}_{t}^{(i)}=\frac{[\mathbf{f}_{i}]_{j}[\mathbf{y}_{t-1}]_{ i}}{Y}\hat{\mathbf{x}}_{t}^{(i)}\), and it applies the action \(K^{(i\to j)}\) to counteract the attacker sub-subteam that moved from node \(i\) to node \(j\). The next configuration achieved by this defender sub-subteam is then given by
\[\mathbf{x}_{t+1}^{(i\to j)}=\frac{[\mathbf{f}_{i}]_{j}[\mathbf{y}_{t-1}]_{i}}{Y}K^ {(i\to j)}\hat{\mathbf{x}}_{t}^{(i)}=\frac{[\mathbf{f}_{i}]_{j}[\mathbf{y}_{t-1}]_{ i}}{Y}\hat{\mathbf{x}}_{t+1}^{(i\to j)}.\]
Note that \(\mathbf{x}_{t+1}^{(i\to j)}\) is only a part of the new \(j\)-th defender subteam, which originated from the previous \(i\)-th subteam.
By collecting defender resources originating from different subteams that reacted to the attacker resources that ended up at node \(j\) (i.e., \(\mathbf{x}_{t+1}^{(i\to j)}\) for \(i\in\mathcal{N}_{j}\)), the new \(j\)-th defender subteam can be computed as
\[\mathbf{x}_{t+1}^{(j)}=\sum_{i\in\mathcal{V}}\mathbf{x}_{t+1}^{(i\to j)}=\sum_ {i\in\mathcal{V}}\frac{[\mathbf{f}_{i}]_{j}[\mathbf{y}_{t-1}]_{i}}{Y}\hat{\mathbf{ x}}_{t+1}^{(i\to j)}. \tag{27}\]
We now verify that this is a valid defender subteam, i.e., it is a state in the corresponding Q-set (scaled by the size of the attacker subteam). By the definition in (23), the rescaled new \(j\)-th subteam is
\[\hat{\mathbf{x}}_{t+1}^{(j)}=\frac{Y}{[\mathbf{y}_{t}]_{j}}\mathbf{x}_{t+1}^{(j)}=\sum_{i}\frac{[\mathbf{f}_{i}]_{j}[\mathbf{y}_{t-1}]_{i}}{[\mathbf{y}_{t}]_{j}}\hat{\mathbf{x}}_{t+1}^{(i\to j)}. \tag{28}\]

Note that \(\mathbf{x}_{t}^{(i)}=\frac{[\mathbf{y}_{t-1}]_{i}}{Y}\hat{\mathbf{x}}_{t}^{(i)}=\frac{[\mathbf{y}_{t-1}]_{i}}{Y}\sum_{j}\frac{[\mathbf{y}_{t-2}]_{i}}{[\mathbf{y}_{t-1}]_{i}}[\mathbf{f}_{j}]_{i}\hat{\mathbf{x}}_{t}^{(j\to i)}=\sum_{j}\frac{[\mathbf{y}_{t-2}]_{i}}{Y}[\mathbf{f}_{j}]_{i}\hat{\mathbf{x}}_{t}^{(j\to i)}\), for \(i\) such that \([\mathbf{y}_{t-1}]_{i}>0\).
Noting that \(\sum_{i}[\mathbf{f}_{i}]_{j}[\mathbf{y}_{t-1}]_{i}=\sum_{i}[F_{t-1}]_{ji}[\mathbf{ y}_{t-1}]_{i}=[\mathbf{y}_{t}]_{j}\), we see that \(\hat{\mathbf{x}}_{t+1}^{(j)}\) is a convex combination of the states \(\{\hat{\mathbf{x}}_{t+1}^{(i\to j)}\}_{i}\). Since Q-sets are polytopes (Theorem 4), and also since \(\hat{\mathbf{x}}_{t+1}^{(i\to j)}\in\mathcal{Q}_{k-1}^{(j)}\) for all \(i\in\mathcal{V}\) by construction, we conclude that \(\hat{\mathbf{x}}_{t+1}^{(j)}\in\mathcal{Q}_{k-1}^{(j)}\). Thus, the new configuration at time \(t+1\) can be written as a superposition of valid subteams.
Finally, since \(\mathbf{x}_{t}=\sum_{i,j}\frac{[\mathbf{y}_{t-1}]_{i}[\mathbf{f}_{i}]_{j}}{Y}\hat {\mathbf{x}}_{t}^{(i)}\), we can construct the overall defender action \(K_{t}\) that takes \(\mathbf{x}_{t}\) in (25) to \(\mathbf{x}_{t+1}\) in (26) based on the sub-subteam actions \(K^{(i\to j)}\) (see Lemma 11 in Appendix D), which completes the proof.
A minimum working example that illustrates the concepts in the above proof is presented in Figure 7. The readers can use the figure as a roadmap for better understanding of the intuition behind Theorem 7.
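Complementing that roadmap, the short numeric sketch below checks the bookkeeping of (27)–(28). The splitting matrix and the per-pair responses standing in for \(\hat{\mathbf{x}}_{t+1}^{(i\to j)}\) are hypothetical placeholders (in the actual strategy they would be drawn from the appropriate Q-sets); the point is only that the weights \([\mathbf{f}_{i}]_{j}[\mathbf{y}_{t-1}]_{i}/[\mathbf{y}_{t}]_{j}\) are nonnegative and sum to one, so each rescaled new subteam is a convex combination.

```python
import numpy as np

N = 3
y_prev = np.array([1.0, 1.0, 0.0])   # attacker subteams of size 1 on nodes 1 and 2

# Column i is the splitting action f_i of the subteam on node i (each column sums to 1):
# the node-1 subteam keeps half and sends half to node 2; the node-2 subteam keeps half
# and sends half to node 3; the (empty) node-3 subteam stays put.
F = np.array([[0.5, 0.0, 0.0],
              [0.5, 0.5, 0.0],
              [0.0, 0.5, 1.0]])
y_next = F @ y_prev                  # y_t = [0.5, 1.0, 0.5]

# Hypothetical per-pair defender responses standing in for x_hat_{t+1}^{(i->j)}.
x_hat = {(0, 0): np.array([1.0, 1.0, 0.0]),
         (0, 1): np.array([0.0, 1.0, 1.0]),
         (1, 1): np.array([1.0, 1.0, 1.0]),
         (1, 2): np.array([0.0, 1.0, 1.0])}

for j in range(N):
    if y_next[j] == 0:
        continue
    w = np.array([F[j, i] * y_prev[i] / y_next[j] for i in range(N)])
    assert np.all(w >= 0) and abs(w.sum() - 1.0) < 1e-12   # convex weights, as used in (28)
    rescaled_new_subteam = sum(w[i] * x_hat.get((i, j), np.zeros(N)) for i in range(N))
    print(j, w, rescaled_new_subteam)
```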
As a direct consequence of Theorem 7, we can generalize the defender strategy in Section 4 to scenarios where the attacker splits its resource over multiple nodes.
```
Inputs: Graph \(\mathcal{G}\), total resources \(X\) and \(Y\), attacker initial allocation \(\mathbf{y}_{-1}\), game horizon \(T\);
1  Construct Q-sets via Algorithm 1;
2  \(\mathcal{I}_{-1}\leftarrow\{i\,|\,[\mathbf{y}_{-1}]_{i}>0\}\);
3  \(k_{\max,0}\leftarrow\arg\max_{k}\left\{\,k\mid\Delta_{X}\cap\mathcal{Q}_{k}^{(i)}\neq\varnothing\ \forall i\in\mathcal{I}_{-1}\right\}\);
4  for \(i\in\mathcal{I}_{-1}\) do
5      \(\hat{\mathbf{x}}_{0}^{(i)}\leftarrow\) any element in \(\Delta_{X}\cap\mathcal{Q}_{k_{\max,0}}^{(i)}\);
6  end for
7  \(\mathbf{x}_{0}\leftarrow\sum_{i\in\mathcal{I}_{-1}}\frac{[\mathbf{y}_{-1}]_{i}}{Y}\hat{\mathbf{x}}_{0}^{(i)}\);
Return: Initial allocation \(\mathbf{x}_{0}\), initial subteams \(\{\hat{\mathbf{x}}_{0}^{(i)}\}_{i}\), longest guaranteed defense time \(k_{\max,0}\)
```
**Algorithm 6** Initial Defender Allocation
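A minimal sketch of the superposition step at the end of Algorithm 6, assuming the per-node representatives \(\hat{\mathbf{x}}_{0}^{(i)}\) have already been extracted from the corresponding Q-sets; the vectors below are hypothetical placeholders, not elements of any Q-set computed in this paper.

```python
import numpy as np

Y = 4.0
y_init = np.array([1.0, 3.0, 0.0])                 # attacker initial allocation y_{-1}

# Hypothetical representatives of Q_{k_max,0}^{(i)} for the nodes carrying attacker mass.
x_hat0 = {0: np.array([4.0, 4.0, 0.0]),
          1: np.array([0.0, 4.0, 4.0])}

# Weighted superposition of the subteam representatives (final step of Algorithm 6).
x0 = sum((y_init[i] / Y) * x_hat0[i] for i in x_hat0)
print(x0)                                          # [1. 4. 3.]
```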
### The Critical Resource Ratio
Before we move on to our formal analysis, we present some basic properties of CRR that are straightforward to obtain.
**Proposition 2**.: _The sequence \((\alpha_{T})_{T=1}^{\infty}\) is monotonically nondecreasing with respect to horizon \(T\)._
This property is obvious from the fact that the ability to defend over \(T\) time steps immediately implies the ability to defend any duration less than \(T\).
**Proposition 3** (Lower bound of \(\alpha_{T}\)[23]).: _For a general graph and an arbitrary \(T\), \(\alpha_{T}\) is bounded from below by \(\underline{\alpha}=d_{\max}^{+}\), where \(d_{\max}^{+}=\max_{j\in\mathcal{V}}d_{j}^{+}\) is the maximum out-degree of the graph._
This property can be proved by considering the case where the attacker initially concentrates all its resources at the node with the maximum out-degree. Unless the defender allocates an equal amount or more to every one of the neighboring nodes, the attacker has an action to win the game, i.e., move all the attacker resources to a neighboring node where the defender allocates fewer than \(Y\) units of resources.
Figure 7: A minimum working example for the proof of Lemma 6.
**Proposition 4** (Upper bound of \(\alpha_{T}\)).: _For a strongly-connected graph and an arbitrary \(T\in[0,\infty]\), \(\alpha_{T}\) is bounded from above by \(\bar{\alpha}=\sum_{i\in\mathcal{V}}L_{i}\), where \(L_{i}\) is the length of the shortest loop that passes through node \(i\)._
Proof.: Since the graph is strongly connected, for every node \(i\) there is a loop that passes through \(i\). Then, for every node \(i\), the defender can have \(L_{i}Y\) units of resource patrolling the shortest loop that passes through \(i\), resulting in every node on the loop (in particular, node \(i\)) holding \(Y\) units of defender resource at all times.
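Both bounds are easy to evaluate for a given graph. The sketch below (Python with networkx) does so for a hypothetical strongly-connected directed graph; the edge list is an assumed example, not one of the graphs discussed later, and \(L_{i}\) is computed as the length of the shortest directed cycle through node \(i\).

```python
import networkx as nx

# Hypothetical strongly connected directed graph: a 5-cycle plus one chord and a self-loop.
G = nx.DiGraph([(1, 2), (2, 3), (3, 4), (4, 5), (5, 1), (3, 1), (4, 4)])

# Lower bound (Proposition 3): maximum out-degree.
alpha_lower = max(dict(G.out_degree()).values())

# Upper bound (Proposition 4): sum over nodes of the shortest directed cycle length.
def shortest_cycle_through(G, i):
    lengths = []
    for j in G.successors(i):
        if j == i:                                   # self-loop: a cycle of length 1
            lengths.append(1)
        elif nx.has_path(G, j, i):
            lengths.append(1 + nx.shortest_path_length(G, j, i))
    return min(lengths)

alpha_upper = sum(shortest_cycle_through(G, i) for i in G.nodes)
print(alpha_lower, alpha_upper)
```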
We leverage the results in the previous sections to identify the critical resource ratio (CRR, see Definition 1). In Section 4, we showed that being in \(\mathcal{Q}_{k}^{(i)}\) is necessary and sufficient for the defender to defend for \(k\) steps against any no-splitting attacker strategy that starts from \(\mathbf{y}_{-1}=\mathbf{y}^{(i)}\). This leads to an intermediate version of the CRR, defined for the case of a no-splitting attacker starting on node \(i\):
\[\beta_{k}^{(i)}\triangleq\min_{\mathbf{x}\in\mathcal{Q}_{k}^{(i)}}\mathbf{1} ^{\top}\mathbf{x}. \tag{29}\]
Given that the attacker can freely select its initial state \(\mathbf{y}_{-1}\), we define the k-step CRR given a no-splitting attacker as
\[\beta_{k}\triangleq\max_{i\in\mathcal{V}}\beta_{k}^{(i)}. \tag{30}\]
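Because each \(\mathcal{Q}_{k}^{(i)}\) is a polytope (Theorem 4), (29) is a linear program. The sketch below (Python/SciPy) evaluates it for a small hypothetical polytope given in halfspace form \(A_{\mathrm{ub}}\mathbf{x}\leq\mathbf{b}_{\mathrm{ub}}\); the particular inequalities are placeholders and do not correspond to the Q-set of any specific graph.

```python
import numpy as np
from scipy.optimize import linprog

def min_total_resource(A_ub, b_ub):
    """Solve min 1^T x  s.t.  A_ub x <= b_ub, x >= 0, as in (29)."""
    n = A_ub.shape[1]
    res = linprog(c=np.ones(n), A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * n)
    return res.fun

# Hypothetical halfspace description of a Q-set for a 3-node graph and a unit attacker:
# x_1 + x_2 >= 1 and x_2 + x_3 >= 1, written as A_ub x <= b_ub.
A_ub = np.array([[-1.0, -1.0, 0.0],
                 [0.0, -1.0, -1.0]])
b_ub = np.array([-1.0, -1.0])

beta_i = min_total_resource(A_ub, b_ub)     # 1.0, attained e.g. at x = (0, 1, 0)
print(beta_i)
# The k-step CRR of (30) is then obtained by maximizing these values over the nodes i.
```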
**Proposition 5**.: _If \(\mathcal{Q}_{\infty}^{(i)}\neq\varnothing\) for some \(i\in\mathcal{V}\), then \(\beta_{\infty}^{(i)}=\beta_{\infty}^{(j)}<\infty\) for all \(i,j\in\mathcal{V}\)._
Proof.: Note that for any finite \(k\), \(\beta_{k+1}^{(i)}\geq\beta_{k}^{(j)}\) for all \(j\in\mathcal{N}_{i}\), since for a defender to defend an attacker starting from node \(i\) for \(k+1\) steps, it has to be able to defend \(k\) steps after the attacker moves to node \(j\). Consequently, we have \(\beta_{\infty}^{(i)}\geq\beta_{\infty}^{(j)}\) for all \(j\in\mathcal{N}_{i}\). Since the graph is strongly connected, there exists a directed path from \(j\) to \(i\). One can then cascade the inequality along the path from \(j\) to \(i\), and it follows that \(\beta_{\infty}^{(j)}\geq\beta_{\infty}^{(i)}\).
The following result shows that the general CRR is identical to the one defined for no-splitting attacker.
**Lemma 7** (Sufficient resource).: _Suppose that the defender has enough resource to defend against all no-splitting attacker strategies for \(k\) time steps. Then, the defender can guard against any attacker strategy for \(k\) time steps, which implies_
\[\alpha_{k}=\beta_{k}. \tag{31}\]
Proof.: It is obvious that \(\alpha_{k}\geq\beta_{k}\), since it is necessary to guard against no-splitting strategies. Consequently, it suffices to show that \(X=\beta_{k}Y\) is sufficient to guard against any admissible attacker strategy, including the ones with splitting.
Using the result of Theorem 7 with \(t=0\) and \(T=k\), consider the following initial defender state that is sufficient to guard against any given \(\mathbf{y}_{-1}\) over the next \(k\) time steps:
\[\mathbf{x}_{0}=\frac{1}{Y}\sum_{i}[\mathbf{y}_{-1}]_{i}\,\hat{\mathbf{x}}_{0}^{ (i)},\;\;\text{where}\;\hat{\mathbf{x}}_{0}^{(i)}\in\mathcal{Q}_{k}^{(i)}. \tag{32}\]
The minimum amount of resource required to achieve the above allocation is given by
\[\min_{(\hat{\mathbf{x}}_{0}^{(1)},\ldots,\hat{\mathbf{x}}_{0}^{(|\mathcal{V}|)})\in\mathcal{Q}_{k}^{(1)}\times\cdots\times\mathcal{Q}_{k}^{(|\mathcal{V}|)}}\mathbf{1}^{\top}\Big(\frac{1}{Y}\sum_{i}[\mathbf{y}_{-1}]_{i}\,\hat{\mathbf{x}}_{0}^{(i)}\Big)=\] \[=\frac{1}{Y}\sum_{i}[\mathbf{y}_{-1}]_{i}\,\Big(\min_{\hat{\mathbf{x}}_{0}^{(i)}\in\mathcal{Q}_{k}^{(i)}}\mathbf{1}^{\top}\hat{\mathbf{x}}_{0}^{(i)}\Big)=\frac{1}{Y}\sum_{i}[\mathbf{y}_{-1}]_{i}\,\beta_{k}^{(i)}X\] \[\leq\frac{1}{Y}\sum_{i}[\mathbf{y}_{-1}]_{i}\beta_{k}X=\beta_{k}X,\]
where equality holds when the attacker's initial state \(\mathbf{y}_{-1}\) places all its resources on a node \(i^{*}\in\operatorname*{arg\,max}_{i\in\mathcal{V}}\beta_{k}^{(i)}\). Hence, we have \(\alpha_{k}=\beta_{k}\).
**Corollary 2** (No Incentive to Split).: _For a given graph \(\mathcal{G}\) and resources \(X\) and \(Y\), the attacker has a strategy to win the dDAB game if and only if it has a no-splitting winning strategy._
Proof.: The sufficiency is given trivially. The necessity comes as a direct consequence of Lemma 7. If the attacker can win a dDAB game with some strategy, we have that \(Y\geq\alpha_{k}X\). From Lemma 7, we obtain \(Y\geq\alpha_{k}X=\beta_{k}X\), which implies that the attacker can also win with a no-splitting strategy.
The following corollary regarding the indefinite defense is a direct consequence of Corollary 2.
**Corollary 3**.: _If an indefinite defense is feasible against all no-splitting attacker strategies, the defender can also indefinitely defend against any attacker strategies._
Proof.: Note that if an attacker can win the game with a splitting strategy, it must breach a node at some finite time step. From Corollary 2, it can also win without splitting, which contradicts the assumption.
## 6 Algorithmic Solution
In this section, we first develop an algorithm to numerically construct the Q-sets. The proposed algorithm also helps us prove Theorem 4, which states that all Q-sets are polytopes. Recall that we treated the next state \(\mathbf{x}_{t+1}\) in the reachable set as the action for the defender to take at time step \(t\). In reality, however, the defender needs to find a feasible action \(K_{t}\in\mathcal{K}\) to reach \(\mathbf{x}_{t+1}\). In the second subsection, we formulate this action extraction problem as a linear program, which can be solved efficiently.
### Q-set Construction
Recall the recursive definition of the Q-sets in (19b):
\[\mathcal{Q}_{k}^{(i)}=\Big{\{}\mathbf{x}\;\big{|}\;\mathbf{x}\in\mathcal{P}_{ \text{req}}(\mathbf{y}^{(i)})\;\text{and}\;\mathcal{R}(\mathbf{x})\cap\mathcal{ Q}_{k-1}^{(j)}\neq\varnothing\;\forall j\in\mathcal{N}_{i}\Big{\}}.\]
To numerically construct the Q-sets, we first examine the properties of the set \(\{\mathbf{x}\mid\mathcal{R}(\mathbf{x})\cap\mathcal{Q}_{k}^{(j)}\neq\varnothing\}\). This set consists of states from which the defender can reach \(\mathcal{Q}_{k}^{(j)}\) at the next time step. It is not yet clear whether these "inverse reachable sets" induce nice properties for the Q-sets.
We formally define the inverse reachable set of some set \(P\) as follows:
**Definition 10** (Inverse Reachable Set).: _Given a set \(P\subseteq\mathbb{R}^{N}_{\geq 0}\), we define the inverse reachable set of \(P\) as_
\[\mathcal{R}^{-1}(P)=\left\{\mathbf{x}\ \big{|}\ \mathcal{R}(\mathbf{x})\cap P \neq\varnothing\right\}. \tag{33}\]
With the notion of the inverse reachable set, we can simplify the recursive construction of Q-sets in (19) as
\[\mathcal{Q}^{(i)}_{0} =\mathcal{P}_{\text{req}}(\mathbf{y}^{(i)}), \tag{34a}\] \[\mathcal{Q}^{(i)}_{k} =\left(\bigcap_{j\in\mathcal{N}_{i}}\mathcal{R}^{-1}(\mathcal{Q} ^{(j)}_{k-1})\right)\ \cap\ \mathcal{P}_{\text{req}}(\mathbf{y}^{(i)})\quad\forall\ k\geq 1. \tag{34b}\]
We now discuss the computation of the inverse reachable sets. Note that any admissible action \(K\in\mathcal{K}\) can be reversed. That is, if one can use an action to reach \(\mathbf{x}_{t+1}\) from some \(\mathbf{x}_{t}\), then one can also find a reverse action that brings the defender's allocation from \(\mathbf{x}_{t+1}\) to \(\mathbf{x}_{t}\). Based on this intuition, we introduce the notion of a reversed graph, which has the same node set as the original graph but with all the directed edges reversed.
**Definition 11**.: _For a graph \(\mathcal{G}=(\mathcal{V},\mathcal{E})\) with connectivity matrix \(A\), its reversed graph \(\widetilde{\mathcal{G}}=(\mathcal{V},\widetilde{\mathcal{E}})\) is defined based on the connectivity matrix \(\widetilde{A}=A^{\top}\)._
We denote \(\widetilde{\mathcal{K}}\) as the admissible action set of \(\widetilde{\mathcal{G}}\). The reachable set for the reversed graph is then defined as
\[\widetilde{\mathcal{R}}(\mathbf{x})=\left\{\mathbf{x}^{\prime}\ |\ \exists \widetilde{K}\in\widetilde{\mathcal{K}}\ \text{ s.t. }\mathbf{x}^{\prime}=\widetilde{K}\mathbf{x}\right\}.\]
The following lemma relates the (forward) actions and the reversed actions.
**Lemma 8**.: _Given an arbitrary admissible action under the original graph \(K\in\mathcal{K}\) and an arbitrary starting state \(\mathbf{x}\), suppose the resultant state is \(\mathbf{x}^{\prime}=K\mathbf{x}\). We can reverse the action using an admissible action under the reversed graph \(\widetilde{K}\in\widetilde{\mathcal{K}}\) to achieve \(\mathbf{x}=\widetilde{K}\mathbf{x}^{\prime}\). The reverse action \(\widetilde{K}\) can be constructed as_
\[\left[\widetilde{K}\right]_{ij}=\begin{cases}\frac{[K]_{ji}[\mathbf{x}]_{i}}{[ \mathbf{x}^{\prime}]_{j}}&\text{if }[\mathbf{x}^{\prime}]_{j}>0,\\ \frac{1}{\sum_{i}[A]_{ij}}&\text{if }[\mathbf{x}^{\prime}]_{j}=0\text{ and }[ \widetilde{A}]_{ij}=1,\\ 0&\text{if }[\mathbf{x}^{\prime}]_{j}=0\text{ and }[\widetilde{A}]_{ij}=0. \end{cases} \tag{35}\]
Proof.: See Appendix E.
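As a quick sanity check of Lemma 8, the sketch below constructs \(\widetilde{K}\) from (35) for a small hypothetical admissible action and verifies that \(\widetilde{K}\mathbf{x}^{\prime}=\mathbf{x}\). The example is chosen so that every entry of \(\mathbf{x}^{\prime}\) is positive; columns of \(\widetilde{K}\) associated with empty nodes, which are handled by the remaining branches of (35), do not affect the product.

```python
import numpy as np

# Hypothetical admissible defender action on 3 nodes: column j describes how the mass
# currently on node j is split over its out-neighbors (columns sum to 1).
K = np.array([[0.5, 0.0, 1.0],
              [0.5, 0.3, 0.0],
              [0.0, 0.7, 0.0]])
x = np.array([2.0, 1.0, 1.0])
x_next = K @ x                               # x' = K x = [2.0, 1.3, 0.7], all entries positive

# Reverse action from (35); only the branch with [x']_j > 0 is needed in this example.
K_tilde = np.zeros_like(K)
for i in range(3):
    for j in range(3):
        if x_next[j] > 0:
            K_tilde[i, j] = K[j, i] * x[i] / x_next[j]

assert np.allclose(K_tilde.sum(axis=0), 1.0)  # the reverse action is again left stochastic
assert np.allclose(K_tilde @ x_next, x)       # and it undoes the original action
```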
Based on the above result, we have the equivalence between the inverse reachable set of the graph \(\mathcal{G}\) and the reachable set of the reversed graph \(\widetilde{\mathcal{G}}\).
**Lemma 9**.: _For any graph \(\mathcal{G}\), we have_
\[\mathcal{R}^{-1}(P)=\widetilde{\mathcal{R}}(P)\qquad\forall\ P\subseteq \mathbb{R}^{N}_{\geq 0}.\]
Proof.: See Appendix E.
Lemma 9 leads directly to the following computationally-friendly definition of the Q-sets:
\[\mathcal{Q}^{(i)}_{0} =\mathcal{P}_{\text{req}}(\mathbf{y}^{(i)}), \tag{36a}\] \[\mathcal{Q}^{(i)}_{k} =\Big{(}\bigcap_{j\in\mathcal{N}_{i}}\widetilde{\mathcal{R}}( \mathcal{Q}^{(j)}_{k-1})\Big{)}\ \cap\ \mathcal{P}_{\text{req}}(\mathbf{y}^{(i)})\quad\forall\ k\geq 1. \tag{36b}\]
Based on the above result, we can easily prove that the Q-sets are polytopes.
**Theorem 4**.: _All Q-sets are polytopes._
Proof.: Recall that the reachable set of a polytope is also a polytope (see Lemma 2). With the equivalent Q-set definition in (36), one can easily show through an inductive argument that all Q-sets are polytopes.
As a direct consequence of Q-sets being polytopes, we have the following corollary.
**Corollary 4**.: _The k-step CRR, \(\alpha_{k}\), is attained at one of the vertices of the Q-set._
Proof.: Since the Q-sets are polytopes, the optimization for \(\beta^{(i)}_{k}\) in (29) is a linear program. Furthermore, since the objective is bounded from below by zero, an optimal solution is attainable and is attained at one of the vertices.
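To make the recursion (36) concrete on a toy instance, the sketch below restricts the defender allocations to an integer grid (a crude stand-in for the actual polytopes) and propagates the Q-sets by brute force on a hypothetical 3-node graph with a unit attacker. The graph, the resource totals, and the grid discretization are all illustrative assumptions, and the one-step reachability test is posed as a small flow-feasibility linear program.

```python
import itertools
import numpy as np
from scipy.optimize import linprog

# Toy 3-node directed graph: every node has a self-loop and an edge to the next node.
N = 3
edges = [(i, i) for i in range(N)] + [(i, (i + 1) % N) for i in range(N)]
out_nbrs = {i: sorted({j for (a, j) in edges if a == i}) for i in range(N)}
X, Y = 2, 1

# All integer defender allocations with total X (a discretized stand-in for the simplex).
grid = [x for x in itertools.product(range(X + 1), repeat=N) if sum(x) == X]

def in_P_req(x, i):
    # x defends the current step iff every node the attacker could move to holds >= Y.
    return all(x[j] >= Y for j in out_nbrs[i])

def reachable(x, x_new):
    # One defender step is a feasible flow over the edges: conserve the mass leaving
    # each node and realize the target amount arriving at each node.
    n_e = len(edges)
    A_eq, b_eq = [], []
    for v in range(N):
        A_eq.append([1.0 if a == v else 0.0 for (a, b) in edges])
        b_eq.append(x[v])
        A_eq.append([1.0 if b == v else 0.0 for (a, b) in edges])
        b_eq.append(x_new[v])
    res = linprog(c=np.zeros(n_e), A_eq=np.array(A_eq), b_eq=np.array(b_eq),
                  bounds=[(0, None)] * n_e)
    return res.success

# Q-set recursion (19)/(36), restricted to the integer grid.
Q = [{i: {x for x in grid if in_P_req(x, i)} for i in range(N)}]
for k in range(1, 5):
    Qk = {i: {x for x in Q[0][i]
              if all(any(reachable(x, x2) for x2 in Q[k - 1][j]) for j in out_nbrs[i])}
          for i in range(N)}
    Q.append(Qk)
print(Q[-1])   # grid allocations remaining in Q_4 for each node (toy model only)
```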
### Action Extraction From Q-Sets
Recall that all the results we obtained in the previous sections directly considered the states in the reachable set as the actions of the defender. However, when a dDAB game is played, the Player still needs to deploy a feasible action \(K_{t}\) to achieve the desired next state. Formally, the action extraction problem is formulated as:
**Problem 3** (Action Extraction).: _Given defender's current state \(\mathbf{x}_{t}\in\mathcal{Q}_{t}^{(i_{t-1})}\) and the observed attacker's next state \(i_{t}\in\mathcal{N}_{i_{t-1}}\), find the feasible action \(K_{t}\in\mathcal{K}\) that takes the defender to a new state \(\mathbf{x}_{t+1}\in\mathcal{R}(\mathbf{x}_{t})\cap\mathcal{Q}_{t}^{(i_{t})}\)._
By the construction of \(\mathcal{Q}_{t}^{(i_{t})}\), the existence of such new state \(\mathbf{x}_{t+1}\) is guaranteed. We propose to find a feasible action \(K_{t}\) that transitions the system to \(\mathbf{x}_{t+1}\) by solving a simple matrix equation. Recall the characterization of \(\mathcal{R}(\mathbf{x}_{t})\) in Lemma 1. Specifically, if \(\mathbf{x}_{t+1}\in\mathcal{R}(\mathbf{x}_{t})\cap\mathcal{Q}_{t}^{(i_{t})} \subseteq\mathcal{R}(\mathbf{x}_{t})\), it satisfies
\[\mathbf{x}_{t+1}=\sum_{\ell}\lambda^{(\ell)}\left(\hat{K}^{(\ell)}\mathbf{x}_ {t}\right). \tag{37}\]
Any \(\mathbf{x}_{t+1}\) in the intersection would have the safety guarantee, and consequently the selection can be arbitrary. Since the intersection set is a bounded polytope, one may simply select the centroid or a vertex of the intersection as \(\mathbf{x}_{t+1}\). Since \(\mathbf{x}_{t}\) and \(\{\hat{K}^{(\ell)}\}_{\ell}\) are all known variables at time step \(t\), the vector-form coefficients \(\boldsymbol{\lambda}\) can be found by solving the following problem:
\[\Phi\boldsymbol{\lambda}=\mathbf{x}_{t+1},\ \ \text{s.t.}\ \ \boldsymbol{ \lambda}\geq 0\ \text{and}\ \sum_{\ell}\lambda^{(\ell)}=1, \tag{38}\]
where the matrix \(\Phi\in\mathbb{R}^{|\mathcal{V}|\times|\hat{\mathcal{K}}|}\) has \(\left(\hat{K}^{(\ell)}\mathbf{x}_{t}\right)_{\ell=1}^{|\hat{\mathcal{K}}|}\) as its columns. Again, the feasibility of (38) is guaranteed, due to the construction of \(\mathcal{R}(\mathbf{x}_{t})\). With the solved \(\boldsymbol{\lambda}\), the feasible action that brings the defender from \(\mathbf{x}_{t}\) to \(\mathbf{x}_{t+1}\) is given by
\[K_{t}=\sum_{\ell}\lambda^{(\ell)}\hat{K}^{(\ell)}.\]
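A minimal sketch of this extraction step (Python/SciPy), using two hypothetical vertex actions \(\hat{K}^{(\ell)}\) in place of the extreme points of \(\mathcal{K}\) from Lemma 1, and a target \(\mathbf{x}_{t+1}\) that is reachable by construction so that (38) is feasible:

```python
import numpy as np
from scipy.optimize import linprog

# Two hypothetical vertex actions (column-stochastic matrices) and the current state.
K1 = np.array([[1.0, 0.0, 1.0],
               [0.0, 1.0, 0.0],
               [0.0, 0.0, 0.0]])
K2 = np.array([[0.0, 0.0, 0.0],
               [1.0, 0.0, 1.0],
               [0.0, 1.0, 0.0]])
K_hat = [K1, K2]
x_t = np.array([1.0, 2.0, 1.0])

# A target next state that is reachable by construction.
x_target = 0.6 * (K1 @ x_t) + 0.4 * (K2 @ x_t)

# Solve (38): find lambda >= 0 with Phi lambda = x_target and sum(lambda) = 1.
Phi = np.column_stack([K @ x_t for K in K_hat])
A_eq = np.vstack([Phi, np.ones((1, len(K_hat)))])
b_eq = np.append(x_target, 1.0)
res = linprog(c=np.zeros(len(K_hat)), A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * len(K_hat))
assert res.success
lam = res.x

# Recover the feasible action K_t = sum_l lambda_l K_hat_l and check it hits the target.
K_t = sum(l * K for l, K in zip(lam, K_hat))
assert np.allclose(K_t @ x_t, x_target)
```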
## 7 Numerical Illustrations
This section provides numerical examples that illustrate the results developed in the previous sections.
### Q-set Propagation
Figure 8 illustrates how the Q-sets, \(\mathcal{Q}_{k}^{(i)}\), change with the horizon \(k\). For the three-node graph selected for this example, the propagation in Algorithm 1 converges after four iterations, at which point the algorithm finds that \(k_{\infty}=4\). The CRR for this graph is \(\alpha_{\infty}=3\). It is worth noting that the Q-set for a given node may not change at every time step: e.g., \(\mathcal{Q}_{k}^{(1)}\) changes only twice between \(k=1\) to \(2\) and between \(k=3\) to \(4\).
We can verify the monotonicity of the Q-sets described in Remark 5 by observing how the Q-sets get "carved off" and become smaller as \(k\) increases. Specifically, some regions of the state space with small amount of resources get excluded when \(k\) changes from \(0\) to \(2\) and similarly from \(2\) to \(4\). As an example, the state \(\mathbf{x}=[0,0,1]\in\mathcal{Q}_{0}^{(1)}\) can guard against any immediate next action made by a unit attacker at node 1 (i.e., \(\mathbf{y}^{(1)}\)). This is shown by the red dot in \(\mathcal{Q}_{0}^{(1)}\) (top left subfigure in Figure 8). However, this state is insufficient to defend over two time steps, and thus it is not included in \(\mathcal{Q}_{2}^{(1)}\). Similarly, we can see that the state \(\mathbf{x}=[0,0,2]\in\mathcal{Q}_{2}^{(1)}\) is sufficient to guard over two time steps, but not for four or more time steps. The vertices of \(\mathcal{Q}_{4}^{(1)}\) (top right subfigure) are \([0,0,3]\), \([1,1,1]\), \([1,0,2]\), and \([0,2,1]\). One can verify that any of these states, as well as any convex combination of these states is sufficient to guard against one unit of no-splitting attacker indefinitely.
### Effect of Edges on CRR
The relationship between the CRR and the graph structure is not straightforward. One might, for example, expect a positive correlation between the number of edges and the CRR, since an increase in the number of outgoing edges from a node increases the number of neighboring nodes that must be covered by the defender. However, we show by a counter-example (found by the algorithm) that this is not the case.
The following example illustrates how the effect of additional edges can drastically change the CRR. Figure 9 provides examples of directed graphs with five nodes but with different edge sets. The corresponding indefinite-defense CRR \(\alpha_{\infty}\) for each graph is obtained using Algorithm 1.
In the simplest case of the ring graph, the defender only needs the same amount of resources as the attacker to guarantee indefinite defense, which matches the results in [23]. Interestingly, by adding only one directed edge connecting node 3 and node 4 to the graph, the CRR changes drastically to \(\alpha_{\infty}=5\). By further adding a self-loop on node 4, the CRR further increases to \(7\). Notably, this is greater than the number of nodes on this graph. We conjecture that \(\alpha_{\infty}>|\mathcal{V}|\) only occurs for directed graphs, but this remains to be verified. However, if we add a self-loop to node 3 instead of node 4, the CRR \(\alpha_{\infty}\) decreases from 5 to \(3\), which implies that additional edges may benefit the defender as well.
### Non-integer Resource Ratio
Another natural conjecture regarding the CRR is that it is always integer-valued. However, through the following dDAB example over a six-node graph (see Figure 10), we show that the resource ratio can be non-integer-valued for _finite-horizon_ dDAB games. For this example, Algorithm 1 returns \(\alpha_{2}=3.5\). In other words, 3 units of defender resource are not enough for a two-step defense against a single unit of attacker resource, but 3.5 units are sufficient.
Figure 8: Illustration of Q-sets evolution with different horizons \(k\). (a) The three node graph used for this example. (b) We consider a single unit of attacker resource, and \(\mathcal{Q}_{k}^{(i)}\) for nodes \(i=\{1,2\}\) and horizon \(k=\{0,2,4\}\) are shown here for brevity. The red dot in each figure indicates the element in the Q-set that achieves the smallest amount of resource for \(k\)-step defense, i.e., \(\beta_{k}^{(i)}\) in (29). (c) The evolution of \(\beta_{k}^{(i)}\) on each node, until they converge at \(k_{\infty}=6\).
Figure 10: A six-node graph. All self-loops are explicitly presented.
Figure 9: Examples of how CRR changes with the graph structure. All self-loops are explicitly presented. The necessary and sufficient amount of blue agents are placed in the safe set for a given red agent in each figure.
Figure 11 presents a game tree where 3 units of defender resource fail to defend against a single attacker. The attacker selects to start on node 3, i.e. \(\mathbf{y}_{-1}=\mathbf{y}^{(3)}\). The initial defender allocation corresponds to the only feasible state with three unit of defender resource in the \(\mathcal{P}_{\text{req}}(\mathbf{y}^{(3)})\). The attacker then moves from node 3 to node 2 at time step 0. Note that with the attacker on node 2, it is necessary for the defender to place one unit of resource on both nodes 5 and 6 to be in the required set, which leads to the three possible configurations at the beginning of time step 1. For each of the configurations, the attacker has a corresponding move, which leads to a \(\mathcal{P}_{\text{req}}\) (marked with light blue) that the defender cannot achieve at the beginning of time step 2. For example, in trajectory (i), the attacker moves from node 2 to node 5 at time step 1. This move leads to a \(\mathcal{P}_{\text{req}}\) that has one unit of defender on each of the nodes 2, 3 and 5, which cannot be achieved by the defender.14 Consequently, the attacker has a strategy to defeat the defender at the end of time step 2.
Footnote 14: Notice that node 2 does not have a self-loop.
Figure 12 presents the game tree starting with 3.5 units of defender resource, and we show that regardless of the (no-splitting) strategy used by the attacker, the defender can defend until the end of time step 2. Since \(\alpha_{2}=3.5\) is attained with \(\beta_{2}^{(3)}=3.5\), we let the attacker start with \(\mathbf{y}_{-1}=\mathbf{y}^{(3)}\). It is easy to verify that the initial defender state \(\mathbf{x}_{0}\) is in \(\mathcal{P}_{\text{req}}(\mathbf{y}^{(3)})\). The attacker has three feasible moves at \(t=0\): move to node 2, move to node 5, or stay at node 3. We only present the first two moves in Figure 12, since for the third move, the defender can just maintain its current state as a countermeasure and does not lose any defense time.15 Furthermore, we focus on explaining the attacker's move to node 2, since the defense against attacker moving to node 5 can be achieved without using the half unit of resource on node 6. After observing that the attacker moves to node 2, the defender takes action (a)16 and arrives at the state at the beginning of time step 1. The attacker then has two options, either move to node 5 or to node 6. Suppose the attacker moves to node 6, the defender initiates action (b)17, which ensures that the configuration at the beginning of time step 2 is still in the required set. Similar moves can be made for trajectory (ii) to ensure the defense until the end of time step 2. For more details regarding the defender actions (a) to (c), see Appendix F.
Footnote 15: Even though node 2 does not have a self-loop, the defender resources on nodes 2 and 5 can swap locations to keep the current configuration.
Footnote 16: Action (a) splits defender resources so that the unit of defender on node 2 moves to node 6; the half unit on node 3 moves to node 2 and the other half stays on node 3; the half unit on node 6 moves to node 1, and finally the unit on node 5 stays.
Footnote 17: Action (b) moves the half unit on node 1 to node 5; the half unit on node 2 to node 5, the unit on node 6 to node 1, and the rest of the resources on nodes 3 and 5 stay.
Through splitting and re-grouping its resources, the defender achieves defense with only an additional half unit of resource. The strategy presented is found by the algorithms in Section 4, which verifies the efficacy of the proposed approach.
Figure 11: A two-time-step game tree starting with one unit of attacker resource and three units of defender resource. Regardless of the strategy used by the defender, the defender will be defeated at the end of time step 2.
## 8 Open Problems
The Q-prop algorithm in Algorithm 1 is an iterative algorithm that finds the Q-sets. Through the sink problem in Section A, we have shown that there are graphs on which the Q-prop algorithm does not converge. Perhaps there are general conditions on the graph that guarantee that the Q-prop algorithm converges to the indefinite-defense Q-sets. It is also of interest to characterize the convergence behavior of the algorithm, i.e., asymptotic versus finite-iteration convergence.
Empirically, we observed that the critical resource ratio \(\alpha_{\infty}\) for undirected graphs is always integer-valued. We further observed that \(\alpha_{\infty}\leq|\mathcal{V}|\) for all undirected graphs and that \(\alpha_{\infty}>|\mathcal{V}|\) occurs only for directed graphs. It is unclear whether these two observations can be formally proved or whether additional, strengthened conditions on the underlying graphs are required.
## 9 Conclusion
In this work, we formulated a dynamic adversarial resource-allocation problem by combining the Colonel Blotto game with ideas from population dynamics on graphs. Instead of achieving a desired allocation instantly, the resources of each player must traverse the edges of the graph. An efficient reachable-set approach is developed to predict the evolution of the players' states. We provide a full characterization of the game by deriving the necessary and sufficient condition (the Q-sets) for either player to win the game and the associated strategies to achieve it. The efficacy of the proposed approach is verified through numerical simulations. Future work will investigate the conditions required for the convergence of the Q-prop algorithm, which leads to guaranteed indefinite defense. It is also of interest to consider heterogeneous resources as in [10] and decentralized decision-making via the common-information approach [27].
|
2303.09080
|
Node Subsampling for Multilevel Meshfree Elliptic PDE Solvers
|
Subsampling of node sets is useful in contexts such as multilevel methods,
computer graphics, and machine learning. On uniform grid-based node sets, the
process of subsampling is simple. However, on node sets with high density
variation, the process of coarsening a node set through node elimination is
more interesting. A novel method for the subsampling of variable density node
sets is presented here. Additionally, two novel node set quality measures are
presented to determine the ability of a subsampling method to preserve the
quality of an initial node set. The new subsampling method is demonstrated on
the test problems of solving the Poisson and Laplace equations by multilevel
radial basis function-generated finite differences (RBF-FD) iterations.
High-order solutions with robust convergence are achieved in linear time with
respect to node set size.
|
Andrew P. Lawrence, Morten E. Nielsen, Bengt Fornberg
|
2023-03-16T04:53:26Z
|
http://arxiv.org/abs/2303.09080v2
|
# Node Subsampling for Multilevel Meshfree Elliptic PDE Solvers
###### Abstract
Subsampling of node sets is useful in contexts such as multilevel methods, computer graphics, and machine learning. On uniform grid-based node sets, the process of subsampling is simple. However, on node sets with high density variation, the process of coarsening a node set through node elimination is more interesting. A novel method for the subsampling of variable density node sets is presented here. Additionally, two novel node set quality measures are presented to determine the ability of a subsampling method to preserve the quality of an initial node set. The new subsampling method is demonstrated on the test problems of solving the Poisson and Laplace equations by multilevel radial basis function-generated finite differences (RBF-FD) iterations. High-order solutions with robust convergence are achieved in linear time with respect to node set size.
**Keywords:** node set, point cloud, subsampling, elimination, thinning, agglomeration, coarsening, multilevel, multicloud, multiresolution, meshfree, RBF, RBF-FD, Laplace equation, Poisson equation.
**Mathematical Subject Classification:** Primary: 65N50,65N22; Secondary: 65F10,65N06,65N55.
## 1 Introduction
Subsampling of variable density node sets has applications in polynomial approximation, numerical integration, artificial intelligence, machine learning, multilevel methods, and computer graphics. For each of these applications, algorithms exist in 1D, 2D, and even \(N\)-D space, but their utility is often application specific.
Subsampling methods have been specifically developed to choose points optimized for global polynomial approximation and numerical integration [10, 32, 35, 41]. Node sets have been optimized for global RBF collocation methods using multi-objective optimization [36]. However, the coarse node sets these algorithms produce do not, in general, preserve the variable density of the initial, fine node sets.
In the context of data driven artificial intelligence and machine learning, the process of tuning data rather than tuning model parameters has driven research on subsampling [25, 30, 38, 40]. With the exception of the generalized diversity subsampling algorithm in [38], these algorithms are either designed for uniform subsampling or statistical learning techniques such that they are not well-suited to the preservation of variable density data sets.
Research in computer graphics has led to considerable developments in the area of Poisson disk sampling which serves to subsample variable density node sets, producing resultant node sets with desirable statistical and minimum spacing properties [9, 11]. The process of Poisson disk sampling is recast as a weighted sample elimination or weighted subsampling problem in [53]. Other efforts have employed Poisson disk sampling to produce heirarchical node sets for multilevel methods using RBFs [27], albeit on uniform density node sets.
Use of a geometric multilevel method over a variable density node set requires a subsampling routine which maintains the variable density of the original node set. Algebraic multilevel algorithms coarsen the operators themselves and the coarse levels have no intuitive geometric meaning or interpretation [39, 45, 13]. As such, coarsening methods for algebraic multilevel algorithms are not useful for geometric multilevel schemes [37].
Algebraic multilevel methods (AMM) provide robust and scalable linear solvers for a wide class of problems. They are in principle a natural choice for meshfree discretizations since the hierarchical levels are a natural byproduct of
the inter-level transfer and coarse level operators. In the context of meshfree systems, AMM has been applied to methods that do not use RBF-FD [33][31] and those that do [52]. For solvers which use RBFs, it has been shown that geometric multilevel methods (GMM) converge in fewer iterations [52]. Additionally, the set-up time for AMM is higher overall [48][47]. The cost of constructing the coarse levels themselves is higher in GMM, but that cost is reduced for a meshfree domain, which motivates the need for a fast subsampling algorithm as explored in the following sections. Tests run in [47] demonstrate that AMM is sensitive to the mesh variation and resolution on the coarsest level. The proper choice of parameters (the strength parameter in particular) for AMM can reduce the total computation time by \(15\)-\(40\%\), per [47]. The GMMs have no such parameter sensitivity and have less sensitivity to mesh variation. According to [34], when using GMM and AMM as preconditioners for Krylov methods, the scheme will converge more quickly for preconditioned matrices for which the spectrum is more heavily clustered toward one. This corresponds to coefficient matrices1 with spectra clustered at zero. In the problems considered in [52], the spectra for GMM were more clustered around one than those of the compared AMM method (PyAMG [3]) in all cases. For these reasons, algebraic multigrid methods and the coarsening methods therein are not considered here.
Footnote 1: Those representing the application of one V-cycle of either GMM or AMM
The combination of geometric multilevel methods with meshfree solvers for partial differential equations has become increasingly popular. Meshfree methods such as radial basis function-generated finite differences (RBF-FD) discretize at scattered (quasi-uniform) nodes rather than with meshes. RBF-FD methods, in particular, allow for high geometric flexibility and can benefit from high density variation but require the underlying node sets to meet certain quality constraints in order to ensure stability and accuracy of the solution [19, 20, 29]. Robust algorithms for generating such node sets exist [18, 44, 46] and are utilized in this paper. The application of meshfree partial differential equation (PDE) solvers within a multilevel scheme requires a similarly robust algorithm for coarsening node sets [15, 16]. When implementing a multilevel algorithm, one typically starts with the initial, fine node set. Given a desired level of refinement, the task of producing a coarse node set from a fine node set can be accomplished in one of two ways: one can either select a subset of the fine node set or generate a node set that is independent of the fine node set. Many methods exist to create coarse node sets which are not subsets of the initial, fine node set [12, 43, 42]. However, the operators to coarsen and refine between independent node sets can introduce numerical instabilities and require more memory. Alternatively, selecting a subset simplifies the coarsening and refining operators and requires less memory. The combination of a multilevel scheme with RBFs has been explored before, however, primarily on uniform (Cartesian grid) or uniformly distributed scattered node sets [14, 27, 49, 50, 51, 33, 31]. The use of multilevel techniques on RBF-FD meshfree solvers for PDEs over variable density node sets is explored in [54]; however, the subsampling routine used therein, based on [24], is not adjustable to coarse node sets of any size; it is limited to coarsening by factors of \(1/n,n\in\mathbb{N}\). Due to this limitation, it is not considered in this paper. Though not applicable in its original form (as it relies on information from a mesh at the fine level), an extension of the algorithm found in [22] can be applied to meet the outlined needs for a variable density node set subsampling algorithm. However, it also suffers from inflexible coarsening factors and, as such, is not considered here. The multilevel meshfree PDE solver presented here achieves high-order solutions with robust convergence in linear time with respect to node set size.
In contrast to the process of generating a coarser node set from an initial fine node set, one might consider an initial coarse node set and the generation of finer node sets. Most refining algorithms use some residual function to determine if refinement should take place [6, 7, 55]. Most refinement techniques require user-supplied criteria in the form of a residual or principle function [28] at which some set of test nodes is evaluated to determine where and how to refine. These function evaluations add computational cost. Additionally, the goodness of the refinement depends heavily on the proposed test nodes. Simple ways of determining these nodes, such as using the halfway points between existing nodes [8], may not apply well enough to variable density node sets. On the other hand, more robust methods such as determining the Voronoi nodes [2] or the centroids of a node and its \(K\) nearest neighbors [23] may still be ill-suited for variable density refinement and introduce more significant increases in computational cost. Other refinement methods are limited to uniform refinement which is not appropriate for our current applications [43, 42]. Ultimately, refinement techniques will not be considered here.
Throughout this paper, the terms subsampling and coarsening will be used to refer to the process of selecting a subset from a collection of nodes or points. The subsampling algorithms considered in this paper are outlined in Section 2, boundary considerations for subsampling routines are covered in Section 3, numerical tests and comparisons between those presented earlier are presented in Section 4. Additionally, Section 5 includes two examples of a meshfree multilevel RBF-FD PDE solver utilizing the novel moving front node subsampling method from Section 2.1.
## 2 Subsampling Algorithms
This section surveys the methodology of four subsampling algorithms. In addition to a novel moving front method presented in Section 2.1, a weighted subsampling method based on [53], a method based on Poisson disk sampling,
and the generalized diversity subsampling method found in [38] are presented in Sections 2.2, 2.3, and 2.4 respectively.
### Moving Front
The novel subsampling algorithm presented here is a streamlined application of a 'moving front' strategy akin to those found in the node generation algorithms in [46] and [18]. Algorithm 1 begins by sorting all nodes in the fine node set according to an arbitrary direction2; for example, from the bottom to the top. Then, the \(k\) (e.g. \(k=10\)) nearest neighbors to each node are determined3. For each node in the fine node set and working in the chosen direction, first check if the node has already been marked. If it has been marked, continue on to the next node. If it has not been marked, mark each of the \(k\) nearest neighbors that lies within \(c\) times (for example \(c=1.5\)) the distance to the present node's nearest neighbor and above the present node in the sort. All marked nodes are then removed to produce the coarse node set. The moving front algorithm generalizes immediately to any number of space dimensions. A Python code for the moving front algorithm can be found in Appendix A.1.
Footnote 2: The directional sorting and progression of the algorithm enables a cost savings in that only the nodes above the present one need to be searched
```
1:function MFSub(\(X\_fine=\{\boldsymbol{x_{1},...,x_{N}}\},c,k\))
2: Sort the nodes in \(X\_fine\)
3: Find the indices and distances of the \(k\) nearest neighbors for each point in \(X\_fine\)
4:for\(i=1:N\)do
5:if the node \(\boldsymbol{x_{i}}\) has not already been marked then
6: Determine which nearest neighbors are within a radius of \(c\) times the distance to the nearest neighbor
7: Of those, determine which are 'above' \(\boldsymbol{x_{i}}\) in the sort order
8: Mark these nodes
9:endif
10:endfor
11: Remove the marked nodes from \(X\_fine\) to produce \(X\_coarse\) and return \(X\_coarse\)
12:endfunction
```
**Algorithm 1** Moving Front Algorithm
It should be reiterated that intrinsic to the moving front algorithm is a directional bias. More specifically, as the algorithm proceeds across a node set, the resultant node set will differ based on the direction in which the moving front travels. The effects of this directional bias are insignificant, however, as shown in Sections 3.2 and 5.3.
### Weighted Subsampling
The weighted subsampling method compared here is based on the work presented in [53], but modified for variable density node sets.4 Each node is assigned a weight based on its distance to its nearest neighbors. The algorithm then iterates: it removes the node with the highest weight, adjusts the remaining weights accordingly, and repeats until the desired number of nodes remains. The code for this implementation can be found on the author's GitHub, [26].
Footnote 4: A sampling example presented in [53] should, in principle, serve to subsample variable density node sets. However, after repeated attempts, the example was not reproducible.
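For illustration, the sketch below implements this weighted-elimination idea; the specific weight function (inverse mean distance to the surviving nearest neighbors) and the brute-force weight update are illustrative assumptions rather than the exact choices of [53].

```python
import numpy as np
from scipy.spatial import cKDTree

def weighted_eliminate(xy, n_keep, k=8):
    """Greedy weighted sample elimination (simplified sketch): repeatedly
    remove the most 'crowded' node until n_keep nodes remain."""
    n = len(xy)
    alive = np.ones(n, dtype=bool)
    dists, nbrs = cKDTree(xy).query(xy, k=k + 1)
    dists, nbrs = dists[:, 1:], nbrs[:, 1:]        # drop each node itself

    def weight(i):
        d = dists[i][alive[nbrs[i]]]               # distances to surviving neighbors
        return 0.0 if d.size == 0 else 1.0 / d.mean()

    w = np.array([weight(i) for i in range(n)])
    while alive.sum() > n_keep:
        i = int(np.argmax(np.where(alive, w, -np.inf)))
        alive[i] = False                           # eliminate the highest-weight node
        for j in nbrs[i]:                          # and update its neighbors' weights
            if alive[j]:
                w[j] = weight(j)
    return xy[alive]
```

A heap-based priority queue would make the repeated maximum search efficient for large node sets.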
### Poisson Disk Subsampling
Given a radius of exclusion for each node (such that the radii of exclusion are spatially variable), a node is randomly selected from the fine node set and accepted into the coarse node set if its radius of exclusion does not overlap that of any of the previously accepted nodes. The first node in the coarse node set is chosen randomly. The radius of exclusion of each node is the product of the distance to its nearest neighbor in the fine node set and a hyperparameter \(c\), i.e. \(r(x_{i})=c\cdot r_{min}(x_{i})\). This algorithm is similar to the thinning method presented in [44] and the Poisson thinning method presented in [27]. The use of a nearest neighbor search to support spatially variable radii of exclusion limits
the computational complexity to no better than \(O(N\log N)\), in contrast to the \(O(N)\) algorithm in [4]. The code for this implementation can be found on the author's GitHub, [26].
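The acceptance test can be sketched as follows; the random visiting order and the brute-force overlap check are illustrative simplifications (a background grid or tree structure would be used in practice, as noted above).

```python
import numpy as np
from scipy.spatial import cKDTree

def poisson_disk_thin(xy, c=1.5, seed=0):
    """Variable-radius Poisson disk thinning (sketch): accept a randomly
    chosen node if its exclusion disk overlaps no previously accepted disk."""
    rng = np.random.default_rng(seed)
    r = c * cKDTree(xy).query(xy, k=2)[0][:, 1]    # r(x_i) = c * nearest-neighbor distance
    accepted = []
    for i in rng.permutation(len(xy)):
        if all(np.linalg.norm(xy[i] - xy[j]) >= r[i] + r[j] for j in accepted):
            accepted.append(i)
    return xy[accepted]
```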
### Generalized Diversity Subsampling
The generalized diversity subsampling algorithm found in [38] selects a subsample from the fine node set according to an arbitrary, specified distribution. The distribution utilized in this paper is a function of the distance to the nearest neighbor of each node.
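The algorithm of [38] is considerably more involved than what is shown here; the sketch below only illustrates the interface used in this paper, namely drawing a subsample of a prescribed size according to a distribution that is a function of each node's nearest-neighbor distance (the inverse-distance weighting is an illustrative assumption).

```python
import numpy as np
from scipy.spatial import cKDTree

def diversity_style_subsample(xy, n_keep, seed=0):
    """Draw n_keep nodes with selection probability given by a function of
    the nearest-neighbor distance, so denser regions retain more nodes."""
    rng = np.random.default_rng(seed)
    d_nn = cKDTree(xy).query(xy, k=2)[0][:, 1]     # distance to nearest neighbor
    p = 1.0 / d_nn
    idx = rng.choice(len(xy), size=n_keep, replace=False, p=p / p.sum())
    return xy[idx]
```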
## 3 Boundary Considerations
The purpose of this section is to illustrate the potential pitfalls that can occur if boundary nodes are included in the domain node set without being handled separately and to propose methods for overcoming those pitfalls. For the sake of demonstration, only the moving front algorithm is considered in this section. Initial node sets are generated by [46] and nodes near the boundary have been repelled5 prior to any subsampling.
Footnote 5: following the repel methodology described in [18]
### Subsample Boundary with Domain
When applying the moving front algorithm naively to a set for which the boundary nodes are included in the domain, the top boundary nodes are undesirably subsampled faster than the ones at the lower boundary, as demonstrated in Figure 1(a). One way to reduce the subsampling inconsistencies is to include nodes interior and exterior to each boundary as seen in Figure 1(b).
Another way to reduce the inconsistencies in subsampling which may be due to the inherent directional bias of the moving front algorithm is to alternate the direction between subsampling iterations. This technique of alternating direction is unsatisfactory, however, because an ideal method would be effective independent of any inherent directional bias. To further improve robustness of the moving front method in the presence of boundaries, the following section considers subsampling boundaries separately.
Figure 1: The moving front subsampling algorithm applied to a test node set with two boundaries. The initial node set is subsampled three times. The boundary node set is included in the domain node set such that they are subsampled collectively and simultaneously. Subsampling performance along the boundary is improved by including nodes interior and exterior to all boundaries.
### Subsample Boundary Separately
First, the given boundary nodes are subsampled. Then, any domain nodes within a prescribed distance of the boundary nodes are removed. Finally, the domain nodes are subsampled. The effects of this two-step process can be seen in Figure 2. Figure 2(a) alternates direction of the moving front algorithm while Figure 2(b) does not. A significant improvement in how consistently the algorithm behaves across the node set is apparent, independent of any direction bias in the subsampling algorithm.
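A minimal sketch of this treatment is given below; `subsample` stands for any of the routines of Section 2 and the clearance distance is a user-supplied assumption.

```python
from scipy.spatial import cKDTree

def subsample_with_boundary(xy_domain, xy_boundary, subsample, clearance):
    """(1) subsample the boundary nodes, (2) remove domain nodes within
    `clearance` of the coarse boundary, (3) subsample the remaining domain."""
    bdy_coarse = subsample(xy_boundary)
    dist_to_bdy = cKDTree(bdy_coarse).query(xy_domain, k=1)[0]
    dom_coarse = subsample(xy_domain[dist_to_bdy > clearance])
    return dom_coarse, bdy_coarse
```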
## 4 Comparisons of Subsampling Methods
This section compares the performance in preserving node density variation through iterative coarsening of the four subsampling algorithms found in Section 2. For each example, the primary node set has been generated from the trui image, see Figure 3(a), using the node generation algorithm from [46], see Figure 3(b). While this initial node set does not have immediate application to solving PDEs, its radically varying node densities make the trui image a good test problem for visually spotting any algorithmic artifacts. The trui image also contains regions of uniform density, thus illustrating subsampling capabilities on regions of both locally variable and locally uniform node densities.
Figure 2: The moving front subsampling algorithm applied to a test node set with two boundaries. The initial node set is subsampled three times. In these figures, the boundary node set is subsampled separately from the domain node set. Subsampling performance along the boundary is improved by subsampling boundary nodes independently. Additionally, no directional bias is detectable even when the algorithm does not alternate direction.
### Heuristic Comparison
This section provides visualizations of the subsampled node sets of each algorithm. Each algorithm is applied twice to the dithered trui image as seen in Figure 4. The original node set contains 36303 nodes, the first subsample contains 10553 nodes, and the second subsample contains 3404 nodes. Each algorithm discussed, excluding the generalized diversity subsampling algorithm6, requires a parameter, \(c\), to control the level of coarsening. To reproduce the subsamples in Figure 4, the values of \(c\) used in each algorithm are listed in Table 1. The moving front algorithm also relies on a choice of nearest neighbors which was \(k=10\) for these tests.
Footnote 6: The generalized diversity subsampling algorithm explicitly requires a target number of nodes as input rather than a parameter.
A heuristic comparison between the subsampling iterations primarily demonstrates the visual goodness of the first three algorithms over the generalized diversity subsampling algorithm. Among the remaining three, the woman's nostrils are more distinct in the moving front and Poisson disk algorithms than in the weighted subsampling, while the Poisson disk algorithm seems to preserve the mouth slightly more clearly by the second subsampling. Additionally, the moving front and Poisson disk algorithms preserve a higher level of clarity in the patterns7 in the trui scarf than the weighted subsampling algorithm. Again, the moving front and Poisson disk subsampling algorithms each better preserve the density disparity between areas of low and high node density in the original dithering8 than do the weighted or generalized diversity subsampling algorithms. Finally, the Poisson disk algorithm may have a tendency to subsample too aggressively in places9. It should be noted that no directional bias of the moving front algorithm is visible.
| Method | First Subsampling | Second Subsampling |
| --- | --- | --- |
| MF | 1.5101 | 1.518 |
| W | 3.44 | 3.1 |
| PD | 1.4931 | 1.5394 |
Table 1: The parameters, \(c\), for reproducing the node sets in Figure 4 for the moving front (MF), weighted (W), and Poisson disk (PD) subsampling algorithms. The generalized diversity subsampling algorithm explicitly relies on the desired number of nodes in the coarse node set and thus has no parameter listed here. The moving front algorithm also relies on a choice of nearest neighbors which was \(k=10\) for these tests.
Figure 3: The original trui.png image and a dithered version with 36,303 nodes, obtained by the algorithm in [46].
Figure 4: A visualization of the initial dithered trui image (36303 nodes), the first subsampling (10553 nodes), and the second subsampling (3404 nodes). From top to bottom, the rows demonstrate the moving front, weighted, Poisson disk, and generalized diversity subsampling algorithms. The first column of the figure contains a redundant image of the initial dithering for convenience in comparison.
### Node Quality Measures
While visually comparing the results of each algorithm is useful, ultimately a more rigorous and quantitative comparison is desirable. As such, a variety of node quality measures are commonly discussed when comparing uniform subsampling methods [46, 38]. However, metrics which describe the quality of spatially variable node densities are less common [46]. One way of evaluating the quality of a variable density node set on its own is through the local regularity of the distribution of distances to the nearest \(k\) neighbors, \(\delta_{i,j}\), \(i=1,2,\ldots,k\), for each node \(x_{j}\).
It is here considered sufficient to expect the node generation algorithm to produce an initial node set which is sufficiently good10. The responsibility of a subsampling algorithm is then to preserve the characteristics of the original node set. A natural way to determine how well any subsampling preserves the quality of the initial node set is to measure the coarse node set in comparison to the fine node set.
Footnote 10: Where goodness is determined by any number of node quality measures chosen based upon context.
The two novel measures presented here are straightforward extensions of the commonly accepted measures of local regularity for evaluating the quality of variable density node sets and are referred to as measures of comparative local regularity (CLR). These CLRs contrast the average or standard deviation of distances to the \(k\) nearest neighbors of the fine node set with those of the coarse node set. For each node set \(X\), the Euclidean distance between each node \(x_{j}\in X\) and its \(k\) nearest neighbors is calculated as \(\delta_{i,j}=\|x_{j}-x_{i,j}\|\). The average \(\overline{\delta}_{j}\) or standard deviation \(\sigma_{j}\) of these distances is then found for each node \(x_{j}\in X\). These are typical measures of local regularity. However, the present goal is to measure how similar the initial and subsampled node sets are. To this end, the distributions of \(\overline{\delta}_{j}^{\text{\,fine}}\) and \(\overline{\delta}_{j}^{\text{\,coarse}}\) over each of the fine and coarse node sets, respectively, are first normalized to be between 0 and 1. Then the difference between these distributions is calculated at each of the nodes in the coarse node set and each of the collocated nodes in the fine node set11. Finally, the standard \(L_{2}\) norm is taken to produce a measure of CLR. The same process can be applied using the standard deviation \(\sigma_{j}\) of those distances \(\delta_{i,j}\).
Footnote 11: Effectively, \(\overline{\delta}_{j}^{\text{fine}}\) needs only be calculated at those nodes \(x_{j}\) in the fine node set which also belong to the coarse node set
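The measure can be computed as in the sketch below; the nearest-neighbor lookup used to match collocated nodes is an implementation convenience consistent with footnote 11.

```python
import numpy as np
from scipy.spatial import cKDTree

def clr(xy_fine, xy_coarse, k=5, use_std=False):
    """Comparative local regularity between a fine node set and a coarse
    subset of it, based on distances to the k nearest neighbors."""
    def normalized_stat(xy):
        d = cKDTree(xy).query(xy, k=k + 1)[0][:, 1:]   # distances to k nearest neighbors
        s = d.std(axis=1) if use_std else d.mean(axis=1)
        return (s - s.min()) / (s.max() - s.min())     # normalize to [0, 1]

    stat_fine = normalized_stat(xy_fine)
    stat_coarse = normalized_stat(xy_coarse)
    idx = cKDTree(xy_fine).query(xy_coarse, k=1)[1]    # collocated nodes in the fine set
    return np.linalg.norm(stat_fine[idx] - stat_coarse)
```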
Given that these CLRs measure the difference between a given initial node set and a subsampling of it, it is ideal to minimize these measures across algorithms. The presented CLRs should only be compared between subsamplings of the same size and from the same initial node set. Upon comparison of the average and standard deviation CLRs for each algorithm across various values of nearest neighbors as seen in Figure 5, it is clear that the moving front and Poisson disk algorithms preserve the characteristics of the initial dithered trui image better than the other two algorithms. This behavior is consistent across various sizes of node sets. Between them, however, it is not clear which one is qualitatively better.
### Computational Cost
When evaluating any numerical scheme, computational cost must be considered as much as any other aspect of performance. The moving front, weighted, and Poisson disk subsampling algorithms presented in this paper were coded in MATLAB12 The generalized diversity subsampling algorithm is coded in Python as presented in [38]. A comparison of the computational complexity can be found in Figure 6. The average execution times in Figure 6 are calculated based on ten repetitions of each algorithm subsampling from the node set size of the previous data point to the node set size for the given point using the timeit commands native to MATLAB and Python. The moving front algorithm is clearly the fastest algorithm of those compared here. The significant computational cost savings secures the moving front algorithm as the best overall performing algorithm.
Footnote 12: The code for the moving front algorithm included in Appendix A.1 is provided in Python for the reader's convenience. Code is available in both MATLAB and Python on the author's GitHub [26].
## 5 Meshfree multilevel RBF-FD solver
Traditionally, multigrid solvers have been used to accelerate the solution of large systems of equations [5, 45]. Multigrid methods, however, require structured grids as the name indicates. In this section, it is illustrated how to utilize the geometric flexibility of RBF-FD in combination with the proposed node subsampling strategy to set up a geometric multilevel solver. Each example uses spatially varying node sets that seek to match the solutions. Consider the two-dimensional Poisson problem,
\[\nabla^{2}u=f,\ \ \mathbf{x}\in\Omega \tag{1}\] \[u=g,\ \ \mathbf{x}\in\partial\Omega\]
where \(u=u(\mathbf{x})=u(x,y)\in\mathbb{R}\) is the exact solution in a disk with unit diameter, i.e., \(\Omega=\{(x,y):x^{2}+y^{2}\leq 0.5^{2}\}\), \(g=g(\mathbf{x})\in\mathbb{R}\) specifies Dirichlet boundary conditions on \(\partial\Omega\) and \(f=f(\mathbf{x})\in\mathbb{R}\) specifies the source term. Two different
Figure 5: The comparative local regularity (CLR) of the average distance and standard deviation of distances for \(k=2,3,...,14\) nearest neighbors of various subsampling methods (weighted, moving front, Poisson disk, and generalized diversity subsampling) applied once and twice to the dithered trui image. For both measures of CLR, a lower value is better.
problems are solved using variable density node sets in order to test the applicability of the proposed node subsampling strategy. The first problem considered is the Poisson problem for which \(g=0\) and \(f=200e^{-100r}\left(100r-1\right)/r\) such that the solution given in polar coordinates is \(u(r,\theta)=2\exp(-r/0.01)\), while the other is a Laplace problem (i.e. \(f=0\)) for which \(g=\cos(10\theta)\) such that the solution given is \(u(r,\theta)=1024\cos(10\theta)r^{10}\) (see Figure 7).
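For reference, the manufactured data of both test problems can be evaluated as follows; the function names are illustrative and \(r\), \(\theta\) denote the usual polar coordinates.

```python
import numpy as np

def poisson_data(x, y):
    """Source term f and exact solution u of the Poisson test problem."""
    r = np.sqrt(x**2 + y**2)
    f = 200.0 * np.exp(-100.0 * r) * (100.0 * r - 1.0) / r
    u = 2.0 * np.exp(-100.0 * r)          # = 2 exp(-r / 0.01)
    return f, u

def laplace_data(x, y):
    """Boundary data g and exact solution u of the Laplace test problem."""
    r = np.sqrt(x**2 + y**2)
    theta = np.arctan2(y, x)
    u = 1024.0 * np.cos(10.0 * theta) * r**10
    g = np.cos(10.0 * theta)              # value of u on the boundary r = 0.5
    return g, u
```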
### Radial basis function-generated finite differences
To discretize the problem in (1) using RBF-FD [1, 17, 20] we introduce the polyharmonic spline (PHS) radial basis functions \(\phi_{i}(r)=\phi\left(\|\mathbf{x}-\mathbf{x}_{i}\|_{2}\right)=\|\mathbf{x}-\mathbf{x}_{i}\|_{2 }^{2k+1}\) and bivariate monomials \(p_{j}(\mathbf{x})\) to approximate the exact solution to (1) as
\[u(\mathbf{x})\approx u_{h}(\mathbf{x})=\sum_{i=1}^{n}\kappa_{i}\phi\left(\|\mathbf{x}-\bm {x}_{i}\|_{2}\right)+\sum_{j=1}^{\ell}\gamma_{j}p_{j}(\mathbf{x}) \tag{2}\]
Figure 6: The computation time of each subsampling algorithm. The node set size is that of resultant subsample. Each subsample is subsequent such that the coarse node set of the previous iteration is the fine node set of the next. The execution time was averaged over ten iterations of the moving front, weighted, Poisson disk, and generalized diversity subsampling algorithms. Note the logarithmic scales. The moving front algorithm is significantly faster than any of the other algorithms.
Figure 7: The analytical solutions used for the (a) Poisson and (b) Laplace test problems.
which we require to match at \(n\) nodes, i.e., spatial points \(\{\mathbf{x}_{i}\}_{i=1}^{n}\),
\[u(\mathbf{x}_{i})=u_{h}(\mathbf{x}_{i}),\ \ \ \text{for}\ \ i=1,2,...,n, \tag{3}\]
while enforcing the additional constraints,
\[\sum_{i=1}^{n}\kappa_{i}p_{j}(\mathbf{x}_{i})=0,\ \ \ \text{for}\ \ j=1,2,...,\ell, \tag{4}\]
where \(\ell=(m+1)(m+2)/2\) is the number of monomial terms in a bivariate polynomial of degree \(m\). The above equations can be arranged in a linear system of equations,
\[\tilde{A}\begin{bmatrix}\mathbf{\kappa}\\ \mathbf{\gamma}\end{bmatrix}=\begin{bmatrix}A&P\\ P^{T}&0\end{bmatrix}\begin{bmatrix}\mathbf{\kappa}\\ \mathbf{\gamma}\end{bmatrix}=\begin{bmatrix}\mathbf{u}\\ \mathbf{0}\end{bmatrix}, \tag{5}\]
where \(\mathbf{\kappa},\mathbf{u}\in\mathbb{R}^{n}\), \(\mathbf{\gamma}\in\mathbb{R}^{\ell}\), \(A_{ij}=\phi\left(\|\mathbf{x}_{i}-\mathbf{x}_{j}\|_{2}\right)\) is an entry of the RBF collocation matrix \(A\in\mathbb{R}^{n\times n}\) and \(P_{ij}=p_{j}(\mathbf{x}_{i})\) is an entry of the supplementary polynomial matrix \(P\in\mathbb{R}^{n\times\ell}\). Now, the linear operation \(\mathcal{L}\) can be approximated at an evaluation point \(\mathbf{x}_{e}\) as,
\[\mathcal{L}u|_{\mathbf{x}_{e}}\approx\sum_{i=1}^{n}\kappa_{i}\mathcal{L}\phi \left(\|\mathbf{x}-\mathbf{x}_{i}\|_{2}\right)|_{\mathbf{x}_{e}}+\sum_{j=1}^{\ell}\gamma_ {j}\mathcal{L}p_{j}(\mathbf{x})|_{\mathbf{x}_{e}}, \tag{6}\]
which again can be arranged in matrix-vector format,
\[\mathcal{L}u|_{\mathbf{x}_{e}}\approx\begin{bmatrix}\mathbf{a}^{T}&\mathbf{b}^{T}\end{bmatrix} \begin{bmatrix}\mathbf{\kappa}\\ \mathbf{\gamma}\end{bmatrix}=\begin{bmatrix}\mathbf{a}^{T}&\mathbf{b}^{T}\end{bmatrix} \tilde{A}^{-1}\begin{bmatrix}\mathbf{u}\\ \mathbf{0}\end{bmatrix}=\begin{bmatrix}\mathbf{w}^{T}&\mathbf{v}^{T}\end{bmatrix} \begin{bmatrix}\mathbf{u}\\ \mathbf{0}\end{bmatrix}=\mathbf{w}^{T}\mathbf{u} \tag{7}\]
where \(a_{i}=\mathcal{L}\phi\left(\|\mathbf{x}-\mathbf{x}_{i}\|_{2}\right)|_{\mathbf{x}_{e}}\) corresponds to the \(i\)th entry of \(\mathbf{a}\in\mathbb{R}^{n}\) and \(b_{j}=\mathcal{L}p_{j}(\mathbf{x})|_{\mathbf{x}_{e}}\) corresponds to the \(j\)th entry of \(\mathbf{b}\in\mathbb{R}^{\ell}\). The weights necessary for the multilevel solver, i.e. \(\mathbf{w}\in\mathbb{R}^{n}\), can equivalently be computed by solving the linear system,
\[\begin{bmatrix}A&P\\ P^{T}&0\end{bmatrix}\begin{bmatrix}\mathbf{w}\\ \mathbf{v}\end{bmatrix}=\begin{bmatrix}\mathbf{a}\\ \mathbf{b}\end{bmatrix}. \tag{8}\]
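As an illustration of (8), the sketch below computes the Laplacian weights for a single stencil with \(\phi(r)=r^{3}\) (i.e. \(k=1\)) and bivariate monomials up to total degree \(m\); the function name and the dense assembly are illustrative, and in practice the stencil nodes are shifted and scaled about the evaluation point for conditioning.

```python
import numpy as np

def laplacian_weights(X, xe, m=2):
    """RBF-FD weights w approximating the Laplacian at xe from the stencil
    nodes X (n x 2 array), obtained by solving the saddle-point system (8)."""
    n = X.shape[0]
    exps = [(a, deg - a) for deg in range(m + 1) for a in range(deg + 1)]
    ell = len(exps)                                    # (m+1)(m+2)/2 monomials

    R = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    A = R**3                                           # phi(r) = r^3
    a = 9.0 * np.linalg.norm(X - xe, axis=1)           # Laplacian of r^3 in 2D is 9 r

    P = np.array([[x**p * y**q for (p, q) in exps] for (x, y) in X])
    def lap_mono(p, q, x, y):                          # Laplacian of x^p y^q at (x, y)
        out = 0.0
        if p >= 2: out += p * (p - 1) * x**(p - 2) * y**q
        if q >= 2: out += q * (q - 1) * x**p * y**(q - 2)
        return out
    b = np.array([lap_mono(p, q, xe[0], xe[1]) for (p, q) in exps])

    M = np.block([[A, P], [P.T, np.zeros((ell, ell))]])
    return np.linalg.solve(M, np.concatenate([a, b]))[:n]
```

In the tests below, each row of the difference operators \(L_{j}\) is built from such a local stencil with \(n=2\ell\) nodes.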
### Geometric multilevel elliptic solver
The meshfree geometric multilevel solver introduced here is based on similar ideas as the meshfree geometric multilevel method [52] used for solving PDEs on surfaces, although some parts differ. In this study, no Krylov subspace methods will be used to increase the rate of convergence [21]. Furthermore, the coarse grid difference operators are computed explicitly on each node set level. Finally, all restriction operations are performed as injection (i.e. directly using values from the fine node set). For further details on multilevel methods, the reader is referred to literature on the topics of multilevel approximation [14, 49] and multilevel solvers [52, 54].
A pseudocode for the proposed geometric multilevel solver is given in Algorithm 2, where \(u_{1}\) is the solution at the finest node set level, \(L=\{L_{j}\}_{j=1}^{p}\) are the difference operators computed for each level, \(I=\{I_{j+1}^{j}\}_{j=1}^{p-1}\) are the interpolation operators for each level and \(R=\{R_{j}^{j+1}\}_{j=1}^{p-1}\) are the restriction (injection) operators for each level. The multilevel solver performs up to \(i_{max}\) iterations (V-cycles) unless the relative residual \(||r_{1}^{i}||_{2}/||r_{1}^{0}||_{2}=||f_{1}-L_{1}u_{1}^{i}||_{2}/||f_{1}-L_{1}u_ {1}^{0}||_{2}\) of the \(i\)th iteration becomes less than a predefined tolerance \(tol\).
The basis of the geometric multilevel solver is the geometric multilevel V-cycle, which is described in Algorithm 3. During the V-cycles, pre- and post smoothing operations are performed using \((\nu_{1},\nu_{2})\) Gauss-Seidel relaxations, respectively. At the coarsest node set level, a sparse LU solver is used.
The pseudocode for performing the geometric multilevel preprocessing, i.e., establishing all the different subsets of nodes, \(X=\{X_{j}\}_{j=1}^{p}\), and the discrete operators \(L,I,R\), is given in Algorithm 4. The necessary input for this algorithm are two node sets, \(\{X_{bg},X_{b}\}\), which describe the scattered node set covering \(\Omega\) and boundary nodes on \(\partial\Omega\) at the finest node set level, respectively. Finally, the parameter \(N_{\min}\) is used to control the minimum number of boundary nodes at the coarsest node set level.
The multilevel solver is tested on node sets that have been generated with variable node densities as illustrated in Figure 8, where \(N_{\min}=60\) for the Poisson problem and \(N_{\min}=120\) for the Laplace problem. The node density function used in this study is defined by a linear transition between two prescribed node densities as,
\[\rho(d)=\begin{cases}\rho_{1},&d<d_{\lim}\\ \rho_{1}+(\rho_{2}-\rho_{1})(d-d_{\lim})/d_{\rm bl},&d_{\lim}\leq d\leq d_{\lim} +d_{\rm bl}\\ \rho_{2},&\text{otherwise}\end{cases} \tag{9}\]
```
1:function mlspcv(\(u_{1},f_{1},L,I,R,\nu_{1},\nu_{2},tol,i_{max}\))
2:while \(i<i_{max}\) and \(||r_{1}||_{2}>||f_{1}||_{2}\cdot tol\) do
3:\(i\gets i+1\)
4:\(u_{1}\leftarrow\textsc{mlcvcv}(u_{1},f_{1},L,I,R,\nu_{1},\nu_{2})\)
5:endwhile
6:return \(u_{1}\)
7:endfunction
```
**Algorithm 2** Geometric multilevel solver
```
1:function mlcvcv(\(u_{1},f_{1},L,I,R,\nu_{1},\nu_{2}\))
2:\(u_{1}\leftarrow\textsc{relax}(u_{1},f_{1},L_{1},\nu_{1})\)
3:\(r_{2}\gets R_{1}^{2}(f_{1}-L_{1}u_{1})\)
4:for\(j=2\) to \(p-1\)do
5:\(e_{j}\leftarrow\textsc{relax}(0,r_{j},L_{j},\nu_{1})\)
6:\(r_{j+1}\gets R_{j}^{j+1}(r_{j}-L_{j}e_{j})\)
7:endfor
8:\(e_{p}=\textsc{lusolve}(L_{p},r_{p})\)
9:for\(j=p-1\) to \(2\)do
10:\(e_{j}\gets e_{j}+I_{j+1}^{j}e_{j+1}\)
11:\(e_{j}\leftarrow\textsc{relax}(e_{j},r_{j},L_{j},\nu_{2})\)
12:endfor
13:\(u_{1}\gets u_{1}+I_{2}^{1}e_{2}\)
14:\(u_{1}\leftarrow\textsc{relax}(u_{1},f_{1},L_{1},\nu_{2})\)
15:return\(u_{1}\)
16:endfunction
```
**Algorithm 3** Geometric multilevel V-cycle
**Algorithm 4** Geometric multilevel preprocessing
where \(d=||\mathbf{x}||_{2}\) is the distance to the origin according to Figure 7, \(\rho_{1}\) is the node density in region 1, \(d_{\text{lim}}\) is a distance within which \(\rho_{1}\) is kept constant, whereas \(d_{\text{bl}}\) is the distance over which \(\rho_{1}\) linearly blends into \(\rho_{2}\). It should be noted that the nodes in the vicinity of the boundary have been adjusted by means of repulsion only at the finest node set level [18]. Note also that the node density function, \(\rho(d)\), is chosen such that the node density can be matched with the characteristics of the solutions.
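Equation (9) translates directly into code; the vectorized form below is merely a convenience.

```python
import numpy as np

def node_density(d, rho1, rho2, d_lim, d_bl):
    """Radially varying node density of equation (9)."""
    d = np.asarray(d, dtype=float)
    blend = rho1 + (rho2 - rho1) * (d - d_lim) / d_bl
    return np.where(d < d_lim, rho1, np.where(d <= d_lim + d_bl, blend, rho2))
```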
The numerical test setups have been chosen to showcase the applicability of the node subsampling strategy for multilevel solvers using RBF-FD and to test whether the high-order accuracy will still be dictated by the degree of the augmented polynomials as shown, e.g., in [1]. Thus, the parameters used for computing the difference operators, \(L\), of polynomial degree \(m_{L}\) are chosen to be \((k,n)=(1,2\ell)\), while the parameters \((m_{I},k,n)=(0,0,5)\) are used for computing the interpolation operators, \(I\). The parameters for the interpolation operators are kept fixed for all choices of \(m_{L}\). Finally, the multilevel solver settings are defined as \((\nu_{1},\nu_{2},i_{max},tol)=(2,1,50,10^{-16})\) for both test problems, whereas the polynomial degree \(m_{L}\) ranges from 2 to 8.
Figure 8: Example of the multilevel node subsampling process for the Poisson problem node set (top) and the Laplace problem node set (bottom). Only nodes within the first quadrant of the Cartesian coordinate system are shown.
### Poisson Equation Test Problem
First, it can be seen from Figure 9 that the wall clock time scales linearly with the number of nodes, which is in accordance with expectations for any multigrid or multilevel solver [5, 45, 52]. Furthermore, the maximum relative error (\(||u-u_{h}||_{\infty}/||u||_{\infty}\)) decreases as a function of node set resolution (\(\rho_{mean}=1/\sqrt{N}\)) and the slope is dictated by the polynomial degree of the difference operator, \(m_{L}\). This is in agreement with previous RBF-FD studies [1].
In this study, the low-order interpolation operators are chosen in order to accelerate the convergence of the multilevel solver. However, this choice is not aligned with the rule of thumb for the transfer operators, i.e. \(m_{I}+m_{R}>2\), which is used in traditional multigrid methods for the Poisson problem [45].
Figure 9: Poisson problem performance indicators of the implemented geometric multilevel solver for polynomial degrees of \(m_{L}=\{2,4,6,8\}\) from top to bottom. The mean node density is defined as \(\rho_{mean}=1/\sqrt{N}\).
In this work, the order of the restriction operators is \(m_{R}=0\) because injection is used as the restriction operation between all node set levels. Nevertheless, no detrimental effects have been noticed in any of the numerical tests conducted.
Finally, the proposed multilevel solver should provide solutions without any directional bias. Hence, to identify whether any directional bias is present, the relative errors for all node set resolutions have been normalized and depicted in Figure 10. Thus, the scale factors used in Figure 10 refer to the plateau of the maximum relative error plots in Figure 9. As no particular directional pattern is noticed in Figure 10, it can be concluded that the subsampling process used for setting up the multilevel solver does not introduce any directional bias.
### Laplace Equation Test Problem
The performance indicators of the multilevel solver for the Laplace problem are illustrated in Figure 11. The same overall conclusions that were made for the Poisson problem can be made for the Laplace problem, i.e. high-order accuracy and linear scaling of the computation time. For \(m_{L}=8\), note that the convergence of the multilevel solver (measured by the convergence factor \(||r_{1}^{i}||_{2}/||r_{1}^{i-1}||_{2}\)) is worse for \(N=14419\) and \(N=28279\) compared with the other values of \(N\). The deterioration in convergence is most likely caused by a stencil size that is too large as compared to the relatively low node density near the boundary, since localized error peaks are present for both \(N=14419\) and \(N=28279\) (\(m_{L}=8\)) in Figure 12. This deterioration does not occur if the node resolution is increased to \(N=55966\) or above. Furthermore, if the multilevel V-cycle is used as a preconditioner for a Krylov subspace method, e.g. the generalized minimal residual method or biconjugate gradient stabilized method, fewer iterations will be needed for the solution to converge and the solver will be more robust compared to the standalone multilevel solver. However, each iteration will become more computationally expensive. Thus, whether the multilevel V-cycle should be used as a standalone solver or a preconditioner is a trade-off between computational cost and robustness.
The normalized relative error distributions in Figure 12 illustrate that no directional bias seems to be introduced by the subsampling process, which is the same conclusion as for the Poisson problem.
Figure 10: Normalized relative error distributions for various orders of the difference operators and node set resolutions for the Poisson problem. The color scale factors refer to the plateau of the maximum relative error plots in figure 9.
Figure 11: Laplace problem performance indicators of the implemented geometric multilevel solver for polynomial degrees of \(m_{L}=\{2,4,6,8\}\) from top to bottom. The mean node density is defined as \(\rho_{mean}=1/\sqrt{N}\).
## 6 Conclusion
A novel method for subsampling quasi-uniform node sets of highly variable density with sharp gradients is presented along with boundary preservation techniques and two novel measures for evaluating node quality of subsampling. The moving front subsampling algorithm demonstrates the capability to coarsen a node set with high contrast and detail. Additionally, the moving front algorithm maintains the characteristics of the original node set as outlined by the comparative local regularity of the average distance and standard deviation of distances to the \(k\) nearest nodes. It is also faster, both by a constant and in the limit as node set size increases, than any other algorithm considered in this paper for subsampling variable density node sets.
The utility of the moving front algorithm for the purpose of subsampling node sets in a meshfree multilevel PDE solver is demonstrated by solving both the Poisson and Laplace problems on variable density node sets. In both test cases, the meshfree PDE solver with the multilevel method and the proposed subsampling algorithm achieves the fast linear scaling of computational cost with node set size expected from a multilevel scheme. At the same time, this combination has no adverse impact on the expected high-order accuracy of the RBF-FD method. The meshfree multilevel PDE solver has been tested up through eighth order convergence and also demonstrates very robust performance.
**Acknowledgments:** Andrew Lawrence acknowledges support from the US Air Force13.
Figure 12: Normalized relative error distributions for various orders of the difference operators and node set resolutions for the Laplace problem. The color scale factors refer to the plateau of the maximum relative error plots in figure 11.
## Appendix A Subsampling Algorithms
### Moving Front Subsampling
The Python code for the moving front subsampling algorithm is given below. The code can also be found on the author's GitHub repository in both MATLAB and Python along with examples of implementation [26].
import numpy as np
from sklearn.neighbors import NearestNeighbors

def MFNUS(xy, fc=1.5, K=10):
    """Moving Front Non-Uniform Subsampling

    Args:
        xy (array): initial node set to be subsampled
        fc (float): coarsening factor
        K (int): number of nearest neighbors to check in the algorithm

    Returns:
        xy_sub (array): subsampled node set
    """
    if xy.shape[0] < xy.shape[1]:
        xy = xy.T

    # algorithm
    N = xy.shape[0]                    # number of nodes
    sort_ind = np.lexsort(xy.T)        # sort nodes from the bottom and up
    xy = xy[sort_ind, :]

    # create nearest neighbor pointers and distances
    nbrs = NearestNeighbors(n_neighbors=K + 1, algorithm='auto').fit(xy)
    distances, indices = nbrs.kneighbors(xy)

    for k in range(N):                 # loop over nodes from the bottom and up
        if indices[k, 0] != N + 1:     # check if node already eliminated
            ind = np.where(distances[k, 1:] < fc * distances[k, 1])[0]
            ind2 = indices[k, ind + 1]
            ind2 = ind2[ind2 > k]      # keep only neighbors above the present node in the sort
            indices[ind2, 0] = N + 1   # mark nodes within the factor fc of the closest one
    elim_ind_sorted = indices[:, 0] != N + 1
    xy_sub = xy[elim_ind_sorted]

    return xy_sub
|
2302.11280
|
Topic-switch adapted Japanese Dialogue System based on PLATO-2
|
Large-scale open-domain dialogue systems such as PLATO-2 have achieved
state-of-the-art scores in both English and Chinese. However, little work
explores whether such dialogue systems also work well in the Japanese language.
In this work, we create a large-scale Japanese dialogue dataset,
Dialogue-Graph, which contains 1.656 million dialogue data in a tree structure
from News, TV subtitles, and Wikipedia corpus. Then, we train PLATO-2 using
Dialogue-Graph to build a large-scale Japanese dialogue system, PLATO-JDS. In
addition, to improve the PLATO-JDS in the topic switch issue, we introduce a
topic-switch algorithm composed of a topic discriminator to switch to a new
topic when user input differs from the previous topic. We evaluate the user
experience by using our model with respect to four metrics, namely, coherence,
informativeness, engagingness, and humanness. As a result, our proposed
PLATO-JDS achieves an average score of 1.500 for the human evaluation with
human-bot chat strategy, which is close to the maximum score of 2.000 and
suggests the high-quality dialogue generation capability of PLATO-2 in
Japanese. Furthermore, our proposed topic-switch algorithm achieves an average
score of 1.767 and outperforms PLATO-JDS by 0.267, indicating its effectiveness
in improving the user experience of our system.
|
Donghuo Zeng, Jianming Wu, Yanan Wang, Kazunori Matsumoto, Gen Hattori, Kazushi Ikeda
|
2023-02-22T10:57:59Z
|
http://arxiv.org/abs/2302.11280v1
|
# Topic-switch adapted Japanese Dialogue System based on PLATO-2
###### Abstract
Large-scale open-domain dialogue systems such as PLATO-2 have achieved state-of-the-art scores in both English and Chinese. However, little work explores whether such dialogue systems also work well in the Japanese language. In this work, we create a large-scale Japanese dialogue dataset, Dialogue-Graph, which contains 1.656 million dialogue data in a tree structure from News, TV subtitles, and Wikipedia corpus. Then, we train PLATO-2 using Dialogue-Graph to build a large-scale Japanese dialogue system, PLATO-JDS. In addition, to improve the PLATO-JDS in the topic switch issue, we introduce a topic-switch algorithm composed of a topic discriminator to switch to a new topic when user input differs from the previous topic. We evaluate the user experience by using our model with respect to four metrics, namely, coherence, informativeness, engagingness, and humanness. As a result, our proposed PLATO-JDS achieves an average score of **1.500** for the human evaluation with human-bot chat strategy, which is close to the maximum score of 2.000 and suggests the high-quality dialogue generation capability of PLATO-2 in Japanese. Furthermore, our proposed topic-switch algorithm achieves an average score of **1.767** and outperforms PLATO-JDS by **0.267**, indicating its effectiveness in improving the user experience of our system.
## 1 Introduction
Transformer-based methods are becoming fundamental techniques for developing human-like chatbots [4, 11, 12, 13]. Large-scale open-domain dialogue systems such as PLATO-2 [14], which is designed to scale the model up to billions of parameters to obtain high-quality open-domain chatbots, have achieved state-of-the-art scores in both English and Chinese. However, such large-scale dialogue systems are rarely explored in Japanese.
In this work, we create a large-scale Japanese dialogue dataset, Dialogue-Graph, by collecting 1.656 million dialogue data in a tree structure from News, TV subtitles and Wikipedia corpus. Then, we train PLATO-2 using Dialogue-Graph to build a large-scale Japanese dialogue system, PLATO-JDS. Moreover, we study some cases generated by PLATO-JDS and the results suggest that it is difficult for PLATO-JDS to suitably switch to a new topic during a dialogue. To solve this issue and further improve the user experience of our system, we introduce a topic-switch algorithm composed of a topic discriminator to switch from the previous topic to a new topic when the user input differs from the previous topic. The topic discriminator is a BERT-based binary classifier [4] used to predict whether the user input belongs to the previous topic or not. As shown in Figure 1, if the result is "Yes", the concatenation of the user input and previous dialogue is used as the input to PLATO-JDS, otherwise, only the user input is used.
We evaluate PLATO-JDS in terms of four metrics, namely, coherence, informativeness, engagingness and humanness [1], and the results demonstrate that our proposed PLATO-JDS achieves scores of 1.600, 1.467, 1.467, and 1.467 on four metrics, respectively for human evaluation with the human-bot chat strategy, and the average score is **1.500** and close to the maximum score of 2.000, suggesting that PLATO-2 can be adopted to achieve high-quality dialogue generation in Japanese. Furthermore, our proposed topic-switch algorithm achieves the average score of **1.767** and outperforms PLATO-JDS by **0.267**, indicating its effectiveness in improving the user experience of our system.
Our contributions include: 1) we manually collected 1.656 million dialogue data to build a tree-structured large Japanese dataset _Dialogue-Graph_; 2) we trained a Japanese PLATO-2, PLATO-JDS by using _Dialogue-Graph_. Both human and automatic evaluations demonstrate that PLATO-JDS is effective to achieve high-quality dialogue generation; 3) Moreover, we introduced a topic-switch algorithm to further improve the user experience of PLATO-JDS.
## 2 Related Work
Dialogue systems in the Japanese language domain have been developed by employing basic deep learning techniques, such as the AI love counseling system [16]. There are also some chatbot applications, such as Clova [15] and Rinna [21], that employ state-of-the-art NLP methods to improve the performance of dialogue generation. The Japanese language model work [12] has trained two types of models on a public corpus, including the GPT-2 [1] and RoBERTa [14] models. Since a large-scale dataset can be used to train high-performance dialogue systems, dataset size should be taken into account. Current works [17, 18] try to collect large-scale datasets to improve Japanese dialogue systems. Even so, few large-scale Japanese conversation systems have been trained by using a large Japanese language dataset, such as one with millions of dialogues.
Recently, large-scale transformer-based dialogue generation models have significantly improved the performance of open-domain chatbots [19, 1, 10, 15].
DialoGPT (Zhang et al., 2020) is a transformer-based dialogue generation model trained by using 147M dialogue-like exchanges from Reddit comments and shows good performance in generating context-based responses given a single user input. PLATO (Bao et al., 2020) focuses on performing one-to-many dialogue generation (Kim et al., 2020) (_e.g._, given a single user input, the model can reply with more than one response), improving on DialoGPT. Furthermore, PLATO-2 (Bao et al., 2021a) scales the PLATO model up to billions of parameters to achieve new state-of-the-art open-domain dialogue systems by introducing curriculum learning (Bengio et al., 2009), which contains pretraining on a one-to-one generation subtask and finetuning on a one-to-many generation subtask. However, PLATO-2 is evaluated on English and Chinese data, and there is no work exploring its effectiveness on Japanese data to facilitate the development of Japanese dialogue systems. To construct a high-quality dialogue system, control generation methods (Dathathri et al., 2019, Keskar et al., 2019, Madotto et al., 2020, Smith et al., 2020, Du and Ji, 2021) have been widely applied in dialogue systems. In this study, PLATO-JDS is trained by curriculum learning, which controls the generation of coherent and fluent responses through two steps, as shown in Figure 5.
To improve the user experience of dialogue systems, it is necessary to make advanced models have very good topic adaptability. The work (Xu et al., 2021) extracts topic segments from dialogue history in an unsupervised way to select the most appropriate response for the topic-switch as needed. The work (Xia et al., 2022) also adopts the topic segments method but additionally learns inter-segment relationships to improve the topic-switch. The work (Sugiyama et al., 2021) collects a mixed dataset with three characteristics: personality, empathy, and knowledge, to fine-tune BlenderBot into a chit-chat system which can switch topics frequently. Although these models track global topic flow throughout multi-turn dialogue, they have difficulty eliminating the interference of historical dialogues when users suddenly switch to a topic that is not related to the previous dialogue. Instead of inputting all the historical dialogues and the user's input as previous methods do, we propose a topic-switch algorithm with a topic discriminator to determine what the inputs of the trained PLATO-JDS are.
## 3 Dataset
In this section, we describe the details of Japanese dataset _Dialogue-Graph_ in SS3.1 and the data preprocessing in SS3.2.
### Data collection
In this work, we use the crowd-driven system (Ikeda and Hoashi, 2018) to collect our dialogue data. This system can greatly improve the efficiency of dialogue data collection because it uses an asynchronous approach to dialogue creation, where workers can create an utterance at any time without waiting for the previous workers to finish creating the utterances. Specifically, when a worker starts using the system, the system will assign a dialogue that requires
Figure 1: Overview of the topic-switch based PLATO-JDS. Given a user input with the previous dialogue, the topic-switch consists of a topic discriminator that predicts whether the user input is on the same topic as the previous dialogue, and switches to a new topic when it differs from the previous dialogue's. The PLATO-JDS is a pretrained Japanese dialogue generation model for generating a response given specific inputs from the topic-switch.
Figure 2: _Dialogue-Graph_: Starting from **topic 0: food** with an initial utterance \(n^{0}_{0,0}\):”What is your favorite food?”, the _Dialogue-Graph_ grows into a one-to-many dialogue tree containing at most eight dialogue turns. Here, one-to-many indicates that one user input can accommodate many responses.
the workers' input, and the worker creates an utterance based on the given topic as well as the previous utterances.
We recruit 60 participants to collect the dialogue dataset. Since we assume that a dialogue consists of alternating utterances between role A and role B, all participants were divided into two groups of 30 individuals playing role A and role B, respectively. One role can create utterances for multiple dialogues, and a dialogue is created by multiple workers. In the end, we select only high-quality conversations using the same evaluation method as [11] and remove dialogues that lack common sense, are politically sensitive, or raise ethical concerns. To increase the efficiency of data collection and allow participants to create dialogues faster, we set the system to be accessible from 12:00 am to 3:00 pm. The dialogue data collection took over three months, costing 4 yen per utterance input and 32 yen per dialogue (about $0.30).
Inspired by the studies [15, 14], we also create tree-structured dialogues, with 9 levels from the first head node to the leaf node, i.e. the length of the conversation is 8 turns. We use 50 topics and each topic has an initial utterance, the same as the work [11]; for example, for the topic family the initial utterance is "Do you live with someone?". A worker created 8 utterances based on the starting utterance, which became a dialogue. Then, we evaluated the collected dialogues using the same evaluation method as in [11]. This method evaluates the quality of the dialogues from three perspectives: efficiency, quality, and workers' interest. After the data collection was completed, we hired the same 60 participants to do the evaluation.
In Figure 2, we define _Dialogue-Graph_\(\mathcal{G}^{Dia}=(\mathcal{N},\mathcal{E})\), \(\mathcal{N}=\{n^{t}_{i,j}|0\leq t\leq 49,0\leq i\leq 8,j\geq 0\}\), where \(\mathcal{N}\) is the set of dialogue nodes \(n^{t}_{i,j}\), \(\mathcal{E}=\{e^{t}_{i,j\to m,n}|0\leq t\leq 49,0\leq i,m\leq 8,j,n\geq 0\}\) is the set of dialogue turns, such that \(e^{0}_{0,0\to 1,0}\) represents a single utterance turn from \(n^{0}_{0,0}\) to \(n^{0}_{1,0}\). Here, \(t,i/m\) and \(j/n\) indicate the number of topic types, utterance turns and candidates of the current dialogue node, respectively; the maximum numbers of \(t\) and \(i/m\) are \(50\) and \(8\), and the minimum number of \(j/n\) is \(3\).
### Preprocessing
To prepare training data for training the topic discriminator and _PLATO-JDS_, we follow three policies to preprocess data from _Dialogue-Graph_. We split each preprocessed dataset into training, validation and testing sets with a common ratio of \(90\%:5\%:5\%\).
* **pos. vs neg.** As shown in Figure 3 (a), we extract arbitrary dialogue turns from the same topic and annotate them as positive samples. On the other hand, we annotate the counterpart as negative samples.
* **one-to-one mapping.** We prepare training data for the stage 1 of _PLATO-JDS_. As shown in Figure 3 (b), we employ the breadth-first search method (BFS) to extract 1.68M data samples in total. BFS is designed to traverse dialogue nodes in the graph to obtain utterance pairs. Starting from the root node, we extract all utterance turns as training data (_e.g._, \(n^{0}_{0,0}\to n^{0}_{1,0}\), \(n^{0}_{0,0}\to n^{0}_{1,3}\)). The definition of an utterance turn is provided in SS3.
* **one-to-many mapping.** We prepare training data for the stage 2 of _PLATO-JDS_. As shown in Figure 3 (c), we use the depth-first search (DFS) method to generate 1.26M data samples. DFS is designed to traverse dialogue nodes in the graph to obtain all eight-turn dialogues as training data (_e.g._, \(n^{0}_{0,0}\to n^{0}_{1,0}\!\!\to\!\!n^{0}_{2,1}\!\!\to\!\!n^{0}_{3,1}\!\!\to\!\!n^{0}_{4,1}\!\!\to\!\!n^{0}_{5,0}\!\!\to\!\!n^{0}_{6,0}\to n^{0}_{7,0}\!\!\!\to\!\!n^{0}_{8,2}\)); a minimal traversal sketch for both extraction policies is given after this list.
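A schematic traversal sketch for the two extraction policies is given below; the tree is assumed to be stored as a dictionary mapping each utterance node to the list of its child utterances.

```python
from collections import deque

def extract_pairs(root, children):
    """One-to-one mapping: collect every utterance turn (parent, child)
    of the tree with a breadth-first traversal."""
    pairs, queue = [], deque([root])
    while queue:
        node = queue.popleft()
        for child in children.get(node, []):
            pairs.append((node, child))
            queue.append(child)
    return pairs

def extract_dialogues(node, children, path=()):
    """One-to-many mapping: collect every root-to-leaf path (a complete
    multi-turn dialogue) with a depth-first traversal."""
    path = path + (node,)
    kids = children.get(node, [])
    if not kids:
        return [path]
    dialogues = []
    for child in kids:
        dialogues.extend(extract_dialogues(child, children, path))
    return dialogues
```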
## 4 Approach
To achieve PLATO-JDS as shown in Figure 1, we design a topic-switch algorithm that utilizes a topic discriminator to decide what the model input is, given a user input and the previous dialogue; the resulting input is then fed to a trained PLATO-JDS to generate a response. In this section, we first describe the topic discriminator in SS4.1 and PLATO-JDS in SS4.2 in detail, then we describe how the topic-switch algorithm works in SS4.3.
### Topic discriminator
The topic discriminator is a binary classifier that is designed to determine whether the current user input belongs to a topic common to the previous dialogue. As shown in Figure 4, we build a BERT-based topic discriminator and train it by using the preprocessed Dialogue-Graph data described in SS3.2. We annotate data that belongs to a common topic as positive samples and the counterpart as
Figure 3: Training data samples for training a topic discriminator and _PLATO-JDS_.
negative samples. We concatenate the two utterances of one turn into a single sequence that is input to the tokenization method followed by the BERT backbone. The objective of this training task is to discriminate whether the two utterances in one pair belong to a common topic by optimizing the binary cross entropy (BCE) loss as follows.
\[L_{BCE}=-\frac{1}{N}\sum_{i=1}^{N}(Y_{i}\cdot\log\hat{Y_{i}}+(1-Y_{i})\cdot\log( 1-\hat{Y_{i}})) \tag{1}\]
where \(N\) is the number of samples, \(Y_{i}\) denotes the ground-truth label (0 or 1), while \(\hat{Y_{i}}\) is the predicted probability of the topic discriminator.
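A minimal training-step sketch for this classifier is shown below using the Hugging Face transformers library; the checkpoint name, optimizer, and learning rate are illustrative assumptions (in this work the input is tokenized with the SentencePiece model described next), and a single output logit is used so that the BCE loss of equation (1) applies directly.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# placeholder Japanese BERT checkpoint and hyperparameters
tokenizer = AutoTokenizer.from_pretrained("cl-tohoku/bert-base-japanese")
model = AutoModelForSequenceClassification.from_pretrained(
    "cl-tohoku/bert-base-japanese", num_labels=1)
loss_fn = torch.nn.BCEWithLogitsLoss()
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

def train_step(utterances_a, utterances_b, labels):
    """One optimization step: the two utterances of a pair are jointly
    encoded and classified as same-topic (1) or different-topic (0)."""
    batch = tokenizer(utterances_a, utterances_b, padding=True,
                      truncation=True, return_tensors="pt")
    logits = model(**batch).logits.squeeze(-1)
    loss = loss_fn(logits, torch.tensor(labels, dtype=torch.float))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```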
Here, we employ the SentencePiece [14] tokenization method to segment a raw input sentence directly into word sequences. SentencePiece is a language-agnostic tokenization method based on a subword idea to avoid unknown tokens while reducing the number of tokens. The Byte Pair Encoding (BPE) [12, 13] algorithm is a subword division algorithm employed by SentencePiece to shortlist high-frequency words as tokens and divide low-frequency words into two or more subwords as tokens. We use SentencePiece to split the Japanese dataset _Dialogue-Graph_ into 48k tokens. Finally, we employ the same input representations as the original PLATO-2 model including token embedding, role embedding, turn embedding and position embedding, respectively. Here, token embedding is a pretrained embedding for different subwords. Role embedding is designed to distinguish the role of speakers during one dialogue. Turn embedding is assigned according to relative order when there are multi-turn dialogues in the conversion. Position embedding is obtained based on the token position in each dialogue.
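A sketch of training and applying such a 48k-token SentencePiece model follows; the file names and the choice of the BPE model type are illustrative assumptions.

```python
import sentencepiece as spm

# train a 48k-subword model on a file with one utterance per line
spm.SentencePieceTrainer.train(
    input="dialogue_graph_utterances.txt",
    model_prefix="dialogue_graph_sp",
    vocab_size=48000,
    model_type="bpe",
)

sp = spm.SentencePieceProcessor(model_file="dialogue_graph_sp.model")
pieces = sp.encode("今日はいい天気ですね。", out_type=str)   # subword pieces
ids = sp.encode("今日はいい天気ですね。")                      # token ids
```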
### Plato-Jds
We follow the original PLATO-2 model [1] to train it on Japanese data to perform dialogue generation. PLATO-2 is a large-scale transformer based dialogue generation framework that is trained via curriculum learning [1]. Curriculum learning is a two-stage training strategy and is shown in Figure 3. In the first stage, a coarse-grained generation model is pretrained to learn response generation under different dialogue contexts by using the preprocessed one-to-one mapping data shown in Figure 3 (b). In the second stage, a fine-grained generative model and an evaluation model are trained by using the preprocessed one-to-many mapping data shown in Figure 3 (c). We adopt the same input representations as the topic discriminator model.
**Stage 1, General response generation.** One-to-one mapping is a conventional approach and it is an efficient way to learn high-level properties of response generation. Given the \(i^{th}\) response \(R(i)\) and its previous dialogue context \(C(i)\), the model is trained by minimizing the negative log-likelihood (NLL) loss as follows,
\[L_{NLL}^{General}=-\operatorname{\mathbf{E}}\sum_{i=1}^{L}\log p(R(i)|C(i)) \tag{2}\]
where \(L\) is the length of the generated response \(R(i)\).
**Stage 2.1, Diverse Response Generation.** To further train the model to capture the one-to-many relationship, a discrete latent variable \(z\) is introduced. The model estimates the latent distribution of training samples \(p(z|C(i),R(i))\) and then generates the response conditioned on the sampled latent variable, \(p(R(i)|C(i),z)\). The NLL loss of diverse response generation and the bag-of-words (BOW) loss are defined as follows.
\[L_{NLL}^{Diverse} =-\operatorname{\mathbf{E}}_{z\sim p(z|C(i),R(i))}\sum_{t=1}^{L}\log p(r_{t}|C(i),z,r_{<t}) \tag{3}\] \[L_{BOW}^{Diverse} =-\operatorname{\mathbf{E}}_{z\sim p(z|C(i),R(i))}\sum_{t=1}^{L}\log p(r_{t}|C(i),z)\] \[L^{Diverse} =L_{NLL}^{Diverse}+L_{BOW}^{Diverse}\]
where \(L\) is the length of the generated response \(R(i)\) and \(r_{t}\) denotes its \(t\)-th token; the BOW loss predicts the response tokens from the context and the latent variable without conditioning on the preceding tokens. The final generation model is optimized with the combined loss \(L^{Diverse}\).
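A minimal PyTorch sketch of the two Stage-2.1 objectives follows. It assumes the decoder exposes per-position vocabulary logits for the response conditioned on the context and the sampled latent \(z\), together with a single bag-of-words logit vector that scores response tokens without regard to their order; these interfaces are assumptions made for illustration, not the paper's implementation.

```python
import torch
import torch.nn.functional as F

def diverse_losses(token_logits, bow_logits, response_ids, pad_id=0):
    """L^Diverse = L_NLL + L_BOW for one (context, response, z) sample.

    token_logits: (L, V) autoregressive logits for each response position.
    bow_logits:   (V,)  order-agnostic logits predicting response words.
    response_ids: (L,)  target token ids of the response.
    """
    mask = response_ids != pad_id
    targets = response_ids[mask]
    n = int(mask.sum())
    nll = F.cross_entropy(token_logits[mask], targets)         # L_NLL
    bow = F.cross_entropy(bow_logits.expand(n, -1), targets)   # L_BOW
    return nll + bow                                           # L_Diverse
```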
**Stage 2.2, Coherent Response Selection**. Once diverse responses are generated by the one-to-many generation model, the highest-quality response is selected as the final output. The selection model is trained to estimate the coherence between a given utterance and its response, leveraging the distributed representations of the masked language model (MLM). The two objectives, response coherence estimation (RCE) and MLM, are as follows.
\[L_{RCE}^{Selection} =-\log p(l_{r}=1|C(i),R(i))-\log p(l_{r}=0|C(i),R^{-}(i)) \tag{4}\] \[L_{MLM}^{Selection} =-\operatorname{\mathbf{E}}\sum_{i\in M}\log p(x_{i}|x_{\backslash M})\] \[L^{Selection} =L_{RCE}^{Selection}+L_{MLM}^{Selection}\]
where \(l_{r}\) denotes whether a response is coherent with the dialogue context: the observed response \(R(i)\) is labeled \(l_{r}=1\) and a randomly sampled negative response \(R^{-}(i)\) is labeled \(l_{r}=0\); \(x\) denotes the input tokens of context and response, \(\{x_{i}\}_{i\in M}\) denotes the masked tokens, and \(x_{\backslash M}\) the remaining unmasked tokens.
### Topic-switch algorithm
A pretrained PLATO-JDS is used to generate a response from the concatenation of the previous dialogue and the current user input. We design a topic-switch algorithm to decide whether the previous dialogue should be included in the input used for response generation; the algorithm is defined in Algorithm 1. We use the topic discriminator to predict whether the topic of the current user input belongs to the previous dialogue topic and compare its score with an experimentally chosen threshold. If the score is less than the
Figure 4: Overview of the topic discriminator. It consists of the data preprocessing described in § 3.2, the tokenization method used to obtain word sequences, and the BERT backbone for training a binary classifier.
threshold, the current user input is probably unrelated to the previous dialogue, and the input for the PLATO-JDS module is automatically switched to the current user input alone. Otherwise, the input for the PLATO-JDS module includes the previous dialogue. Here, we set the threshold to 0.61 based on the trade-off between precision and recall, where the recall is 99.75% and the precision is 95.72%.
**Input**: \(\varepsilon\): the threshold for deciding whether to switch to a new topic; \(i\): the turn number; \(C(i)\): the previous dialogue at turn \(i\); \(\Phi(i)\): the user utterance at turn \(i\); \(\Gamma(a,b)\): the topic discriminator predicting whether \(a\) and \(b\) belong to the same topic.
**Output**: the dialogue context \(C(i)\) used to generate the next response
```
1: \(\beta=\Gamma(C(i),\Phi(i))\)
2: if \(\beta\leq\varepsilon\) then
3:     \(C(i)\leftarrow\Phi(i)\)            \(\triangleright\) Switch to a new topic
4: else
5:     \(C(i)\gets C(i)+\Phi(i)\)        \(\triangleright\) Keep the previous dialogue
6: end if
7: \(i\gets i+1\)
```
**Algorithm 1** Topic-switch algorithm
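For concreteness, a minimal Python sketch of Algorithm 1 follows; `topic_discriminator` stands in for the trained model \(\Gamma\), and the function name is illustrative only.

```python
THRESHOLD = 0.61  # epsilon, chosen from the precision/recall trade-off

def topic_switch(context: str, user_utterance: str, topic_discriminator) -> str:
    """Return the context C(i) to feed into PLATO-JDS for the current turn."""
    score = topic_discriminator(context, user_utterance)
    if score <= THRESHOLD:
        return user_utterance                # switch to a new topic
    return context + user_utterance          # keep the previous dialogue
```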
## 5 Experiment
In this section, we discuss the evaluation metrics in §5.1. Then, we explain why we choose PLATO-2 as the backbone of the dialogue generation module by discussing the results on English data reported in the PLATO-2 work in §5.2, and describe the training details in §5.3. To evaluate our proposed topic-switch based _PLATO-JDS_, we compare it with _PLATO-JDS_ and report the comparison results in §5.4. Furthermore, we discuss a case study in §5.5.
### Evaluation Metric
We evaluate our model from both human and automatic perspectives. For a fair comparison, we use the same evaluation metrics as the PLATO-2 study [1]. For human evaluation, we asked 15 people _(11 male and 4 female)_ to perform a three-level evaluation _(0: bad, 1: neutral, 2: good)_ with respect to the utterance-level metrics "coherence" and "informativeness", and the dialogue-level metrics "engagingness" and "humanness", which reflect the user experience. The definitions of these metrics are as follows:
* Coherence: whether the response is relevant to the current topic and consistent with the context.
* Informativeness: whether the response is informative and appropriate.
* Engagingness: whether the evaluator would like to talk with the speaker over a long dialogue.
* Humanness: whether the speaker appears to be a human being and the response sounds natural.
In terms of automatic evaluation, to evaluate the model's lexical diversity we employ the corpus-level metrics [11] distinct-1 and distinct-2 _(distinct-1/2 in Tables 1 and 2)_, defined as the number of distinct \(unigrams\) and \(bigrams\) scaled by the total number of generated tokens in response generation.
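A small sketch of this corpus-level metric is shown below; whitespace tokenization is used only for illustration, whereas the paper's metric is computed over the generated tokens.

```python
def distinct_n(responses, n):
    """Corpus-level distinct-n: #unique n-grams / #generated tokens."""
    ngrams, total_tokens = set(), 0
    for response in responses:
        tokens = response.split()
        total_tokens += len(tokens)
        ngrams.update(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))
    return len(ngrams) / max(total_tokens, 1)

# distinct-1 and distinct-2 over all generated responses:
# distinct_n(generated_responses, 1), distinct_n(generated_responses, 2)
```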
### Baseline
We use PLATO-2 as our baseline since it is the state-of-the-art dialogue generation model for English and Chinese. PLATO-2 achieved the best performance in both human and automatic evaluations compared to the Blender model [14], which mitigates undesirable toxic or biased traits of large corpora by introducing blended skills but requires extensive manual annotation. Considering the cost-performance ratio, we implement PLATO-2 as _PLATO-JDS_ and compare it with our proposed topic-switch based _PLATO-JDS_, both trained on _Dialogue-Graph_ in this work. We perform evaluations with both bot-bot and human-bot chat strategies and provide the details in §5.4. The description of _PLATO-JDS_ is given in §4.2.
### Training Details
The _PLATO-JDS_ training includes two stages. In stage 1, we employed the negative log-likelihood (NLL) loss to capture the general characteristics of response generation via one-to-one mapping learning. In stage 2.1, we used the sum of the NLL loss and the bag-of-words (BOW) loss to learn fine-grained generation; the BOW loss reflects the training of the discrete latent variable. In stage 2.2, the objective was the sum of the response coherence estimation loss and the masked language model (MLM) loss [1]. More details of these loss functions are given in §4.2. Moreover, we set the maximum sequence length of context and response to 512, and the size of the position embedding to 256. We trained _PLATO-JDS_ for 25 days on two NVIDIA RTX A5000 GPUs (24 GB each).
In addition, we show the training and validation loss curves of stages 1, 2.1, and 2.2 in Figure 7 in the Appendices. Overfitting does not occur during model training, which indicates that the trained model is reliable.
### Result
#### 5.4.1 Bot-bot chat strategy
The bot-bot chat strategy evaluates dialogue generation by simulating dialogues between two chatbots, reducing the time and expense of full human evaluations, which require humans to spend time talking to the chatbot and scoring the generated responses. This strategy is commonly applied to achieve a fair evaluation of dialogue generation [11, 12]. We randomly selected 200 questions
Figure 5: Overview of the PLATO-JDS model and its data processing: the data preprocessing described in § 3.2, the same tokenization method as the topic discriminator (§ 4.1), and the backbone dialogue generation model of PLATO-2.
from our dataset as starting utterances and produced ten-turn dialogues per question for each model. As shown in Table 1, our topic-switch based _PLATO-JDS_ outperformed _PLATO-JDS_ on all human evaluation metrics, which suggests that the responses generated by our topic-switch based _PLATO-JDS_ are coherent with the current dialogue context and contain more information than those of _PLATO-JDS_. In addition, the scores of the "Engagingness" and "Humanness" metrics indicate that the topic-switch based _PLATO-JDS_ improves the user experience over _PLATO-JDS_, which also demonstrates its effectiveness in generating human-like responses. Furthermore, the distinct-1/2 results show that our topic-switch based _PLATO-JDS_ produces a larger number of distinct \(unigrams\) and \(bigrams\) than _PLATO-JDS_, which indicates better lexical diversity. Finally, in terms of the average utterance length and the average number of topics, our topic-switch based _PLATO-JDS_ scores higher than _PLATO-JDS_, which further verifies that the topic-switch correctly switches to a new topic depending on the user input.
#### 5.4.2 Human-bot chat strategy
In addition to the bot-bot chat strategy, we also collected human-bot dialogue records to evaluate our model in a real-world setting. We employed 15 participants to chat with the bot, each for 50 turns, yielding a total of 750 dialogue turns. The results are summarized in Table 2 and are very close to those described in §5.4.1. Although our topic-switch based _PLATO-JDS_ obtained lower scores than in the bot-bot chat strategy, it achieved higher scores on all metrics compared to _PLATO-JDS_.
To further confirm the effectiveness of the topic-switch, we also compared the average number of topics in the generated dialogues. As shown in Tables 1 and 2, topic-switch based _PLATO-JDS_ covers about two more topics and longer dialogues than _PLATO-JDS_, which demonstrates the effectiveness of the topic-switch in making _PLATO-JDS_ generate responses based on a new topic. In summary, the topic-switch algorithm improves _PLATO-JDS_ in both human and automatic evaluations and suggests a clear capability to improve the user experience.
### Case study
We pick two human-bot dialogue samples of _PLATO-JDS_ with and without topic-switch that start from the same utterance in the topic "beauty and health" to study how the topic-switch affects response generation. As shown in Figure 6, when the user input changes to a new topic that differs from the context of the previous dialogue, our model with the topic-switch is able to generate an appropriate response, while _PLATO-JDS_ cannot. We believe that the proposed topic-switch forces the _PLATO-JDS_ module to switch to the new topic and generate a relevant response, demonstrating the effectiveness of the topic-switch. The case study is also consistent with the evaluation results described in §5.4.
\begin{table}
\begin{tabular}{c|c c} \hline
**Metric / Model** & **w/o topic-switch** & **w/ topic-switch** \\ \hline \multicolumn{3}{c}{**Human evaluation**} \\ \hline Coherence & 1.667 & **1.800** \\ Informativeness & 1.533 & **1.867** \\ Engagingness & 1.533 & **1.733** \\ Humanness & 1.533 & **1.800** \\ \hline Average score & 1.567 & **1.800** \\ \hline \multicolumn{3}{c}{**Automatic evaluation**} \\ \hline Distinct-1/2 & 0.343/0.712 & **0.388/0.730** \\ \hline Length (avg.) & 14.016 & **14.667** \\ Topics (avg.) & 4.6 & **6.8** \\ \hline \end{tabular}
\end{table}
Table 1: The bot-bot chat strategy: comparison results of _PLATO-JDS_ (w/o topic-switch) and topic-switch based _PLATO-JDS_ (w/ topic-switch).
\begin{table}
\begin{tabular}{c|c c} \hline
**Metric / Model** & **w/o topic-switch** & **w/ topic-switch** \\ \hline \multicolumn{3}{c}{**Human evaluation**} \\ \hline Coherence & 1.600 & **1.733** \\ Informativeness & 1.467 & **1.800** \\ Engagingness & 1.467 & **1.800** \\ Humanness & 1.467 & **1.733** \\ \hline Average score & 1.500 & **1.767** \\ \hline \multicolumn{3}{c}{**Automatic evaluation**} \\ \hline Distinct-1/2 & 0.339/0.709 & **0.376/0.715** \\ \hline Length (avg.) & 13.970 & **14.333** \\ Topics (avg.) & 4.2 & **6.5** \\ \hline \end{tabular}
\end{table}
Table 2: The human-bot chat strategy: comparison results of _PLATO-JDS_ (w/o topic-switch) and topic-switch based _PLATO-JDS_ (w/ topic-switch).
Figure 6: Two dialogue samples generated by _PLATO-JDS_ with and without the topic-switch. Starting from the same utterance "I have a pain in my stomach", which belongs to the topic "Health", when the user input changes to the new topic "Alcohol" in the red dotted box, our topic-switch based _PLATO-JDS_ switches to the "Alcohol" topic and generates an appropriate response, whereas _PLATO-JDS_ cannot.
## 6 Conclusion
We built a PLATO-2 based Japanese dialogue system, PLATO-JDS, and proposed a topic-switch algorithm that makes PLATO-JDS switch to a new topic when the user input differs from the previous topic. We built the largest Japanese dialogue generation dataset, _Dialogue-Graph_, by collecting 1.656M dialogue data from News, TV subtitles and Wikipedia corpora. Our proposed PLATO-JDS achieved an average human evaluation score of **1.500** with the human-bot chat strategy, and our proposed topic-switch algorithm further improved PLATO-JDS by **0.267**, reaching **1.767**. The results suggest that PLATO-2 can be adopted to achieve high-quality dialogue generation in Japanese and that the topic-switch is effective for improving the user experience of our system.
|
2304.02614
|
The Realizations of Steganography in Encrypted Domain
|
With the popularization and application of privacy protection technologies in
cloud service and social network, ciphertext has been gradually becoming a
common platform for the public to exchange data. Under the cover of such a
platform, we propose steganography in encrypted domain (SIED) in this paper to
realize a novel method for secret communication. Based on Simmons' model
of prisoners' problems, we discuss the application scenarios of SIED. According
to the different accesses to the encryption key and decryption key for the secret
message sender or receiver, the application modes of SIED are classified into
four modes. To analyze the security requirements of SIED, four levels of
steganalysis attacks are introduced based on the prior knowledge about the
steganography system that the attacker is assumed to obtain in advance. Four
levels of security standards of SIED are defined correspondingly. Based on the
existing reversible data hiding techniques, we give four schemes of SIED as
practical instances with different security levels. By analyzing the embedding
and extraction characteristics of each instance, their SIED modes, application
frameworks and security levels are discussed in detail.
|
Yan Ke, Minqing Zhang, Jia Liu, Xiaoyuan Yang
|
2023-03-13T13:39:28Z
|
http://arxiv.org/abs/2304.02614v1
|
# The Realizations of Steganography in Encrypted Domain
###### Abstract
With the popularization and application of privacy protection technologies in cloud service and social network, ciphertext has been gradually becoming a common platform for the public to exchange data. Under the cover of such a platform, we propose steganography in encrypted domain (SIED) in this paper to realize a novel method for secret communication. Based on Simmons' model of prisoners' problems, we discuss the application scenarios of SIED. According to the different accesses to the encryption key and decryption key for the secret message sender or receiver, the application modes of SIED are classified into four modes. To analyze the security requirements of SIED, four levels of steganalysis attacks are introduced based on the prior knowledge about the steganography system that the attacker is assumed to obtain in advance. Four levels of security standards of SIED are defined correspondingly. Based on the existing reversible data hiding techniques, we give four schemes of SIED as practical instances with different security levels. By analyzing the embedding and extraction characteristics of each instance, their SIED modes, application frameworks and security levels are discussed in detail.
Keywords:Steganography, Encrypted Domain, Prisoners' Problems, Steganalysis Attacks, Reversible Data Hiding.
## 1 Introduction
Modern secret communication technology originated from the military demands for secret communication since World War II, and its realization mainly consists of two major technologies: cryptography and steganography [1]. Steganography is an important covert communication technique whose characteristic is to keep the very existence of the secret communication undetectable. Steganography is realized on top of the various public communication platforms that are popular in social life, such as website pages, social networks, image or video sharing, etc.
The prototype of steganography was first defined with modern terminology in Simmons' founding work on subliminal channels and the prisoners' problems [2]. In the prisoners' problems [2], Alice and Bob are in jail and want to devise an escape plan
by exchanging hidden messages in innocent-looking covers (_e.g._, natural images). These covers are conveyed to one another by a common warden, Eve, who can eavesdrop on all covers and can choose to interrupt the communication if they appear to be stego-covers. In this model, the secret communication consists of three basic elements: 1. secret message, 2. cover, and 3. open channel. The open channel of existing steganography schemes is mainly derived from public communication platforms, which have been enriched with the development of social environments. Since privacy protection techniques are widely introduced in cloud services, blockchain technology, and federated learning, ciphertext is gradually becoming a common platform for the public. How to realize covert communication under the cover of ciphertext environments is drawing attention.
This paper mainly focuses on the characteristics and security requirements of steganography under the condition of communication in the encrypted domain, introduces several feasible realizations of steganography in encrypted domain (SIED), and analyzes their principles and security requirements.
## 2 Application modes of SIED
### Common Applications of Ciphertext
Using the prisoners' problems to illustrate the differences between SIED and traditional steganography: SIED is realized on top of a cryptosystem, in which Alice and Bob must perform encryption operations first. Alice and Bob can then embed (extract) secret messages into (out of) the ciphertext. The data transmitted through Eve is the ciphertext carrying additional secret messages, _i.e._, the stego-ciphertext. The proposal of SIED is based on the wide application of ciphertext in everyday social life: there exist real-life applications in which Alice exchanges messages as ciphertext.
For example, to protect the user's privacy, existing social networks, cloud services, blockchain and streaming media platforms support a client Alice encrypting her data before uploading it to the service or to another client. Alice's data is usually information from her daily life rather than confidential information, and the encryption port is just a common service with open access to the public. For some individuals in special environments, the available communication tools are mainly encrypted channels or cryptosystems, _e.g._, staff of core research institutions, international managers of enterprises, or security departments. All of the above could provide a promising cover for the communication of SIED.
Figure 1: The prisoners’ problems
### Application Modes
In a cryptosystem, let the plaintext be denoted as \(\mathbf{P}\), the ciphertext as \(\mathbf{C}\), the encryption key as \(\mathbf{K}_{\text{Enc}}\), the decryption key as \(\mathbf{K}_{\text{Dec}}\), the encryption function as Enc(.,.), and the decryption function as Dec(.,.). The relationships among them are as shown in Eqs. (1)-(2):
\[\mathbf{C} =\text{Enc}(\mathbf{K}_{\text{Enc}}\,,\mathbf{P}) \tag{1}\] \[\mathbf{P} =\text{Dec}(\mathbf{K}_{\text{Dec}}\,,\mathbf{C}) \tag{2}\]
Only the receiver holding the decryption key should be able to read the communication content. SIED is implemented on top of the above communication process.
There are two cases for the data embedding modes of SIED. In the 1\({}^{\text{st}}\) case, the data hider (Alice) is the sender of the cryptosystem, that is, Alice has access to the encryption key, as shown in Fig. 3(a). In the 2\({}^{\text{nd}}\) case, Alice can only embed after encryption, as shown in Fig. 3(b): Alice has to intercept the original ciphertext of the cryptosystem first, and then replay the embedded ciphertext into the channel and send it to the receiver. In this case, Alice's computational complexity is relatively high. Considering the characteristics of modern public key cryptosystems, the encryption key is usually open to the public. Therefore, the 2\({}^{\text{nd}}\) case may be more practical in reality.
There are also two cases for the extraction modes of SIED, based on whether the decryption key is necessary for data extraction, as shown in Fig. 4. In the 1\({}^{\text{st}}\) case, Bob is a legitimate receiver of the cryptosystem with the decryption key and can extract the secret messages from the stego-ciphertext using that key. Even if Bob possesses the decryption key, he may choose not to use it for extraction, depending on the requirements of the extraction function of the SIED algorithm. In practical applications, however, the receiver or owner of the ciphertext is usually not unique, and Bob is just one of them. In the 2\({}^{\text{nd}}\) case, Bob is not a legitimate receiver of the cryptosystem: he needs to intercept the stego-ciphertext by eavesdropping and extracts the secret messages from it without using the decryption key.
To sum up, four application modes of SIED can be distinguished according to whether Alice can use the encryption key and whether Bob can use the decryption key. As shown in Table 1, the application modes include:
* All controlled (AC) mode.
* Extraction controlled (EXC) mode.
* Embedding controlled (EMC) mode.
* All free (AF) mode.

\begin{table}
\begin{tabular}{l l l l} \hline \hline
**Modes** & **Alice** & **Bob** & **Flowchart** \\ \hline AC & With encryption key & With decryption key & Fig. 3(a) \& Fig. 4(a) \\ EXC & Without encryption key & With decryption key & Fig. 3(b) \& Fig. 4(a) \\ EMC & With encryption key & Without decryption key & Fig. 3(a) \& Fig. 4(b) \\ AF & Without encryption key & Without decryption key & Fig. 3(b) \& Fig. 4(b) \\ \hline \hline \end{tabular}
\end{table}
Table 1: The classification of SIED

Figure 4: Extraction modes, _i.e._, the two cases of extracting data from stego-ciphertext by Bob: (a) with decryption key; (b) without decryption key.
In AF mode, the steganography system is free from the security constraints of cryptosystems and could be applied to more complex applications.
The security of steganography is mainly reflected by the degree of concealing the existence of secret communication. Specifically, it requires the indistinguishability of the stego-cover and the original cover. Below, we specifically analyze the security requirements of SIED.
## 3 Security of SIED
### Indistinguishability
The indistinguishability of cover and stego-cover is an important guarantee of stego-security, since what attackers can directly obtain in the open channels is a variety of covers. Current steganography methods also focus on indistinguishability, on which steganographic researchers have done a lot of work in recent years [1]. First, the concept of an abstract distance D (D\(\geq\)0) is introduced, which quantifies the distinguishability between different samples. D(\(P(C)\), \(P(C^{\prime})\))=0 indicates that the ciphertext \(C\) is indistinguishable from the stego-ciphertext \(C^{\prime}\), which means not only imperceptibility to the human visual system, but also undetectability in the sense of statistical analysis [1].
\(C\) and \(C^{\prime}\) are indistinguishable iff (if and only if)
\[\text{D}(P(C),P(C^{\prime}))=0 \tag{5}\]
\(C\) and \(C^{\prime}\) are distinguishable iff
\[\text{D}(P(C),P(C^{\prime}))>0 \tag{6}\]
In practice, indistinguishability is usually not evaluated directly by D=0. Here, (\(t\), \(\varepsilon\))-_indistinguishability_ is defined: \(C\) and \(C^{\prime}\) are (\(t\), \(\varepsilon\))-indistinguishable iff
\[\text{D}(P(C),P(C^{\prime}))<\varepsilon \tag{7}\]
in polynomial time \(t\), where \(\varepsilon\) is an arbitrarily small value.
There are many specific mathematical tools to indicate imperceptibility. ITU-T has defined several subjective evaluation criteria, such as ITU-T Rec. P.910; the mean opinion score is the most representative subjective evaluation method for image quality [3]. For statistical analysis, Cachin used the Kullback-Leibler (KL) divergence [4] to describe the distribution deviation in a steganography system [5], [6]. In [7], F. Cayre first introduced conditional probability to analyze watermarking security, which was referred to as stego-security [8], where \(P(\textbf{{C}}^{\prime}/\textbf{{K}})\) denotes the model of \(C^{\prime}\) under the data hiding key \(K\).
\[P(\textbf{{C}}^{\prime}/\textbf{{K}})=P(\textbf{{C}}) \tag{8}\]
The models of the multimedia covers used in traditional steganography are high-dimensional and highly complex. With the continuous increase of various types of models, the model dimension keeps growing, but the usability of the models still cannot reach provable security [1]. On the other hand, steganalysis tools based on deep learning or reinforcement learning have made breakthroughs, threatening the practicability of steganography schemes. Fortunately, ciphertext has a natural advantage in guaranteeing the theoretical security of steganography, because the ideal distribution model of ciphertext is already given. According to Shannon's "Communication theory of secrecy systems" (1949), standard encrypted data follows a uniformly random distribution. Let the encrypted domain be a finite field F\({}_{q}\); the uniform distribution on F\({}_{q}\) is denoted as _U_(0, _q_). The information entropy of data sampled from _U_(0, _q_) tends to be maximal. Therefore, considering the embedding function and the constraints on the distribution of the stego-ciphertext, the stego-ciphertext should also follow the uniformly random distribution.
\[\text{P}(\textbf{{C}}^{\prime})=\text{P}(\textbf{{C}})=\textit{U}(0,\textit{q}) \tag{9}\]
In SIED, Eq. (9) is required to be satisfied after embedding. However, this alone does not yet achieve steganographic security, because Eve here is only assumed to be an eavesdropper capable of a stego-cover only attack (SCOA). In practice, Eve can do more than SCOA by accumulating knowledge about the steganography and the cryptosystem.
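As a hedged illustration of how the requirement in Eq. (9) can be checked empirically, the sketch below histograms the bytes of a (stego-)ciphertext stream and applies a chi-square goodness-of-fit test against the uniform distribution; the byte-level granularity and the choice of significance test are illustrative choices made here, not part of the paper.

```python
import numpy as np
from scipy.stats import chisquare

def uniformity_check(cipher_bytes: bytes):
    """Test whether a ciphertext byte stream is consistent with U(0, 256)."""
    values = np.frombuffer(cipher_bytes, dtype=np.uint8)
    counts = np.bincount(values, minlength=256)
    stat, p_value = chisquare(counts)   # H0: all 256 byte values equally likely
    return stat, p_value                # a small p-value signals non-uniformity
```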
### Attacks from Eve
According to game theory, there exists a unique equilibrium between two rational parties with certain prior knowledge and fixed rules of the game. Based on the analysis of attack grading by Ke [1], once the goal and the rules of the game are fixed, the maximum possible benefit of Eve is only associated with her prior knowledge [9][10]. As described in Section 2, the stego-ciphertext can be received by legitimate receivers of the cryptosystem or by eavesdroppers, and Eve's ability is based on these two identities. Attacks from Eve include: stego-cover only attack (SCOA), known cover attack (KCA), chosen cover attack (CCA), and adaptive chosen cover attack (ACCA), which are graded based on her prior knowledge of the steganography and cryptography systems.
SIED involves two types of operations: encryption/decryption and embedding/extraction. The embedding methodology is usually not universal, but is proposed based on a specific encryption method. Therefore, the cover includes not only the ciphertext, but also the plaintext and the encryption method adopted in SIED. Intuitively, the more information about the target is obtained, the easier it is to carry out the attack, which is then called a higher-level attack. The higher-level attack an algorithm can resist, the higher-level security it possesses.
### Security of steganography in encrypted domain
**Security against SCOA**
SCOA is the primary-level attack on SIED, where the attacker is assumed to have access only to a set of stego-ciphertexts, in addition to some knowledge of natural ciphertext. The attacker, Eve, is an eavesdropper outside the cryptosystem and possesses neither the encryption nor the decryption key. The prior knowledge that Eve can use in SCOA comes from the stego-ciphertext present in the channel, e.g., the basic size of the ciphertext data, the complexity of decryption, and the theoretical distribution characteristics of ciphertext. For example, in reality an attacker may monitor public channels to obtain the stego-ciphertext.
To ensure security against SCOA, embedding should result neither in additional data expansion of the ciphertext nor in an increase of the decryption complexity. As discussed in Section 3.1, the stego-ciphertext should follow the uniformly random distribution, so that Eq. (10) is satisfied to resist SCOA.
\[\text{D}(P(\textbf{{C}}),P(\textbf{{C}}^{\prime}))=0\ \text{ or }\ \text{D}(P(\textbf{{C}}^{\prime}),U(0,q))=0 \tag{10}\]
**Security against KCA**
KCA is a steganalysis attack where, in addition to the prior knowledge under SCOA, Eve also obtains several tuples of the plaintext, the ciphertext, its corresponding stego-ciphertext and the decrypted result of the stego-ciphertext. The number of tuples is finite within polynomial complexity. Eve can not only carry out the learning in SCOA, but also learn from the original ciphertext, the stego-ciphertext and the decrypted result of the stego-ciphertext. For example, in reality, in addition to monitoring all open channels, the attacker might be a legitimate receiver of the cryptosystem who can normally decrypt the ciphertext. Compared with SCOA, KCA is more common and universal in applications.
Security against KCA requires the indistinguishability of the decrypted result of the stego-ciphertext and the plaintext besides the security requirements against SCOA.
\[\text{Dec}(\textbf{{K}}_{\text{Dec}}\,,\,\textbf{{C}})=\text{Dec}(\textbf{{K}}_{\text{Dec}}\,,\,\textbf{{C}}^{\prime}) \tag{11}\]
PSNR (Peak Signal-to-Noise Ratio) or MSE (Mean Squared Error) can be used here: a PSNR of \(\infty\) demonstrates that Eq. (11) is satisfied.
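A short sketch of this check is given below; it assumes plaintexts are represented as 8-bit arrays, which is an illustrative choice.

```python
import numpy as np

def psnr(plaintext: np.ndarray, decrypted_stego: np.ndarray, peak: float = 255.0) -> float:
    """PSNR between the original plaintext and the directly decrypted stego result.

    Eq. (11) holds exactly when MSE = 0, i.e. PSNR = infinity.
    """
    diff = plaintext.astype(np.float64) - decrypted_stego.astype(np.float64)
    mse = np.mean(diff ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)
```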
**Security against CCA**
CCA: in addition to the prior knowledge under KCA, the attacker can also invoke the embedding or extraction process of the current steganography system several times; the number of invocations is finite within polynomial complexity. Besides the learning phase of KCA, Eve can invoke the embedding or extraction process to learn from the changes of the process variables during embedding or extraction. The cover invoked can be a stego-cover or one forged by Eve. For example, in reality, the attacker can probe the cryptography and SIED systems several times, or instigate a user of the systems to return intermediate results of step-by-step operations on the stego-ciphertext.
On the basis of all the requirements of security against KCA, security against CCA requires that all the process variables generated by each step of decrypting the stego-ciphertext are indistinguishable from the variables generated by decrypting the original ciphertext. Let the set of process variables generated from decrypting the ciphertext \(\mathbf{C}\) be \(\Theta=\{\mathbf{\delta}_{1},\mathbf{\delta}_{2},\mathbf{\delta}_{3},\ldots\mid\mathbf{\delta}\leftarrow\text{Dec}(\mathbf{K}_{\text{Dec}},\mathbf{C})\}\), and the set of process variables generated from decrypting the stego-ciphertext \(\mathbf{C}^{\prime}\) be \(\Theta^{\prime}=\{\mathbf{\delta}_{1}^{\prime},\mathbf{\delta}_{2}^{\prime},\mathbf{\delta}_{3}^{\prime},\ldots\mid\mathbf{\delta}^{\prime}\leftarrow\text{Dec}(\mathbf{K}_{\text{Dec}},\mathbf{C}^{\prime})\}\). Then, for any \(\mathbf{\delta}\in\Theta\) and \(\mathbf{\delta}^{\prime}\in\Theta^{\prime}\),
\[\text{D}(\mathbf{\delta},\mathbf{\delta}^{\prime})=0 \tag{12}\]
Bob has to use the secret key to decrypt the stego-ciphertext to obtain the marked plaintext, from which the secret message can be extracted. The framework is shown in Fig. 5 and conforms to the AC mode.
**Security analysis**
Since FHEE-DE is constructed from standard FHE operations, the data distribution of the stego-ciphertext is consistent with that of normal ciphertext. No secondary data expansion results from embedding, so Eq. (10) is satisfied. Therefore, the scheme can resist SCOA.
However, the scheme based on FHEE-DE cannot resist KCA. As shown in Fig. 5, the secret message can only be extracted from the marked plaintext, so decrypting the stego-ciphertext yields the marked plaintext instead of the original plaintext. Although the distortion in the marked plaintext can be reduced to a low level (PSNR \(\geq\) 65 dB in [13]) and a recovery process can be performed to obtain the lossless plaintext, this cannot prevent an Eve who owns the decryption key from finding the distortion in the decryption result and thus confirming the existence of secret messages.
### SIED scheme against KCA
As analyzed in Section 2, it is common in practice to face an Eve who owns the decryption key. Therefore, an RDH-ED algorithm cannot act as the embedding function of SIED against KCA as long as direct decryption distortion exists. Resisting KCA is far more relevant in practice than resisting SCOA alone.
**Algorithm and Framework**
The instance of SIED against KCA is based on Ke's algorithm in [14], which utilizes the controllable redundancy of the LWE cryptosystem for embedding in the encrypted domain. The scheme in [14] takes advantage of the redundancy of the quantization element to embed data; embedding causes no additional data expansion and the encryption strength is well maintained. The framework is shown in Fig. 6. Alice embeds the secret message into the ciphertext to obtain the stego-ciphertext, which is then transmitted
Figure 5: The application framework of SIED based on FHEE-DE in [13].
to Bob, who has the secret key to extract the secret messages from the stego-ciphertext. It should be noted that directly decrypting the stego-ciphertext yields the plaintext, so no recovery process is needed. This conforms to the EXC mode.
**Security analysis**
The embedding process of [14] does not result in any secondary data expansion of the ciphertext. It has been deduced in [14] that the stego-ciphertext follows the same distribution as the original ciphertext, thus meeting the security requirement D(\(P(\boldsymbol{C})\), \(P(\boldsymbol{C}^{\prime})\))=0. According to the theoretical analysis of the extraction function and the experimental results in [14], the directly decrypted result of the stego-ciphertext is the lossless plaintext, which meets the requirement in Eq. (11): Dec(\(\boldsymbol{K}_{\text{Dec}}\), \(\boldsymbol{C}\)) = Dec(\(\boldsymbol{K}_{\text{Dec}}\), \(\boldsymbol{C}^{\prime}\)). Therefore, the SIED based on [14] can resist KCA.
To resist CCA, none of the possibly exposed encryption parameters after embedding may reveal any clue to the existence of steganography. However, the quantization vectors from the original ciphertext and the stego-ciphertext follow different distributions, as shown in Fig. 7.
There are more peaks in Fig. 7(b) than in Fig. 7(a); namely, D(\(\boldsymbol{\delta}\), \(\boldsymbol{\delta}^{\prime}\))\(\neq\)0 for some \(\boldsymbol{\delta}\in\Theta\), \(\boldsymbol{\delta}^{\prime}\in\Theta^{\prime}\). Eve can learn these differences to confirm the presence of steganography with high probability, so the scheme cannot resist CCA.
### SIED scheme against CCA
**Algorithm and Framework**
In this section, a lossless data hiding method in the encrypted domain based on an encryption variable refreshing (EVR) algorithm is proposed, which can act as the embedding function of a realization of SIED against CCA. EVR is based on cryptosystems
Figure 6: The application framework of SIED based on RDH-ED in [14].
Figure 7: Distributions of quantization vectors from (a) Original ciphertext; (b) Stego-ciphertext.
with randomly sampled variables in the encryption process, such as Paillier encryption. By refreshing the variable randomly, another ciphertext is generated. Bits of the ciphertext are sampled from specific positions: if the sampled bits match the to-be-embedded bits, the stego-ciphertext is obtained; if not, the variable is refreshed again until the resampled bits match. The embedding function of EVR for SIED based on Paillier encryption is as follows.
#### Key generation
Choose two big primes \(p\) and \(q\), and calculate \(N\)=\(p\times q\). Calculate the least common multiple of (\(p\)-1, \(q\)-1), denoted as \(\lambda\). Choose an integer \(g\in Z_{{}_{N^{2}}}^{{}^{*}}\) with gcd(L(\(g^{\lambda}\)mod\(N^{2}\)), \(N\))=1, where L(\(x\))=(\(x\)-1)/\(N\) and \(Z_{{}_{N^{2}}}^{{}^{*}}\) is the set of integers that are less than \(N^{2}\) and relatively prime to \(N^{2}\). The public key for encryption is \(K_{1}=(N,\,g)\), and the secret key for decryption is \(K_{2}=(p,\,q,\,\lambda)\).
#### Encryption
Choose an encryption process variable \(r\in Z_{{}_{N}}^{{}^{*}}\) at random. For plaintext \(m\in Z_{N}\), the encrypted result is \(c=\operatorname{Enc}(K_{1}\,,m)=g^{m}\,r^{N}\,\text{mod}\,\,N^{2}\).
#### Decryption
The decrypted result of \(c\) is \(m=\operatorname{Dec}(K_{2}\,,\,c)=\frac{L(c^{\lambda}\,\text{mod}\,N^{2})}{L(g^{\lambda}\,\text{mod}\,N^{2})}\,\text{mod}\,N\).
#### Bit sampling embedding by EVR
The to-be-refreshed variable is \(r\), and the to-be-embedded bit is denoted as \(b_{s}\). LSB(.) returns the least significant bit of the input integer.
Step 1: If \(b_{s}=\text{LSB}(c)\), set \(c^{\prime}=c\); if \(b_{s}\neq\text{LSB}(c)\), refresh \(r\) to a new value \(r^{\prime}\neq r\).
Figure 8: The application framework of SIED based on EVR.
Figure 9: The application framework of SIED based on KS-LSB.
Step 2: \(c^{\prime}=\text{Enc}(K_{1}\,,m)=g^{m}\cdot r^{\prime N}\,\text{mod}\ N^{2}\).
Step 3: Repeat Step 2 (refreshing \(r^{\prime}\)) until LSB(\(c^{\prime}\))=\(b_{s}\). Return \(c^{\prime}\) as the stego-ciphertext.
Decryption: \(m=\text{Dec}(K_{2}\,,\,c^{\prime})\).
Extraction: \(b_{s}=\text{LSB}(c^{\prime})\).
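The toy-sized Python sketch below walks through the steps above on textbook Paillier with \(g=N+1\); the primes are far too small for real security and are chosen only to keep the demo readable (Python 3.9+ is assumed for `math.lcm` and the modular inverse via `pow`).

```python
import math
import secrets

# Key generation (toy parameters; real deployments need large primes).
p, q = 1009, 1013
N, N2 = p * q, (p * q) ** 2
lam = math.lcm(p - 1, q - 1)
g = N + 1                                   # a standard valid choice of g
L = lambda x: (x - 1) // N
mu = pow(L(pow(g, lam, N2)), -1, N)         # precomputed decryption constant

def enc(m, r):                              # c = g^m * r^N mod N^2
    return (pow(g, m, N2) * pow(r, N, N2)) % N2

def dec(c):                                 # m = L(c^lambda mod N^2) * mu mod N
    return (L(pow(c, lam, N2)) * mu) % N

def evr_embed(m, b_s):
    """Refresh r until the ciphertext's least significant bit equals b_s."""
    while True:
        r = secrets.randbelow(N - 1) + 1
        if math.gcd(r, N) != 1:             # keep r in Z_N^*
            continue
        c = enc(m, r)
        if (c & 1) == b_s:                  # LSB(c') == b_s -> stego-ciphertext
            return c

m, b_s = 42, 1
c_stego = evr_embed(m, b_s)
assert dec(c_stego) == m                    # lossless direct decryption
assert (c_stego & 1) == b_s                 # extraction: b_s = LSB(c')
```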
The application is shown in Fig. 8. Alice refreshes the variables to obtain the stego-ciphertext, and Bob can eavesdrop on the stego-ciphertext from the transmission channel. The decryption key is not necessary for Bob, even if he is a legitimate receiver of the cryptosystem: the decryption and extraction processes are independent. Therefore, the plaintext can be obtained without loss by directly decrypting the stego-ciphertext. This conforms to the EMC mode.
**Security analysis**
Under CCA, Eve cannot obtain any useful information from the stego-ciphertext, which is just another normal ciphertext, namely, D(\(P(\mathbf{C})\), \(P(\mathbf{C}^{\prime})\))=0. No ciphertext expansion exists and the computational complexity of decryption is not increased. There is no decryption distortion, _i.e._, \(\text{Dec}(\mathbf{K}_{\text{Dec}}\,,\,\mathbf{C})=\text{Dec}(\mathbf{K}_{\text{Dec}}\,,\,\mathbf{C}^{\prime})\). Since the operations of EVR embedding are all standard encryption operations, all the process variables conform to the distribution of the original variables of Paillier encryption, that is, Eq. (12) is satisfied: D(\(\mathbf{\delta}\), \(\mathbf{\delta}^{\prime}\))=0 for all \(\mathbf{\delta}\in\Theta\), \(\mathbf{\delta}^{\prime}\in\Theta^{\prime}\). Therefore, the SIED based on EVR can resist CCA.
Under ACCA, Eve can repeatedly learn from the generation of the stego-ciphertext. Since Paillier encryption cannot resist the adaptive chosen ciphertext attack [15] and the embedding process is part of the encryption process, Eve can study the adopted Paillier encryption to learn about the bit sampling performed by EVR. Therefore, there is a high probability that the repeated sampling process will raise suspicion.
To resist an Eve capable of ACCA, two solutions can be considered. One is to adopt a cryptosystem that resists the adaptive chosen ciphertext attack, which puts higher requirements on the encryption environment. The other is to construct an embedding method in which the embedding process and the encryption process are separable.
### SIED scheme against ACCA
**Algorithm and Framework**
We introduce the key-switching based LSB (KS-LSB) algorithm in [13] as the embedding function of the realization of SIED against ACCA. In the embedding process of KS-LSB, the data-hiding key is constructed from the switching key, which is independent of the encryption key and can be openly published. Extraction is independent of the decryption process, and Bob can extract data without using the decryption key. Therefore, it conforms to the AF mode.
The application is shown in Fig. 9. Alice implements KS-LSB to obtain the stego-ciphertext without using the encryption key; it is then transmitted to the receiver of the cryptosystem. Bob can eavesdrop on the stego-ciphertext and extract data from it without the decryption key. By directly decrypting the stego-ciphertext, the plaintext can be obtained without loss, and the decryption and extraction processes are independent of each other.
**Security analysis**
Embedding results in neither data expansion of the ciphertext nor additional computational complexity of decryption. The directly decrypted result of the stego-ciphertext is the lossless plaintext, namely, \(\text{Dec}(\textbf{{K}}_{\text{Dec}}\), \(\textbf{{C}})=\text{Dec}(\textbf{{K}}_{\text{Dec}}\), \(\textbf{{C}}^{\prime})\). KS-LSB generates another standard ciphertext, and Eve cannot gain any useful information from the stego-ciphertext or the process variables, namely, D(\(P(\textbf{{C}})\), \(P(\textbf{{C}}^{\prime})\))=0 and D(\(\boldsymbol{\delta}\), \(\boldsymbol{\delta}^{\prime}\))=0 for all \(\boldsymbol{\delta}\!\in\!\Theta\), \(\boldsymbol{\delta}^{\prime}\!\in\!\Theta^{\prime}\). Key-switching is implemented in lattice-based encryption, which is provably secure and resists the adaptive chosen ciphertext attack [13]. It encapsulates KS-LSB embedding well, so that Eve cannot learn any difference in the variables before and after embedding by repeatedly analyzing the embedding or extraction operations, and the processes of embedding and encryption are separable. Therefore, the SIED based on KS-LSB can resist ACCA.
## 5 Conclusion
With the popularization and application of privacy protection technologies in cloud service and social network, ciphertext has been gradually becoming a common platform for the public to exchange data. In this paper, we propose steganography in the encrypted domain. Based on Simmons' model of the prisoners' problems, we discuss the application scenarios and security requirements of SIED. According to the role of the encryption key and decryption key in data embedding and extraction, the application modes of SIED are classified into four modes. Four levels of steganalysis attacks are introduced based on the prior knowledge about the steganography system that the attacker is assumed to obtain, and the four corresponding levels of steganography security of SIED are defined. Based on existing reversible data hiding algorithms, we give four schemes of SIED as practical instances with different security levels. By analyzing the embedding and extraction characteristics of each instance, their SIED modes, application frameworks and security levels are discussed in detail.
## Acknowledgment
This work was supported by the National Natural Science Foundation of China under Grants No. 62102450, No. 61872384 and No. 62272478, and by the fundamental research fund project of Engineering University of PAP under Grant No. WJY202112. The authors also gratefully acknowledge the helpful comments and suggestions of the reviewers.
|
2305.15041
|
Generating Faithful Synthetic Data with Large Language Models: A Case
Study in Computational Social Science
|
Large Language Models (LLMs) have democratized synthetic data generation,
which in turn has the potential to simplify and broaden a wide gamut of NLP
tasks. Here, we tackle a pervasive problem in synthetic data generation: its
generative distribution often differs from the distribution of real-world data
researchers care about (in other words, it is unfaithful). In a case study on
sarcasm detection, we study three strategies to increase the faithfulness of
synthetic data: grounding, filtering, and taxonomy-based generation. We
evaluate these strategies using the performance of classifiers trained with
generated synthetic data on real-world data. While all three strategies improve
the performance of classifiers, we find that grounding works best for the task
at hand. As synthetic data generation plays an ever-increasing role in NLP
research, we expect this work to be a stepping stone in improving its utility.
We conclude this paper with some recommendations on how to generate
high(er)-fidelity synthetic data for specific tasks.
|
Veniamin Veselovsky, Manoel Horta Ribeiro, Akhil Arora, Martin Josifoski, Ashton Anderson, Robert West
|
2023-05-24T11:27:59Z
|
http://arxiv.org/abs/2305.15041v1
|
# Generating Faithful Synthetic Data with Large Language Models: A Case Study in Computational Social Science
###### Abstract
Large Language Models (LLMs) have democratized synthetic data generation, which in turn has the potential to simplify and broaden a wide gamut of NLP tasks. Here, we tackle a pervasive problem in synthetic data generation: its generative distribution often differs from the distribution of real-world data researchers care about (in other words, it is _unfaithful_). In a case study on sarcasm detection, we study three strategies to increase the faithfulness of synthetic data: grounding, filtering, and taxonomy-based generation. We evaluate these strategies using the performance of classifiers trained with generated synthetic data on real-world data. While all three strategies improve the performance of classifiers, we find that grounding works best for the task at hand. As synthetic data generation plays an ever-increasing role in NLP research, we expect this work to be a stepping stone in improving its utility. We conclude this paper with some recommendations on how to generate high(er)-fidelity synthetic data for specific tasks.
## 1 Introduction
From data annotation (Gilardi et al., 2023) to dataset creation (Josifoski et al., 2023), synthetic data offers previously unseen flexibility in the models we train (Eldan and Li, 2023) and in defining what and how we study the world around us (Ziems et al., 2023). Further, large language models (hereinafter LLMs) are now easily accessible through APIs, substantially decreasing the expertise and the time necessary to generate synthetic data and labels.
Here, we examine a pervasive problem in synthetic data generation with LLMs: _faithfulness_. The generative distribution of synthetic data created by LLMs often differs from the distribution of real-world data that we care about (Alaa et al., 2022). For instance, if we ask LLMs to generate tweets, these will likely be much better written than real tweets, and the topics and themes of those are likely to be less diverse. This is problematic, as classifiers trained on synthetic data would be systematically biased and may not perform well in real-world contexts.
We study three strategies to increase the faithfulness of synthetic data generated by LLMs: grounding, filtering, and taxonomy-based generation. As illustrated in Fig. 1, grounding consists of providing real-world examples from a training set in the LLM prompt; filtering consists of using a discriminator model (trained to distinguish real and synthetic data) to cull unfaithful synthetic data; and taxonomy-based generation consists of including a taxonomy in the prompt to encourage diversity.
We evaluate the aforementioned proposed strategies with a case study in Computational Social Science (CSS), a multidisciplinary field where easily accessible synthetic data and labels may be transformative in the years to come (Bail, 2023). Research in CSS often uses simple classifiers to estimate a linguistic characteristic or trait (referred to in the paper as a construct) in large text corpora, often obtained from the Web (Salganik, 2019). In that context, LLMs have been used to directly annotate the data in zero-shot fashion (Ziems et al., 2023), and, more relevant to the work at hand, to create synthetic data to train models in complex or low-resource tasks (Moller et al., 2023).
In the latter context, we consider the task of sarcasm detection, and using an existing dataset we evaluate the performance of each of the proposed strategies in increasing the faithfulness of synthetically generated data. Using the macro-F1 of the classifiers trained with different prompting strategies as a proxy for the faithfulness of synthetic data, we find that grounding provides the best performance out of all classifiers trained with synthetic data. However, the model still performs worse in terms of macro-F1 than zero-shot ChatGPT annotation and a model trained on the real data.
## 2 Related work
**Data augmentation.** In low-resource and unbalanced settings, augmenting datasets with synthetic data can improve model performance in a variety of NLP tasks, including relation extraction (Papanikolaou and Pierleoni, 2020), sarcasm detection (Abaskohi et al., 2022), translation (Sennrich et al., 2015), and sentiment analysis (Maqsud, 2015); see Feng et al. (2021) for a comprehensive survey. Specifically relevant to this paper is the work of Moller et al. (2023), which uses ChatGPT to generate new samples for sentiment, hate speech, and a social dimension, a low-resource task. Finally, Anaby-Tavor et al. (2020) proposed a general methodology for fine-tuning a language model on small datasets. The authors highlight that the synthetic data was unfaithful to the real-world data distribution, thus warranting a filtering scheme to remove unfaithful data points.
**Synthetic dataset creation.** Recent work has stretched beyond data augmentation to creating fully synthetic datasets. Eldan and Li (2023) used LLMs to create "Tiny Stories," showcasing how small a language model can learn the language of 2 to 3-year-old children. This paper relied on a form of "grounding" to encourage diversity in the concepts discussed. Another work by Josifoski et al. (2023) sampled knowledge graph triplets and generated texts using GPT-3. They then fine-tuned a model entirely on the synthetic data, and noted that the data was dissimilar from real human data.
**Synthetic data as a proxy for humans.** LLMs can also act as good proxies for specific human sub-populations (Argyle et al., 2022), leading to a series of studies using LLMs as "silicon samples" (Argyle et al., 2022; Horton, 2023; Dillion et al., 2023). Typically, these analyses have been done through a variant of controlled text generation (review available here (Zhang et al., 2022)). Further, an ever-increasing body of work illustrated the good performance of using LLMs as a proxy for human labeling (Wang et al., 2023; Gilardi et al., 2023; Ziems et al., 2023).
Naive synthetic data generation with LLMs, e.g., the **Simple** strategy in Fig. 1, can lead to data that is unfaithful to the underlying real-world data distribution (Josifoski et al., 2023). This paper's contribution is to propose and evaluate prompting strategies that allow us to address this issue.
Figure 1: Depiction of the proposed strategies to increase the faithfulness of synthetically generated data. On the left-hand side, we depict different prompting strategies: asking an LLM to generate synthetic data with a simple prompt (**Simple**); grounding the synthetic data generation with real-world examples (**Grounding**-rewrite); and providing a taxonomy along with your prompt (**Taxonomy**). We also train a discriminator to distinguish between real and fake prompts and filter the data (as indicated by the dotted orange boxes on the right-hand side; **Filtering**).
## 3 Methods
### Data
We use the _sarcasm detection_ dataset from the SemEval-2022 Task 6 (Farha et al., 2022). The train set includes over two thousand self-disclosed instances of sarcasm shared on Twitter. We choose sarcasm because it is an inherently difficult construct to capture and annotate: sarcastic texts are highly context-specific and ambiguous by nature. Annotating a sarcastic corpus has been a long-standing problem, with sarcastic comments representing < 1% of all text on social media (Reddit, for example). This renders it infeasible to blindly annotate texts, since finding an instance of sarcasm is like searching for a needle in a haystack. Consequently, papers have traditionally relied on various heuristics to generate these datasets, like using the self-disclosed /s tag or asking users to share their own sarcastic Tweets (our task). These heuristics, however, lead to noisy labels and annotator bias (Oprea and Magdy, 2019).
### Evaluation
When evaluating how well our synthetic data captures a linguistic construct, we make the following assumption: _if a construct is properly present in a synthetic dataset, then a model fine-tuned on that dataset will successfully generalize to a real human dataset_. We thus evaluate our synthetic data in three steps. First, we split the human-annotated data into two groups, train and test, throwing away the labels of the train data. Second, we synthetically generate a new corpus through our various prompting strategies (see below). Third, we fine-tune a model on each generated synthetic dataset and evaluate it on the test portion of the human-annotated data.
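The sketch below summarizes the third step; the classifier interface is assumed, and scikit-learn's macro-F1 is used as the evaluation score.

```python
from sklearn.metrics import f1_score

def evaluate_synthetic_training(classifier, human_test_texts, human_test_labels):
    """Score a classifier fine-tuned on synthetic data against real test data.

    The resulting macro-F1 is the proxy we use for the faithfulness of the
    synthetic training corpus.
    """
    predictions = [classifier(text) for text in human_test_texts]
    return f1_score(human_test_labels, predictions, average="macro")
```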
### Prompting
To understand where synthetic data fails, we begin our analysis by manually inspecting the generated data. Three co-authors reviewed hundreds of examples of synthetically generated vs. real sarcastic texts and annotated their differences. We found that synthetic data generated with simple prompts: 1) exhibits a lack of topical diversity, i.e., it is centered around a few topics of discussion; 2) lacks diversity in the construct of interest (namely sarcasm1); and 3) is not well stylistically aligned with real data: the authors could easily discriminate between synthetic and real texts. These three limitations and the corresponding prompt designs are described in Table 1.2
Footnote 1: There are many ways a linguistic construct like sarcasm can manifest (irony, over- or under-statement, satire, etc.), and typically the language model would retreat to superficial notions of sarcasm like beginning sentences with “Oh” or “Wow”.
Footnote 2: The prompts in entirety are available at [https://github.com/epfl-dlab/faithful-data-gen](https://github.com/epfl-dlab/faithful-data-gen).
We propose three prompting strategies to account for these limitations, each building off the next. Examples of how the prompts build off each other are illustrated in Figure 2 and discussed below.
**Grounding.** We encourage topical diversity by _grounding_ the generations in real textual data. Specifically, in the prompt, we include an example of a real text and ask the model to either 1) generate new semantically similar examples (like in Moller et al. (2023) or Eldan and Li (2023)) or 2) rewrite the input text (style transfer).
**Taxonomy-based generation.** We break up generation into two steps, asking the LLM to 1) theorize \(k\) ways a text can possess a specific construct and then sample across these \(k\) approaches, and 2) rewrite the text according to a specific variant of the construct. The idea here is that generation based on an initial taxonomy can cover a wider segment of _how_ a text can actually convey a construct, improving the downstream model.
**Filtering.** We fine-tune a model to discriminate between real and synthetic text and run it on the full batch of synthetically generated samples from the **Grounding** data. We then cull the examples that have a high likelihood of being synthetic. We do this because, at times, the synthetic data has artifacts that are always associated with a construct. Specifically, we fine-tune a BERT model to distinguish between the first decoding (i.e., if we generate 10 sentences, we only take the first sentence) and the real text containing the construct.
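A minimal sketch of the filtering step follows; the discriminator interface and the probability cutoff are assumptions made for illustration rather than values reported in the paper.

```python
def filter_synthetic(examples, prob_synthetic, max_prob_synthetic=0.9):
    """Keep only generated examples the discriminator does not flag as synthetic.

    examples:        iterable of (text, label) pairs produced by the LLM.
    prob_synthetic:  callable mapping a text to the fine-tuned BERT
                     discriminator's probability that it is synthetic.
    """
    return [(text, label) for text, label in examples
            if prob_synthetic(text) < max_prob_synthetic]
```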
For simple prompts, we ask the LLM to generate sarcastic and not-sarcastic text, and for prompts
\begin{table}
\begin{tabular}{l l} \hline
**Goal** & **Strategy** \\ \hline Diversity in construct & Taxonomy creation \\ \hline Diversity in topics & Grounding \\ \hline Stylistic matching & Rewrite \\ \hline \end{tabular}
\end{table}
Table 1: Description of objectives in synthetic data generation alongside specific strategies to achieve them.
using **grounding**, we polarize each point in our dataset into two directions, i.e., making it both sarcastic and not-sarcastic. In practice, this means that for each prompt in Fig. 1, we have an alternate version where we replace the word "sarcastic" with "not-sarcastic", resulting in a synthetic dataset that is balanced across the two classes.
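As a hedged sketch of this polarization, the template and names below (`GROUNDED_TEMPLATE`, `build_polarized_prompts`, `example_tweet`) are illustrative assumptions; the exact prompt wording is the one released with the paper.

```python
GROUNDED_TEMPLATE = (
    'Here is a real tweet:\n"{example}"\n'
    "Write 10 new tweets that are semantically similar to it but clearly {label}."
)

def build_polarized_prompts(example_tweet):
    # One grounded prompt per class, differing only in the target label.
    return {label: GROUNDED_TEMPLATE.format(example=example_tweet, label=label)
            for label in ("sarcastic", "not-sarcastic")}
```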
### Models
**Generative model.** To generate the synthetic data, we used ChatGPT.3 The generation parameters for the model were set to temperature: 1, top-p: 1, frequency penalty: 0.5, presence penalty: 0.4, max tokens: 700. We chose these parameters to maximize the diversity of the decoded text. The frequency penalty reduces the probability of a word according to how often it has already occurred, while the presence penalty puts a flat cost on each word once it has occurred in the text. These two forms of penalties encourage the model to produce higher-perplexity text instead of always selecting the next most probable word. Moreover, temperature scaling produces a shallower distribution over the next tokens, and a top-_p_ of 1 means all of these tokens are considered.
Footnote 3: [https://openai.com/blog/chatgpt](https://openai.com/blog/chatgpt)
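For concreteness, a minimal generation call with these parameters might look as follows; this sketch assumes the legacy `openai` Python client and the `gpt-3.5-turbo` endpoint behind ChatGPT, which may differ from the exact interface used for this work.

```python
import openai  # legacy (pre-1.0) client interface

def generate(prompt):
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",  # assumed ChatGPT endpoint
        messages=[{"role": "user", "content": prompt}],
        temperature=1,
        top_p=1,
        frequency_penalty=0.5,
        presence_penalty=0.4,
        max_tokens=700,
    )
    return response["choices"][0]["message"]["content"]
```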
The generated data is then post-processed to remove artifacts of the generation. We defined these rules based on manual examination. The two most common problems were the model responding to the request in the affirmative ("Sure, here you go:") and outlining which taxonomy it uses before generating the sentence (present only in the taxonomy-based prompting). Both of these issues were addressed by splitting on the first colon character ":" and keeping only the text after it.
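A sketch of these two post-processing rules, applied exactly as described (the function name is illustrative):

```python
def clean_generation(text):
    """Strip affirmative preambles and taxonomy headers from a generated sample."""
    text = text.strip()
    if ":" in text:
        # "Sure, here you go: ..." or a taxonomy header "...:" -> keep the tail.
        text = text.split(":", 1)[1].strip()
    return text
```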
**Fine-tuned model.** Similar to previous work, we fine-tune an E5-base model on the synthetic data (Moller et al., 2023; Wang et al., 2022). This model was originally trained using a contrastive loss and achieves strong performance in a fine-tuned classification setting. During fine-tuning, we kept the settings from previous work, with a learning rate of \(2\times 10^{-5}\), a batch size of 32, and training for 10 epochs.
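A minimal fine-tuning sketch with these hyperparameters is shown below; the `intfloat/e5-base` checkpoint and the tiny in-line dataset are assumptions for illustration, not the exact training setup.

```python
import torch
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

checkpoint = "intfloat/e5-base"  # assumed E5-base checkpoint
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)

# Placeholder synthetic corpus; in practice this is the generated dataset.
texts = ["Oh great, another Monday meeting.", "I enjoyed a quiet walk today."]
labels = [1, 0]  # 1 = sarcastic, 0 = not sarcastic

class SyntheticDataset(torch.utils.data.Dataset):
    def __init__(self, texts, labels):
        self.enc = tokenizer(texts, truncation=True, padding=True, max_length=128)
        self.labels = labels
    def __len__(self):
        return len(self.labels)
    def __getitem__(self, i):
        item = {k: torch.tensor(v[i]) for k, v in self.enc.items()}
        item["labels"] = torch.tensor(self.labels[i])
        return item

args = TrainingArguments(output_dir="e5-sarcasm", learning_rate=2e-5,
                         per_device_train_batch_size=32, num_train_epochs=10)
Trainer(model=model, args=args, train_dataset=SyntheticDataset(texts, labels)).train()
```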
## 4 Results
**Model performance.** We show the accuracy and the macro-F1 score for the different prompting strategies in the second and third columns of Table 2. A baseline predicting all data points in the training set as not-sarcastic ("All non-sarcastic") yields an accuracy of 0.72 and a macro-F1 score of 0.43. In practice, we find that models trained with all prompting strategies perform worse accuracy-wise than this baseline, and thus it is more meaningful to compare their macro-F1 scores.
We find that the "simple" prompting strategy generalized the worst (macro-F1 score of 0.48), perhaps due to the lack of topical and construct diversity in the synthetically generated data. Note that here we prompted the model to generate 10 random instances of sarcastic and non-sarcastic texts five hundred times. The two synthetic datasets that performed best (macro-F1 score: 0.55) were derived from the "grounding" prompting strategy, where the prompt asked the LLM to, given an example, generate semantically similar text ("Grounding," the 2nd row) or re-write it ("Grounding (rewrite)," the 3rd row). Prompting with grounding and an LLM-generated taxonomy yielded a result between the "simple" and the "grounding" prompting strategies ("Grounding + Taxonomy," macro-F1 score: 0.51). Last, grounding the prompt and then filtering responses that were classified as synthetic with our discriminator yielded poor results ("Grounding + Filtering," macro-F1 score 0.26).
Finally, we note that zero-shot ChatGPT actually yields a higher macro-F1 score (0.60) than smaller models trained with synthetically generated data.
**Believability.** For each synthetic dataset generated, we further estimate how effective it is at fooling a synthetic vs. real classifier (we refer to this as the dataset's believability). The discriminator
Figure 2: Our prompting approach consists of four modular steps. (1) Initiate the model to generate an initial set of 10 data points. (2) Apply a grounding technique as the model generates these 10 data points. (3) Further augment the grounding process by providing the model with an initial taxonomy. (4) Lastly, the results from the grounding phase are filtered through a real-synthetic classifier to ensure their authenticity.
model was trained on individual generations of sarcastic and non-sarcastic text, together with real examples, and fine-tuned to predict whether a text is real or synthetic. We report the fraction of each dataset predicted to be real by this classifier in the 4th column of Table 2, "Believability." Note that for the groundtruth annotations (which are all real), we obtain a score of 95%, meaning that the classifier considered 95% of the text to be real.
The dataset with the highest "believability" is the one created using the grounding and filtering strategies ("Grounding + Filtering," believability 0.56). However, this metric may not capture faithfulness accurately in this case, as the criteria used for filtering are the same as the ones used to calculate the "believability" of a dataset. Thus, of the remaining strategies, the "Grounding + Taxonomy" strategy has the highest believability (predicted real: 0.20), suggesting that data aided by a taxonomy picks up fewer artifacts. Unsurprisingly, the "Simple" strategy performs the worst (predicted real: 0.04), which is aligned with our qualitative analysis of the data, where we noted that most data points contain superficial sarcastic indicators like "Oh", "Wow", and question marks ("?"). Last, grounded approaches perform better than the simple strategy (predicted real: 0.13 for "Grounding" and 0.15 for "Grounding (rewrite)").
**Key takeaways.** Through the process of generating synthetic data, we drew takeaways that can be beneficial for future studies using synthetically generated data for either augmentation or as the entire dataset. We list these findings here:
* When producing synthetic data, it is necessary to generate several sentences for each individual real sample. Typically, the later generations capture more interesting forms of sarcasm than the initial generation and cover a broader range of topics.
* Grounding data is a key aspect of generating synthetic data. Without grounding, the model tends to generate texts that are specialized in terms of the topics discussed and the constructs used.
* Taxonomy creation can be useful for making the data appear real. However, it performs worse than grounding at staying true to the underlying construct. One potential reason for this is that we assume a uniform distribution over subvariants of sarcasm. This assumption is unlikely to hold in practice--in real life, there are a few types that represent most forms of sarcasm, with the rest representing a long tail. Applying a prior over the types of sarcasm we are likely to encounter could lead to more realistic generations.
* Filtering works poorly. This result is surprising given its prevalence in other data augmentation studies. This may be improved through a better classifier.
* A small-capacity model like E5 may not be capable of capturing complex linguistic features like sarcasm. It may be worthwhile to fine-tune a larger model like Flan-T5.
## 5 Discussion
### Summary of findings
Investigating the ability of LLMs to generate _faithful_ synthetic data, we find that simple prompting
\begin{table}
\begin{tabular}{l|c|c|c} \hline \hline \multirow{2}{*}{Prompting Strategy} & \multicolumn{3}{c}{_Sarcasm_} \\ & Accuracy & Macro-F1 & Believability \\ \hline Simple & **0.71** & 0.48 & 0.04 \\ \hline Grounding & 0.67 & **0.55** & 0.13 \\ \hline Grounding (rewrite) & 0.70 & **0.55** & 0.15 \\ \hline Grounding + Taxonomy & 0.67 & 0.51 & 0.20 \\ \hline Grounding + Filtering & 0.27 & 0.26 & **0.56** \\ \hline \hline Groundtruth annotations & 0.72 & 0.60 & 0.95 \\ \hline All non-sarcastic & 0.77 & 0.43 & — \\ \hline Zero-shot ChatGPT & 0.60 & 0.59 & — \\ \hline \hline \end{tabular}
\end{table}
Table 2: For different prompting strategies (rows 2 to 6) and baselines (rows 7 to 10), we show the accuracy, macro-F1 score, and believability in a held-out test set.
strategies result in data that lacks diversity and differs stylistically from real-world data. To address these issues, we propose a suite of improved prompting strategies, namely, 'grounding,' 'filtering,' and 'taxonomy-based generation,' which we qualitatively find to generate samples that are more faithful to the real-world data distribution. Further, comparing the performance of classifiers trained with synthetic data generated using our proposed strategies on a downstream task of sarcasm detection, we find that 'grounding' resulted in the highest improvement, thereby indicating the importance of closely capturing topical diversity for the considered tasks.
### Implications
We argue that the implications of the aforementioned findings are three-fold.
First, our results suggest that synthetic data generation can be a resource-friendly alternative to human annotation, achieving results only five macro-F1 points worse than zero-shot annotation and a model trained on the real data. With only a few examples of data of the kind researchers wish to study (e.g., sarcastic tweets), they could bootstrap a synthetic dataset that can be used to train relatively simple, yet effective and easily deployable models. This strategy could also alleviate privacy concerns associated with using real-world data, allowing the study of sensitive topics without relying on collecting data from contexts where personally identifiable information is present (e.g., social media).
Second, synthetic data generation could be a helpful strategy for training future (potentially smaller) language models. Previous work has shown that smaller language models fine-tuned using well-curated samples from a constrained domain can outperform larger models on specific tasks (Zhou et al., 2023), and with our prompting strategies, this fine-tuning process could be bootstrapped with another language model, i.e., one could automatically generate this well-curated sample. More broadly, as language models scale up, and organizations require more and more data to train these models, synthetically generated data may be needed to continue the improvement of these models. Our work could be seen as a stepping stone for more research in this direction.
Finally, we hope that the proposed strategies enable more fine-grained analyses in fields like Computational Social Science that leverage NLP to study human-made constructs. Constructs like sarcasm are not black and white and reflect the subtle complexities of human language; sarcasm can really take many sub-forms like hyperbole, satire, irony, understatements, rhetorical questions, juxtaposition, and sardonic humor. Building a model to detect these classes of sarcasm can be intractable. Do we search for distinct datasets for each of these types of sarcasm? Do we annotate a large corpus of sarcastic texts to fit into this taxonomy? It's not entirely clear. However, this could be done with the taxonomy-based prompting strategy proposed in this work.
### Limitations and Future Work
Owing to its superior efficiency and cost-effectiveness, we used ChatGPT for generating synthetic data in this work. However, in the future we aim to repeat all the analyses using data generated via GPT-4, which has been shown to achieve substantial improvements over ChatGPT (Bubeck et al., 2023). In the same vein, we would like to fine-tune a larger language model, on the order of hundreds of millions of parameters, for the downstream task of sarcasm detection. This is primarily because sarcasm detection is a difficult task, and therefore could benefit from the abilities that only emerge in LLMs at scale (Wei et al., 2022).
Next, we would also like to extend our analyses to diverse NLP tasks. While the present work showcases the ability of our proposed prompting strategies to generate more faithful synthetic data using the task of sarcasm detection, our strategies are general and can be applied to other NLP tasks.
From an evaluation standpoint, we use the downstream performance of classifiers trained on the generated synthetic data to quantitatively assess the quality of generations. However, this is inherently a proxy for evaluating data faithfulness. In the future, we would like to perform a more direct evaluation, such as conducting a Turing test, by asking humans to distinguish between real and synthetically generated data.
Finally, we intend to perform extensive tuning of different components of our pipeline. For example, while we fix the number of re-writes to 10, it would be fruitful to identify the optimal value of the number of re-writes as well as understand its relationship with the complexity of the underlying task. Similarly, following the success of self-refinement (Madaan et al., 2023), we would like to
explore the use of iterative refinement strategies to discriminate between real vs. synthetic data, which is currently performed in a single filtering step.
## Ethical considerations
All the datasets and resources used in this work are publicly available and do not contain any private or sensitive information about individuals. Moreover, all the findings are based on analyses conducted at an aggregate-level, and thus, no individual-level inferences can be drawn. However, human-like synthetic data can be used maliciously. We acknowledge this concern.
|
2303.07913
|
Rigorous derivation of Michaelis-Menten kinetics in the presence of
diffusion
|
Reactions with enzymes are critical in biochemistry, where the enzymes act as
catalysts in the process. One of the most used mechanisms for modeling
enzyme-catalyzed reactions is the Michaelis-Menten (MM) kinetic. At the ODE
level, i.e. when concentrations are only time-dependent, this kinetic can be
rigorously derived from mass action law using quasi-steady-state approximation.
This issue in the PDE setting, for instance when molecular diffusion is taken
into account, is considerably more challenging and only formal derivations have
been established. In this paper, we prove this derivation rigorously and obtain
MM kinetic in the presence of spatial diffusion. In particular, we show that,
in general, the reduced problem is a cross-diffusion-reaction system. Our proof
is based on an improved duality method, heat regularisation and a suitable
modified energy function. To the best of our knowledge, this work provides the
first rigorous derivation of MM kinetic from mass action kinetic in the PDE
setting.
|
Bao Quoc Tang, Bao-Ngoc Tran
|
2023-03-14T13:55:56Z
|
http://arxiv.org/abs/2303.07913v1
|
# Rigorous derivation of Michaelis-Menten kinetics in the presence of diffusion
###### Abstract
Reactions with enzymes are critical in biochemistry, where the enzymes act as catalysts in the process. One of the most used mechanisms for modeling enzyme-catalyzed reactions is the Michaelis-Menten (MM) kinetic. At the ODE level, i.e. when concentrations are only time-dependent, this kinetic can be rigorously derived from the mass action law using a quasi-steady-state approximation. This issue in the PDE setting, for instance when molecular diffusion is taken into account, is considerably more challenging and only formal derivations have been established. In this paper, we prove this derivation rigorously and obtain the MM kinetic in the presence of spatial diffusion. In particular, we show that, in general, the reduced problem is a cross-diffusion-reaction system. Our proof is based on an improved duality method, heat regularisation and a suitable modified energy function. To the best of our knowledge, this work provides the first rigorous derivation of MM kinetic from mass action kinetic in the PDE setting.
_Key words--_ Enzyme reactions; Michaelis-Menten kinetics; Improved duality method; Modified energy method; Cross-diffusion
###### Contents
* 1 Introduction
* 1.1 Problem setting
* 1.2 State of the art
* 1.3 Main results and key ideas
* 2 Uniform estimates
* 2.1 Preliminaries
* 2.2 Heat regularisation and improved duality method
* 2.3 A modified energy function
* 2.4 Uniform-in-\(\varepsilon\) bounds
* 3 Proofs
* 3.1 Proof of Theorem 1.1
* 3.2 Proof of Theorem 1.3
* 3.3 Proof of Theorem 1.5
* 3.4 Proof of Theorem 1.6
## 1 Introduction
### Problem setting
Enzyme-catalyzed reactions are critical in biochemistry, where the reaction rates can be accelerated by well over a million-fold compared with non-catalyzed reactions, while the enzymes are neither consumed nor transformed into another substance. These reactions are typified by the combination of an enzyme \(E\) with its substrate molecule \(S\) to form an enzyme-substrate complex \(C\), which is a necessary step in the enzyme mechanism and the key to studying kinetic behaviors; see Henri [16, 17]. Diagrammatically, the enzyme reaction mechanism reads
\[E+S\xrightleftharpoons[l_{1}]{k_{1}}C\xrightleftharpoons[l_{2}]{k_{2}}E+P,\] ( \[*\] )
where the substrate \(S\) binds the enzyme-catalyst \(E\) to form the complex \(C\) that can be transformed into the enzyme and a product \(P\). The numbers \(k_{1},l_{1},k_{2}\in(0,\infty)\) and \(l_{2}\in[0,\infty)\) are reaction rate constants. If \(l_{2}=0\), then (\(*\)) is called irreversible, i.e., the enzyme and product cannot react to synthesize the complex. In the reversible case, \(l_{2}>0\), there is a backward reaction that forms the complex from the enzyme and product. Assume for a moment that the reaction is irreversible, i.e. \(l_{2}=0\), and that the concentrations of enzyme (\(e\)), substrate (\(s\)), complex (\(c\)), and product (\(p\)) depend only on time. By applying the mass action law, one gets the differential system with mass action kinetic
\[\begin{cases}s^{\prime}=-k_{1}se+l_{1}c,\\ e^{\prime}=-k_{1}se+(l_{1}+k_{2})c,\\ c^{\prime}=k_{1}se-(l_{1}+k_{2})c,\\ p^{\prime}=k_{2}c\end{cases} \tag{1}\]
with initial data \(z(0)=z_{0},z\in\{s,e,c,p\}\). From the conservation laws \((e+c)^{\prime}=0\) and \((s+c+p)^{\prime}=0\), it leads to the reduced system
\[\begin{cases}s^{\prime}=-k_{1}s(e_{0}+c_{0}-c)+l_{1}c\\ c^{\prime}=k_{1}s(e_{0}+c_{0}-c)-(l_{1}+k_{2})c.\end{cases}\]
The quasi-steady-state-approximation hypothesis assumes that the complex concentration \(c\) reaches its equilibrium almost instantly, that is, we have \(0\approx c^{\prime}=k_{1}s(e_{0}+c_{0}-c)-(l_{1}+k_{2})c\), which leads to the evolution equation of the substrate with the famous _Michaelis-Menten kinetic_ (or briefly, MM kinetic)
\[s^{\prime}=-\frac{k_{1}k_{2}(e_{0}+c_{0})s}{k_{1}s+l_{1}+k_{2}}. \tag{2}\]
This type of reaction rate has become one of the most, if not the most, widely used kinetics for enzymatic, or more generally catalytic, reactions in the literature. The aforementioned derivation of the MM kinetic from mass action kinetics can be rigorously justified (see e.g. [15, 20]) under different conditions, among which the assumption that the initial enzyme concentration is sufficiently small, i.e. \(e_{0}/s_{0}\ll 1\), is of particular importance since it is biologically relevant.
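As a numerical sanity check of this reduction, the following sketch integrates the mass action system (1) and the MM-reduced equation (2) and compares the two substrate trajectories; the rate constants and initial data are illustrative assumptions with a small ratio \(e_{0}/s_{0}\).

```python
import numpy as np
from scipy.integrate import solve_ivp

k1, l1, k2 = 1.0, 0.5, 1.0            # illustrative rate constants
s0, e0, c0, p0 = 1.0, 0.01, 0.0, 0.0  # small enzyme-to-substrate ratio e0/s0

def mass_action(t, y):
    s, e, c, p = y
    return [-k1*s*e + l1*c,
            -k1*s*e + (l1 + k2)*c,
             k1*s*e - (l1 + k2)*c,
             k2*c]

def michaelis_menten(t, y):
    s = y[0]
    return [-k1*k2*(e0 + c0)*s / (k1*s + l1 + k2)]

t_eval = np.linspace(0.0, 200.0, 400)
full = solve_ivp(mass_action, (0.0, 200.0), [s0, e0, c0, p0], t_eval=t_eval)
reduced = solve_ivp(michaelis_menten, (0.0, 200.0), [s0], t_eval=t_eval)
print("max |s_full - s_MM| =", np.max(np.abs(full.y[0] - reduced.y[0])))
```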
In many contexts, e.g. experiments in vivo, chemical concentrations are spatially inhomogeneous as diffusion is hindered in the gel-like cytosol, and the cytosolic composition varies in different regions of the cell. In such scenarios, concentrations of chemicals are functions of both temporal and spatial variables, and it is natural to consider reaction-diffusion systems. Assume that the enzyme-catalysed reaction (\(*\)) occurs in a bounded vessel \(\Omega\subset\mathbb{R}^{N}\), \(N\geq 1\), with smooth boundary \(\partial\Omega\), and that the diffusion process follows Fick's second law. Let \(\delta_{S},\delta_{E},\delta_{C},\delta_{P}>0\) be the molecular diffusion rates of \(S,E,C,P\), and \(n_{S},n_{E},n_{C},n_{P}\) be the concentrations of \(S,E,C,P\), which are functions of \((x,\tau)\in\Omega\times[0,\infty)\). Thanks to the law of mass action, the governing equations for (\(*\)) are1
Footnote 1: The reaction-diffusion system (3) can be derived from reactive Boltzmann system describing the enzyme reaction (\(*\)), see [1].
\[\left\{\begin{array}{lclclclcl}\partial_{\tau}n_{S}&-&\delta_{S}\Delta n_{S }&=&-\,k_{1}n_{S}n_{E}&+&l_{1}n_{C}&&\text{in }Q_{T},\\ \partial_{\tau}n_{E}&-&\delta_{E}\Delta n_{E}&=&-\,k_{1}n_{S}n_{E}&+&(l_{1}+k _{2})n_{C}&-&l_{2}n_{E}n_{P}&\text{in }Q_{T},\\ \partial_{\tau}n_{C}&-&\delta_{C}\Delta n_{C}&=&+\,k_{1}n_{S}n_{E}&-&(l_{1}+k _{2})n_{C}&+&l_{2}n_{E}n_{P}&\text{in }Q_{T},\\ \partial_{\tau}n_{P}&-&\delta_{P}\Delta n_{P}&=&&+&k_{2}n_{C}&-&l_{2}n_{E}n_{P} &\text{in }Q_{T}.\end{array}\right. \tag{3}\]
where \(Q_{T}:=\Omega\times(0,T)\) for \(T\in(0,\infty]\). Denote by \(\nu=\nu(x)\) the unit outer normal vector at point \(x\in\partial\Omega\). This system is subjected to the homogeneous Neumann boundary condition
\[\partial_{\nu}n_{S}=\partial_{\nu}n_{E}=\partial_{\nu}n_{C}=\partial_{\nu}n_{ P}=0\quad\text{on }\partial\Omega\times(0,T), \tag{4}\]
where \(\partial_{\nu}f=\nabla f\cdot\nu\) is the directional derivative. The initial states are given by
\[\big{(}n_{S}(x,0),n_{E}(x,0),n_{C}(x,0),n_{P}(x,0)\big{)}=\big{(}n_{SI}(x),n_{ EI}(x),n_{CI}(x),n_{PI}(x)\big{)},\quad x\in\Omega. \tag{5}\]
Naturally, one would expect that the MM kinetic is also a suitable reaction rate for the substrate concentration, and in fact it has been used extensively in the literature. Unlike the case of the differential system (1), the derivation of the MM kinetic from the mass action law through the reaction-diffusion system (3) is more challenging. For instance, the conservation laws are no longer pointwise (unless neither the enzyme \(E\) nor the complex \(C\) diffuses) and a direct reduction using conservation laws is not possible. One can use asymptotic analysis to at least formally derive the MM kinetic from (3). More precisely, we consider the following small parameter
\[\varepsilon:=\frac{\int_{\Omega}n_{EI}(x)dx}{\int_{\Omega}n_{SI}(x)dx}\ll 1 \tag{6}\]
which mimics the basic quasi-steady state or pseudo-steady state hypothesis that the initial concentration of enzyme is much less than the initial substrate concentration [20, 21]. It
is remarked that this is considerably more general than imposing
\[\frac{n_{EI}(x)}{n_{SI}(x)}\ll 1\quad\text{or}\quad\frac{n_{EI}(x)+n_{CI}(x)}{n_{ SI}(x)}\ll 1\quad\text{ for all }x\in\Omega\]
since (6) allows \(n_{SI}\) to be zero on some set of non-zero measure or \(n_{EI}\) to have spikes. Following the rescaling in [18, Section 3],
\[\tau=\frac{t}{\varepsilon},\quad d_{z}=\frac{\delta_{z}}{ \varepsilon},\quad\widetilde{n}_{E}(x,t)=\frac{n_{E}(x,\tau)}{\varepsilon}, \quad\widetilde{n}_{C}(x,t)=\frac{n_{C}(x,\tau)}{\varepsilon}, \tag{7}\] \[\widetilde{n}_{S}(x,t)=n_{S}(x,\tau),\quad\widetilde{n}_{P}(x,t) =n_{P}(x,\tau),\quad z\in\{S,E,C,P\} \tag{8}\]
and denoting \(u^{\varepsilon}=(u^{\varepsilon}_{j})_{1\leq j\leq 4}\) where
\[u^{\varepsilon}_{1}=\widetilde{n}_{S},\ u^{\varepsilon}_{2}= \widetilde{n}_{E},\ u^{\varepsilon}_{3}=\widetilde{n}_{C},\ u^{\varepsilon}_{4}= \widetilde{n}_{P},\quad\text{ and }\quad(d^{\varepsilon}_{j})_{1\leq j\leq 4}=(d_{S},d_{ E},d_{C},d_{P}),\]
we obtain from (3)-(5) the following \(\varepsilon\)-dependent reaction-diffusion system
\[\left\{\begin{array}{lll}\partial_{t}u^{\varepsilon}_{1}-d^{ \varepsilon}_{1}\Delta u^{\varepsilon}_{1}&=&-\,k_{1}u^{\varepsilon}_{1}u^{ \varepsilon}_{2}+l_{1}u^{\varepsilon}_{3}&\text{in }Q_{T},\\ \partial_{t}u^{\varepsilon}_{2}-d^{\varepsilon}_{2}\Delta u^{\varepsilon}_{2} &=&-\,\frac{1}{\varepsilon}\Big{(}k_{1}u^{\varepsilon}_{1}u^{ \varepsilon}_{2}-(k_{2}+l_{1})u^{\varepsilon}_{3}+l_{2}u^{\varepsilon}_{2}u^{ \varepsilon}_{4}\Big{)}&\text{in }Q_{T},\\ \partial_{t}u^{\varepsilon}_{3}-d^{\varepsilon}_{3}\Delta u^{\varepsilon}_{3} &=&+\,\frac{1}{\varepsilon}\Big{(}k_{1}u^{\varepsilon}_{1}u^{ \varepsilon}_{2}-(k_{2}+l_{1})u^{\varepsilon}_{3}+l_{2}u^{\varepsilon}_{2}u^{ \varepsilon}_{4}\Big{)}&\text{in }Q_{T},\\ \partial_{t}u^{\varepsilon}_{4}-d^{\varepsilon}_{4}\Delta u^{\varepsilon}_{4} &=&-\,l_{2}u^{\varepsilon}_{2}u^{\varepsilon}_{4}+k_{2}u^{\varepsilon}_{3}& \text{in }Q_{T},\\ \partial_{\nu}u^{\varepsilon}&=&0&\text{on }\partial\Omega\times(0,T),\\ u^{\varepsilon}(0)&=&u^{\varepsilon}_{0}&\text{in }\Omega,\end{array}\right. \tag{9}\]
where \(\partial_{\nu}u^{\varepsilon}\) is shorthand for \((\partial_{\nu}u^{\varepsilon}_{j})_{1\leq j\leq 4}\). Throughout this paper, without loss of generality, we assume that the initial data \(u^{\varepsilon}_{0}\) does not depend on \(\varepsilon\), and consequently we will remove the superscript. Moreover, we consider the case of _slow diffusion_ \(\delta_{z}=O(\varepsilon)\), \(z\in\{S,E,C,P\}\). This is achieved by assuming that the diffusion rates \(d^{\varepsilon}_{j}\) are convergent, i.e.,
\[\lim_{\varepsilon\to 0^{+}}d^{\varepsilon}_{j}=d_{j}\in(0,\infty)^{4},\,j=1, \ldots,4, \tag{10}\]
which we impose _throughout this paper_. From the second and third equations of (9), we expect, at least formally, that when \(\varepsilon\to 0\),
\[k_{1}u^{\varepsilon}_{1}u^{\varepsilon}_{2}-(k_{2}+l_{1})u^{\varepsilon}_{3}+ l_{2}u^{\varepsilon}_{2}u^{\varepsilon}_{4}\to 0=k_{1}u_{1}u_{2}-(k_{2}+l_{1})u_{3}+l_{2}u_{ 2}u_{4},\]
where it is implicitly assumed that \(u^{\varepsilon}_{j}\to u_{j}\), \(j\in\{1,\ldots,4\}\). It follows that
\[u_{3}=\frac{k_{1}u_{1}+l_{2}u_{4}}{k_{1}u_{1}+l_{2}u_{4}+k_{2}+l_{1}}(u_{2}+u_ {3}). \tag{11}\]
By summing the second and third equations of (9) and letting \(\varepsilon\to 0\), we formally have
\[\partial_{t}(u_{2}+u_{3})-d_{2}\Delta(u_{2}+u_{3}) =(d_{3}-d_{2})\Delta u_{3}\] \[=(d_{3}-d_{2})\Delta\left[\frac{k_{1}u_{1}+l_{2}u_{4}}{k_{1}u_{1}+ l_{2}u_{4}+k_{2}+l_{1}}(u_{2}+u_{3})\right].\]
Denoting \(v=u_{2}+u_{3}\), the system (9) in the limit \(\varepsilon\to 0\) formally reduces to the following _cross-diffusion-reaction system_
\[\begin{cases}\partial_{t}u_{1}-d_{1}\Delta u_{1}=-\frac{(k_{1}k_{2}u_{1}-l_{1} l_{2}u_{4})v}{k_{1}u_{1}+l_{2}u_{4}+k_{2}+l_{1}},\\ \partial_{t}v-d_{2}\Delta v=(d_{3}-d_{2})\Delta\left(\frac{(k_{1}u_{1}+l_{2}u_ {4})v}{k_{1}u_{1}+l_{2}u_{4}+k_{2}+l_{1}}\right),\\ \partial_{t}u_{4}-d_{4}\Delta u_{4}=+\frac{(k_{1}k_{2}u_{1}-l_{1}l_{2}u_{4})v }{k_{1}u_{1}+l_{2}u_{4}+k_{2}+l_{1}},\end{cases} \tag{12}\]
subject to homogeneous Neumann boundary conditions and initial data \(u_{1}(0)=u_{10}\), \(v(0)=u_{20}+u_{30}\), and \(u_{4}(0)=u_{40}\). Considering the case of the irreversible enzyme reaction in \((*)\), i.e. \(l_{2}=0\), with \(d_{2}=d_{3}\), we see that \(v\) can be solved independently from \(\partial_{t}v-d_{2}\Delta v=0\), and can therefore be considered known in the equation for the substrate \(u_{1}\), which now reads
\[\partial_{t}u_{1}-d_{1}\Delta u_{1}=\frac{-k_{1}k_{2}u_{1}v}{k_{1}u_{1}+k_{2}+ l_{1}}.\]
This is precisely the well-known _MM kinetic_ which has been used frequently in the literature. System (12) indicates that, in general, one should also take into account the evolution of \(v\), which features cross-diffusion. This formal derivation and related versions were given in e.g. [18, 19], but to the best of our knowledge, no rigorous proof has been given. Our present paper shows, therefore, for the first time, that the reduction from (9) to (12) is rigorous rather than just a formal derivation.
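A hedged finite-difference sketch of this reduced model in one space dimension (irreversible case \(l_{2}=0\) with \(d_{2}=d_{3}\)) is given below; the coefficients, grid, and initial profiles are illustrative assumptions, with \(v\) solving a pure heat equation and entering the MM-type reaction term for the substrate \(u_{1}\).

```python
import numpy as np

k1, l1, k2 = 1.0, 0.5, 1.0   # illustrative rate constants
d1, d2 = 0.5, 0.5            # substrate and enzyme+complex diffusion rates
n, dt, steps = 50, 1e-4, 5000
x = np.linspace(0.0, 1.0, n)
dx = x[1] - x[0]

u1 = 1.0 + 0.2*np.cos(np.pi*x)   # substrate, spatially inhomogeneous start
v = 0.5*np.ones(n)               # v = enzyme + complex

def laplacian(w):
    # second difference with homogeneous Neumann (mirror) boundary conditions
    wp = np.concatenate(([w[1]], w, [w[-2]]))
    return (wp[2:] - 2.0*wp[1:-1] + wp[:-2]) / dx**2

for _ in range(steps):
    reaction = k1*k2*u1*v / (k1*u1 + k2 + l1)
    u1 = u1 + dt*(d1*laplacian(u1) - reaction)
    v = v + dt*d2*laplacian(v)

print("substrate range at final time:", u1.min(), u1.max())
```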
### State of the art
The rigorous derivation of the MM kinetic in the spatially homogeneous setting, i.e. for the differential system (1), has been extensively investigated in the literature. Different conditions have been proposed to validate the MM kinetic, for instance, when the initial enzyme concentration is small, or when the fast reaction rate constants are large; see e.g. [20]. It is also worth noting that this research direction belongs to the well-developed theory of fast-slow systems or multiple time scale dynamics (see e.g. [19]).
The singular limit as \(\varepsilon\to 0\) of (9) falls into the class of fast reaction limit problems for PDE, which have caught a lot of attention in the past decades. Studies in this direction go back to the works of Evans [17] and Martin [21] in the eighties, where the former showed convergence to a nonlinear diffusion problem while the latter proved convergence to a Stefan free boundary type problem. Already these two works suggested interesting mathematical structures as well as the complexity of fast reaction limits of PDE. Indeed, extensive studies on the subject showed that fast reaction limits lead to many different and interesting limiting systems, ranging from nonlinear diffusion equations [1, 16], Stefan free boundary problems [13, 21], cross-diffusion systems [1, 1, 2, 3], to a new derivation of the classical dynamical boundary conditions [1], various behaviour of moving interfaces [17], fractional kinetics [11, 12], or systems involving Young measures [13]. On the one hand, this variety of limiting dynamics shows the close connection of fast reaction limits of (bio-)chemical models to other phenomena of dynamical systems depending on different scales and situations. For instance, the classical dynamical boundary condition for parabolic equations, which has its root in modelling heat conduction in solids (when a solid is in contact with a well-stirred fluid at its surface), can be rigorously derived as a limit of a volume-surface reaction-diffusion system in which the reaction rate between the volume and surface concentrations tends to infinity [1]. Another example is the famous SKT cross-diffusion system, named after Shigesada, Kawasaki, and Teramoto [16], which can be derived as a formal limit of a reaction-diffusion system [14]. On the other hand, the analysis of reduced models can benefit from viewing them as limits of models which possess useful structures. For instance, in [1] a study of a fast reaction limit leads to a reduced algebraic-cross-diffusion system whose analysis seems impenetrable at first glance, but becomes feasible thanks to the entropic structure of the original system, which is propagated via the fast reaction limit.
Our present paper contributes to this literature by showing that the classical MM kinetics for enzyme reaction in the presence of diffusion can also be rigorously derived from mass action kinetics by studying a fast reaction limit type problem using a suitable rescaling. We expect that our results have considerable applications, especially in deriving MM kinetics for (bio-)chemical catalytic reactions.
### Main results and key ideas
From the reduced system (12), it is clear that the cases \(d_{2}=d_{3}\) and \(d_{2}\neq d_{3}\) lead to different limit systems. These also introduce different difficulties when showing the rigorous derivation of the MM kinetics. In the following, we write \(L^{p+}\) and \(W^{k,p+}\) to indicate \(L^{p+\delta}\) or \(W^{k,p+\delta}\) for some \(\delta>0\).
**Theorem 1.1** (The case \(d_{2}=d_{3}\)).: _Assume \(d_{2}=d_{3}\). Consider componentwise non-negative initial data \(u_{0}\in W^{2,q_{0}+}(\Omega)\times L^{q_{0}+}(\Omega)^{2}\times W^{2,q_{0}+} (\Omega)\) for \(q_{0}\geq\max\{N+2,4\}\), and let \(u^{\varepsilon}\) be the classical solution to (9) for \(\varepsilon>0\). Then we have, as \(\varepsilon\to 0\),_
\[(u_{1}^{\varepsilon},u_{2}^{\varepsilon},u_{3}^{\varepsilon},u_{4}^{ \varepsilon})\longrightarrow(u_{1},u_{2},u_{3},u_{4})\quad\text{ in }\quad L^{\infty}(Q_{T})\times L^{q_{0}+}(Q_{T})\times L^{q_{0}+}(Q_{T}) \times L^{\infty}(Q_{T}),\]
_where \((u_{1},v,u_{4})\) with \(v=u_{2}+u_{3}\) is the bounded weak solution to (12) for \(d_{2}=d_{3}\). Moreover, we have the following convergence of the critical manifold_
\[\left\|u_{3}^{\varepsilon}-\frac{k_{1}u_{1}^{\varepsilon}+l_{2}u_{4}^{ \varepsilon}}{k_{1}u_{1}^{\varepsilon}+l_{2}u_{4}^{\varepsilon}+k_{2}+l_{1}} v^{\varepsilon}\right\|_{L^{2}(Q_{T})}=O\left(\varepsilon^{1/2}\right)\quad\text{as} \quad\varepsilon\to 0. \tag{13}\]
**Remark 1.2**.:
* _Due to the rescaling (_8_), the assumption_ \(u_{20},u_{30}\in L^{q_{0}+}(\Omega)\) _in Theorem_ 1.1 _is equivalent to_ \(\left\|n_{EI}\right\|_{L^{q_{0}+}(\Omega)}=O(\varepsilon)=\left\|n_{CI}\right\| _{L^{q_{0}+}(\Omega)}\)_. This is somewhat stronger than (_6_), where only the smallness in_ \(L^{1}(\Omega)\) _is assumed. Nevertheless, since_ \(q_{0}<+\infty\)_, the assumption in Theorem_
1.1 still allows initial enzyme (and complex) to have spikes, which is biologically relevant (see [13])._
* _The convergence of critical manifold in (_13_) can be shown in a better norm, namely_ \(L^{p}(Q_{T})\) _for_ \(p>2\)_, with a price of slower convergence order._
Thanks to \(d_{2}=d_{3}\), which means that \(\lim_{\varepsilon\to 0}|d_{2}^{\varepsilon}-d_{3}^{\varepsilon}|=0\), and assumption (10), we can apply the improved duality lemma to see that \(\{u_{2}^{\varepsilon}\}_{\varepsilon>0}\) and \(\{u_{3}^{\varepsilon}\}_{\varepsilon>0}\) are bounded in \(L^{q}(Q_{T})\) for any \(1\leq q<\infty\). From this, we can utilise the equations of \(u_{1}^{\varepsilon}\) and \(u_{4}^{\varepsilon}\) to show that \(\{u_{1}^{\varepsilon}\}_{\varepsilon>0}\) and \(\{u_{4}^{\varepsilon}\}_{\varepsilon>0}\) are relatively compact in \(L^{\infty}(Q_{T})\). Moreover, it can be shown that \(\{\nabla u_{j}^{\varepsilon}\}\), \(j\in\{1,4\}\) are also bounded uniformly in \(L^{\infty}(Q_{T})\). In order to obtain the strong convergence of \(u_{2}^{\varepsilon}\) and \(u_{3}^{\varepsilon}\), we use an energy function of the form
\[\mathcal{H}^{\varepsilon}(t):=\int_{\Omega}\left((k_{1}u_{1}^{\varepsilon}+l_ {2}u_{4}^{\varepsilon})^{p-1}(u_{2}^{\varepsilon})^{p}+(k_{2}+l_{1})^{p-1}(u _{3}^{\varepsilon})^{p}\right) \tag{14}\]
for some \(p\geq 2\). The essential difficulty in our problem, compared to that of [13, 1], is that no positive lower bounds for \(u_{1}^{\varepsilon}\) or \(u_{4}^{\varepsilon}\) are available. We therefore exploit the gradient estimates of \(u_{1}^{\varepsilon},u_{4}^{\varepsilon}\) and the \(L^{q}(Q_{T})\)-estimates of \(u_{2}^{\varepsilon},u_{3}^{\varepsilon}\), together with a uniform bound of \(\int_{0}^{T}\int_{\Omega}|\nabla u_{j}^{\varepsilon}|^{2}/(u_{j}^{\varepsilon})^{1+\sigma}\), \(j\in\{1,4\}\), for any \(\sigma\in[0,1)\), to obtain strong convergence of the critical manifold (13) and uniform boundedness of \(\{\nabla u_{3}^{\varepsilon}\}_{\varepsilon>0}\) in \(L^{2}(Q_{T})\). These firstly lead to the strong convergence \(u_{2}^{\varepsilon}+u_{3}^{\varepsilon}\to u_{2}+u_{3}\), and consequently \(u_{2}^{\varepsilon}\to u_{2}\) and \(u_{3}^{\varepsilon}\to u_{3}\) by combining with the strong convergence of the critical manifold, where \((u_{1},v=u_{2}+u_{3},u_{4})\) is a weak solution to the reduced system (12). Furthermore, since \(d_{2}=d_{3}\), the limit system has a unique bounded weak solution, which implies that the whole sequence \(\{u_{j}^{\varepsilon}\}_{\varepsilon>0}\), \(j=1,\ldots,4\), is convergent as \(\varepsilon\to 0\) rather than just a subsequence.
**Theorem 1.3** (The case \(d_{2}\neq d_{3}\), strong convergence of solutions).: _Assume (10) and \(d_{2}\neq d_{3}\). Assume additionally that there exists_
\[p_{0}>\begin{cases}4&\text{ if }N=1,2,\\ \dfrac{6(N+2)}{N+4}&\text{ if }3\leq N\leq 8,\\ \dfrac{8(N+2)}{N+8}&\text{ if }N\geq 9,\end{cases} \tag{15}\]
_such that_
\[\dfrac{|d_{2}-d_{3}|}{d_{2}+d_{3}}<\dfrac{1}{C_{p_{0}^{\prime}}^{\mathsf{MR}}}\quad\text{ with }\quad p_{0}^{\prime}=\dfrac{p_{0}}{p_{0}-1} \tag{16}\]
_where \(C_{p_{0}^{\prime}}^{\mathsf{MR}}\) is the optimal constant in \(L^{p}\)-maximal regularity of parabolic equations (see Lemma 2.4). Consider componentwise non-negative initial data \(u_{0}\in W^{2,q_{0}+}(\Omega)\times L^{q_{0}+}(\Omega)^{2}\times W^{2,q_{0}+} (\Omega)\) for \(q_{0}:=\max\{N,p_{0},(N+2)/2\}\), and let \(u^{\varepsilon}\) be the classical solution to (9). Then we have, up to a subsequence as \(\varepsilon\to 0\),_
\[(u_{1}^{\varepsilon},u_{2}^{\varepsilon},u_{3}^{\varepsilon},u_{4}^{ \varepsilon})\longrightarrow(u_{1},u_{2},u_{3},u_{4})\quad\text{ in }L^{p}(Q_{T})\times L^{p_{0}-}(Q_{T})^{2}\times L^{p}(Q_{T})\]
_for any \(1\leq p<p_{1}\),_
\[p_{1}=\begin{cases}\frac{(N+2)p_{0}}{N+2-2p_{0}}&\text{ if }p_{0}<(N+2)/2,\\ <\infty\text{ arbitrary }&\text{ if }p_{0}=(N+2)/2,\\ \infty&\text{ if }p_{0}>(N+2)/2,\end{cases} \tag{17}\]
_where \((u_{1},v,u_{4})\) with \(v=u_{2}+u_{3}\) is a weak solution to (12) (see Definition 2.3 (a)). We also have the convergence of the critical manifold_
\[\left\|u_{3}^{\varepsilon}-\frac{(k_{1}u_{1}^{\varepsilon}+l_{2}u_{4}^{ \varepsilon})v^{\varepsilon}}{k_{1}u_{1}^{\varepsilon}+l_{2}u_{4}^{ \varepsilon}+k_{2}+l_{1}}\right\|_{L^{2}(Q_{T})}=O\left(\varepsilon^{1/2} \right)\quad\text{as}\quad\varepsilon\to 0.\]
**Remark 1.4**.: _In Theorem 1.3, the strong convergence of \(u_{1}^{\varepsilon}\to u_{1}\) and \(u_{4}^{\varepsilon}\to u_{4}\) can be improved as_
\[(u_{1}^{\varepsilon},u_{4}^{\varepsilon})\to(u_{1},u_{4})\quad\text{ in }\quad L^{p^{*}}(Q_{T})\]
_for any \(p^{*}<+\infty\) if \(N\leq 8\), and \(p^{*}=\frac{4(N+2)}{N-4}\) if \(N\geq 9\). Similarly as Remark 1.2, the convergence of critical manifold can be obtained in a stronger norm \(L^{s}(Q_{T})\) for some \(s>2\), especially in the lower dimensions._
Theorem 1.3 considers the case \(d_{2}\neq d_{3}\), in which the limit system (12) is a cross-diffusion-reaction system, under the assumption that \(d_{2}\) and \(d_{3}\) are close enough to each other that (15) and (16) hold. This closeness condition is enough to apply the improved duality method and obtain a-priori bounds of \(u_{j}^{\varepsilon}\), \(j\in\{1,2,3,4\}\), to estimate the energy (14); these bounds are then sufficient to deduce the desired convergences in Theorem 1.3.
It can be seen from (15) that even in one dimension, conditions on \(d_{2}\) and \(d_{3}\) are still imposed, since the unconditional improved duality method only gives \(L^{2+}(Q_{T})\) estimates. It is in fact possible to weaken (15), at the price of obtaining only weak convergence for the enzyme and the complex.
**Theorem 1.5** (The case \(d_{2}\neq d_{3}\), strong convergence of critical manifold).: _Assume (10) and \(d_{2}\neq d_{3}\). If \(N\geq 3\), assume additionally that there exists_
\[p_{0}>\frac{3(N+2)}{N+4} \tag{18}\]
_such that_
\[\frac{|d_{2}-d_{3}|}{d_{2}+d_{3}}<\frac{1}{C_{p_{0}^{\prime}}^{\mathsf{MR}}}\quad\text{ with }\quad p_{0}^{\prime}=\frac{p_{0}}{p_{0}-1} \tag{19}\]
_where \(C_{p_{0}^{\prime}}^{\mathsf{MR}}\) is the optimal constant in \(L^{p}\)-maximal regularity of parabolic equations (see Lemma 2.4). Consider componentwise non-negative initial data \(u_{0}\in W^{2,q_{0}+}(\Omega)\times L^{q_{0}+}(\Omega)^{2}\times W^{2,q_{0}+}(\Omega)\) for \(q_{0}:=\max\{N,p_{0},(N+2)/2\}\), and let \(u^{\varepsilon}\) be the classical solution to (9). Then we have, up to a subsequence as \(\varepsilon\to 0\),_
\[(u_{1}^{\varepsilon},u_{4}^{\varepsilon})\longrightarrow(u_{1},u_{4})\ \text{ in }\ L^{p}(Q_{T})^{2},\quad\text{ and }\quad(u_{2}^{\varepsilon},u_{3}^{\varepsilon})\rightharpoonup(u_{2},u_{3}) \ \text{ in }\ L^{2}(Q_{T})^{2}\]
_with \(1\leq p<\overline{p}_{1}\),_
\[\overline{p}_{1}=\left\{\begin{array}{ll}+\infty&\text{ if }N=1,2,\\ \frac{3(N+2)}{N-2}&\text{ if }N\geq 3,\end{array}\right.\]
_where \((u_{1},v=u_{2}+u_{3},u_{4})\) is a very weak solution to the limit system (12) (see Definition 2.3 (b)). The following strong convergence of the critical manifold holds_
\[\left\|u_{3}^{\varepsilon}-\frac{(k_{1}u_{1}^{\varepsilon}+l_{2}u_{4}^{ \varepsilon})v^{\varepsilon}}{k_{1}u_{1}^{\varepsilon}+l_{2}u_{4}^{ \varepsilon}+k_{2}+l_{1}}\right\|_{L^{r}(Q_{T})}=O(\varepsilon^{1/6}).\]
_for \(r=4/3\) if \(N=1,2,\) and \(r=6/5\) if \(N\geq 3.\)_
Theorem 1.5 allows us to remove the closeness condition on the diffusion coefficients of \(u_{2},u_{3}\) in the case \(N=1,2\), and to weaken it from (15) to (18) in higher dimensions. Compared to Theorem 1.3, we still obtain the strong convergence of the critical manifold in Theorem 1.5, but only weak convergence of \(u_{2}^{\varepsilon},u_{3}^{\varepsilon}\). It will become evident in our proof that, under the assumptions of Theorem 1.5, it does not seem possible to exploit the energy function (14) due to the lack of suitable estimates for \(u_{j}^{\varepsilon}\), \(j=1,2,3,4\). Our idea is to consider a _modified energy function_
\[\mathcal{H}_{\alpha}^{\varepsilon}(t):=\int_{\Omega}\Big{(}(k_{1}u_{1}^{ \varepsilon}+l_{2}u_{4}^{\varepsilon}+\alpha(\varepsilon))^{p-1}(u_{2}^{ \varepsilon})^{p}+(k_{2}+l_{1})^{p-1}(u_{3}^{\varepsilon})^{p}\Big{)}, \tag{20}\]
for \(1<p<2\), where \(\alpha(\varepsilon)\) satisfies \(\lim_{\varepsilon\to 0}\alpha(\varepsilon)=0\). It turns out that for suitable \(\alpha(\varepsilon)\), we obtain the strong convergence of the critical manifold in Theorem 1.5.
Finally, without imposing additional assumptions on the limit diffusion coefficients \(d_{1},\ldots,d_{4}\), we can still show the convergence of (9) to (12) and the critical manifold in a weak sense. This is stated in our final result.
**Theorem 1.6** (The case \(d_{2}\neq d_{3}\), weak convergence of critical manifold).: _Assume (10) and \(d_{2}\neq d_{3}\). Consider componentwise non-negative initial data \(u_{0}\in L^{2}(\Omega)^{4}\), and let \(u^{\varepsilon}\) be a global weak solution to (9). Then we have, up to a subsequence as \(\varepsilon\to 0\),_
\[(u_{1}^{\varepsilon},u_{4}^{\varepsilon})\to(u_{1},u_{4})\ \ \text{in}\ \ L^{p}(Q_{T})^{2},\ \ \ \ \text{and}\ \ \ \ (u_{2}^{\varepsilon},u_{3}^{\varepsilon})\rightharpoonup(u_{2},u_{3})\ \ \text{in}\ \ L^{2}(Q_{T})^{2}\]
_with \(1\leq p<\overline{p}_{1}\) (\(\overline{p}_{1}\) is defined in Theorem 1.5), where \((u_{1},v=u_{2}+u_{3},u_{4})\) is a very weak solution to the reduced system (12). Moreover, the critical manifold converges to zero in distributional sense, i.e._
\[\lim_{\varepsilon\to 0}\left|\iint_{Q_{T}}\left[u_{3}^{\varepsilon}-\frac{(k_{1}u_ {1}^{\varepsilon}+l_{2}u_{4}^{\varepsilon})v^{\varepsilon}}{k_{1}u_{1}^{ \varepsilon}+l_{2}u_{4}^{\varepsilon}+k_{2}+l_{1}}\right]\psi dxdt\right|=0 \tag{21}\]
_for any test function \(\psi\in C_{c}^{\infty}(Q_{T})\)._
The results in our paper can be summarised in Figure 1. In this table, we write \(L^{p}:=L^{p}(Q_{T})\), \(L^{p-}=\cap_{1\leq q<p}L^{q}(Q_{T})\) and dist. sense for distributional sense.
Figure 1: Rigorous derivation of Michaelis-Menten kinetics in the presence of diffusion.
**Notation**. In this paper, we will use the following notation:
* For \(T>0\) we write \(Q_{T}=\Omega\times(0,T)\). The classical Lebesgue spaces are denoted by \(L^{p}(\Omega)\) and \(L^{p}(Q_{T})\), \(1\leq p\leq\infty\). For \(1\leq p<\infty\), we write \(u\in L^{p+}(Q_{T})\) if there exists a constant \(\gamma>0\) such that \(u\in L^{p+\gamma}(Q_{T})\).
* We denote by \(C\) a generic constant which can be different from line to line or even in the same line. This constant may depend on fixed parameters of the problem, such as the dimension \(N\), the domain \(\Omega\), the fixed time horizon \(T>0\), the limit diffusion coefficients \(d_{1},\ldots,d_{4}\) in (10), etc., but _does not depend on the parameter \(\varepsilon>0\)_. Sometimes we write \(C(\alpha,\beta,\ldots)\) to emphasise the dependence of \(C\) on the parameters \(\alpha,\beta\), etc.
**Organisation of the paper**: In the next section, we derive uniform-in-\(\varepsilon\) bounds for the solution to (9). These bounds, as mentioned earlier, are obtained by utilising the improved duality method, the heat regularisation, and a modified energy function, which are presented in the consecutive subsections. The last section is devoted to the proofs of the main theorems 1.1-1.6.
## 2 Uniform estimates
### Preliminaries
We start with the global existence for the problem (9) for each positive value of the parameter \(\varepsilon\). We are interested in classical solutions in the following sense.
**Definition 2.1**.: _Given \(0<T\leq\infty\) and \(\varepsilon>0\). A vector \(u^{\varepsilon}=(u^{\varepsilon}_{j})_{j=1,\ldots,4}\) is called a classical solution of (9) on \((0,T)\) if its components belong to \(C([0,T);L^{p}(\Omega))\cap C((\tau,T);L^{\infty}(\Omega))\cap C^{2,1}(\overline {\Omega}\times(0,T))\) for some \(p>N/2\), and for all \(T>\tau>0\), and it satisfies (9) pointwise._
**Theorem 2.2**.: _Fix \(\varepsilon>0\). Then for each non-negative initial data \(u_{0}\in W^{2,q}(\Omega)\times L^{q}(\Omega)^{2}\times W^{2,q}(\Omega)\), \(q>\max\{N,2,(N+2)/2\}\), the system (9) has a unique global classical solution \(u^{\varepsilon}\)._
Proof.: It is obvious that the nonlinearities \(f_{j}:\mathbb{R}^{4}_{+}\to\mathbb{R}\), for \(j=1,\ldots,4\), are locally Lipschitz continuous and have at most quadratic growth. Therefore, by standard fixed-point arguments, there exists a unique local solution \(u^{\varepsilon}\in C([0,T_{\max});L^{p}(\Omega))^{4}\), \(N/2<p\leq q\), on a maximal interval \([0,T_{\max})\), to the integral system
\[u^{\varepsilon}_{j}(x,t)=e^{td_{j}\Delta}u_{j0}(x)+\int_{0}^{t}e^{(t-s)d_{j} \Delta}f_{j}(u^{\varepsilon}(x,s))ds,\quad j=1,...,4, \tag{22}\]
such that
\[T_{\max}=\infty\quad\text{or}\quad\lim_{t\to T_{\max}}\|u(\cdot,t)\|_{L^{ \infty}(\Omega)^{4}}=\infty\text{ if }T_{\max}<\infty. \tag{23}\]
The solution preserves positivity since \(f_{j}\), \(j=1,...,4\), are quasi-positive, i.e. \(f_{j}(u^{\varepsilon})\geq 0\) for all \(u^{\varepsilon}\in[0,\infty)^{4}\) with \(u^{\varepsilon}_{j}=0\). Moreover, thanks to smoothing effects of the Neumann heat semigroup, \(u^{\varepsilon}_{j}(\tau)\in L^{\infty}(\Omega)\) for some \(\tau\in(0,T_{\max})\). Taking these as initial data, and using the mass control structure
\[\sum_{j=1}^{4}f_{j}(u^{\varepsilon})\leq l_{1}u^{\varepsilon}_{3}+k_{2}u^{ \varepsilon}_{3}\leq\max\{l_{1},k_{2}\}\sum_{j=1}^{4}u^{\varepsilon}_{j},\]
we can apply [13] to conclude that there exists a global classical solution to (9).
We now give a definition of weak and very weak solutions to (12).
**Definition 2.3**.: _Assume \(d_{2}\neq d_{3}\)._
1. _A triple of non-negative functions_ \((u_{1},v,u_{4})\in C([0,T];L^{2}(\Omega))^{3}\cap L^{2}(0,T;H^{1}(\Omega))^{3}\) _is called a_ _weak solution_ _to_ (_12_)_, if_ \[(\partial_{t}u_{1},\partial_{t}v,\partial_{t}u_{4})\in L^{2}(0,T;(H^{1}(\Omega ))^{\prime})\times L^{2}(0,T;(H^{2}(\Omega))^{\prime})\times L^{2}(0,T;(H^{1}( \Omega))^{\prime})\] _and for all test functions_ \(\varphi\in L^{2}(0,T;H^{1}(\Omega))\)_,_ \(\psi\in L^{2}(0,T;H^{2}(\Omega))\) _with_ \(\partial_{\nu}\psi=0\) _on_ \(\partial\Omega\times(0,T)\) _it holds_ \[\iint_{Q_{T}}\varphi\partial_{t}u_{j}+d_{j}\iint_{Q_{T}}\nabla u_{j}\cdot \nabla\varphi=(-1)^{j}\iint_{Q_{T}}\frac{(k_{1}k_{2}u_{1}-l_{1}l_{2}u_{4})v}{ k_{1}u_{1}+l_{2}u_{4}+k_{2}+l_{1}}\varphi,\quad j\in\{1,4\},\] \[\iint_{Q_{T}}\psi\partial_{t}v+d_{2}\iint_{Q_{T}}\nabla v\cdot \nabla\psi=(d_{3}-d_{2})\iint_{Q_{T}}\frac{(k_{1}u_{1}+l_{2}u_{4})v}{k_{1}u_{ 1}+l_{2}u_{4}+k_{2}+l_{1}}\Delta\psi.\]
2. _A triple of non-negative functions_ \((u_{1},v,u_{4})\) _is called a_ _very weak solution_ _to (_12_), if_ \[(u_{1},u_{4})\in L^{2}(0,T;H^{1}(\Omega)),\quad(\partial_{t}u_{1},\partial_{t }u_{4})\in L^{2}(0,T;(H^{1}(\Omega))^{\prime})^{2},\quad v\in L^{2}(Q_{T}),\] _and for all test functions_ \(\varphi\in L^{2}(0,T;H^{1}(\Omega))\)_,_ \(\psi\in C_{c}^{\infty}(Q_{T})\) _it holds_ \[\iint_{Q_{T}}\varphi\partial_{t}u_{j}+d_{j}\iint_{Q_{T}}\nabla u_{j}\cdot \nabla\varphi=(-1)^{j}\iint_{Q_{T}}\frac{(k_{1}k_{2}u_{1}-l_{1}l_{2}u_{4})v}{ k_{1}u_{1}+l_{2}u_{4}+k_{2}+l_{1}}\varphi,\quad j\in\{1,4\},\] \[-\iint_{Q_{T}}v\partial_{t}\psi-d_{2}\iint_{Q_{T}}v\Delta\psi=(d_{3}-d_{2}) \iint_{Q_{T}}\frac{(k_{1}u_{1}+l_{2}u_{4})v}{k_{1}u_{1}+l_{2}u_{4}+k_{2}+l_{1 }}\Delta\psi.\]
### Heat regularisation and improved duality method
We first state the classical regularisation by the heat operator, which involves solutions to the following inhomogeneous heat equation
\[(\mathbf{H}_{d}(u_{0},f)):\quad\begin{cases}\partial_{t}u-d\Delta u=f,&\text{ in }Q_{T},\\ \partial_{\nu}u=0,&\text{on }\partial\Omega\times(0,T),\\ u(x,0)=u_{0}(x),&\text{in }\Omega.\end{cases}\]
**Lemma 2.4** ([16], Theorem 1).: _Let \(1<q<\infty\). Assume that \(f\in L^{q}(Q_{T})\) and let \(u\) be the weak solution to problem \((\mathbf{H}_{1}(0,f))\). Then there is an optimal constant \(C_{q}^{\mathsf{MR}}\) depending only on \(q\), the dimension \(N\), and the domain \(\Omega\), such that_
\[\|\Delta u\|_{L^{q}(Q_{T})}\leq C_{q}^{\mathsf{MR}}\|f\|_{L^{q}(Q_{T})}, \tag{24}\]
_where the superscript \(\mathsf{MR}\) indicates the maximal regularity property._
**Lemma 2.5**.: _Let \(1<q<\infty\) and \(d>0\). Assume that \(f\in L^{q}(Q_{T})\), \(u_{0}\in W^{2,q}(\Omega)\), and \(u\) be the weak solution to the problem \((\mathbf{H}_{d}(u_{0},f))\). Then, for_
\[p=\left\{\begin{array}{ll}\frac{(N+2)q}{N+2-2q}&\mbox{if }q<\frac{N+2}{2}, \\ \in[1,\infty)\mbox{ arbitrary}&\mbox{if }q=\frac{N+2}{2},\end{array}\right. \mbox{ and }\begin{array}{ll}r=\left\{\begin{array}{ll}\frac{(N+2)q}{N+2-q}& \mbox{if }q<N+2,\\ \in[1,\infty)\mbox{ arbitrary}&\mbox{if }q=N+2,\\ \infty&\mbox{if }q>N+2,\end{array}\right.\end{array}\]
_there holds_
\[\|u\|_{L^{p}(Q_{T})}+\|\nabla u\|_{L^{r}(Q_{T})}+\|\partial_{t}u \|_{L^{q}(Q_{T})}+\|\Delta u\|_{L^{q}(Q_{T})}\] \[\leq C_{1}C_{q,N,\Omega,T}\|f\|_{L^{q}(Q_{T})}+C_{2}C_{q,N, \Omega,T}\|u_{0}\|_{W^{2,q}(\Omega)}, \tag{25}\]
_where \(C_{q,N,\Omega,T}\) depends only on \(q,N,\Omega,T\), continuously depends on \(T\), and remains bounded for finite values of \(T>0\), and_
\[C_{1}:=(C_{q}^{\sf MR}+1)(1/d+1+T),\quad C_{2}:=T^{1/q}(2+d+dT).\]
Proof.: By the substitutions \(s=dt\), \(\widetilde{u}(s,\cdot)=u(s/d,\cdot)\), \(\widetilde{f}(s,\cdot)=f(s/d,\cdot)\), the equation \((\mathbf{H}_{d}(u_{0},f))\) of \(u\) becomes the equation \((\mathbf{H}_{1}(u_{0},\widetilde{f}/d))\) of \(\widetilde{u}\). Let us consider the decomposition \(\widetilde{u}=\widetilde{v}+\widetilde{w}\), where \(\widetilde{v}\) and \(\widetilde{w}\) are the solutions to \((\mathbf{H}_{1}(0,\widetilde{f}/d))\) and \((\mathbf{H}_{1}(u_{0},0))\), respectively. By Lemma 2.4,
\[\|\Delta\widetilde{v}\|_{L^{q}(Q_{dT})}\leq C_{q}^{\sf MR}\|\widetilde{f}/d\|_ {L^{q}(Q_{dT})}, \tag{26}\]
On the other hand, since the heat semigroup corresponding to the homogeneous Neumann boundary condition is a contraction semigroup on \(L^{q}(\Omega)\), we have
\[\|\Delta\widetilde{w}(s)\|_{L^{q}(\Omega)}=\|\Delta e^{s\Delta}u_{0}\|_{L^{q} (\Omega)}=\|e^{s\Delta}\Delta u_{0}\|_{L^{q}(\Omega)}\leq\|\Delta u_{0}\|_{L^ {q}(\Omega)}, \tag{27}\]
for all \(s\in(0,dT)\), where we note that \(e^{s\Delta}\) and \(\Delta\) commute on \(W^{2,q}(\Omega)\) since \(\{e^{s\Delta}:s\geq 0\}\) is a continuous semigroup. Combining the estimates (26) and (27) gives
\[\|\Delta\widetilde{u}\|_{L^{q}(Q_{dT})}\leq C_{q}^{\sf MR}\|\widetilde{f}/d\|_ {L^{q}(Q_{dT})}+(dT)^{\frac{1}{q}}\|\Delta u_{0}\|_{L^{q}(\Omega)},\]
which is equivalent to
\[\|\Delta u\|_{L^{q}(Q_{T})}\leq C_{q}^{\sf MR}\|f/d\|_{L^{q}(Q_{T})}+T^{\frac{ 1}{q}}\|\Delta u_{0}\|_{L^{q}(\Omega)}.\]
and so the equation \((\mathbf{H}_{d}(u_{0},f))\) gives \(\|\partial_{t}u\|_{L^{q}(Q_{T})}\leq(C_{q}^{\sf MR}+1)\|f\|_{L^{q}(Q_{T})}+dT^{ \frac{1}{q}}\|\Delta u_{0}\|_{L^{q}(\Omega)}\). Therefore, by the Holder inequality,
\[\|u\|_{L^{q}(Q_{T})}\leq T^{\frac{1}{q}}\|u_{0}\|_{L^{q}(\Omega)}+T\|\partial _{t}u\|_{L^{q}(Q_{T})},\]
and consequently \(u\in W^{2,1}_{q}(Q_{T})\) with the estimate
\[\|u\|_{W^{2,1}_{q}(Q_{T})}\leq(C_{q}^{\sf MR}+1)(1/d+1+T)\|f\|_{L^{q}(Q_{T})}+ T^{1/q}(2+d+dT)\|u_{0}\|_{W^{2,q}(\Omega)}. \tag{28}\]
Finally, by applying interpolation inequalities with different spaces of functions (on \(\mathbb{R}^{N+1}\)) depending on \(x\) and \(t\) as in [18, Lemma 3.3], there exists a constant \(C_{q,N,\Omega,T}\) satisfying (25).
**Remark 2.6**.: _At first glance, the above lemma looks similar to [18, Corollary of Theorem 9.1]. We emphasise, however, that the estimate (25) shows how the constants depend on \(T\), and especially on \(d\). This is crucial for estimating solutions of reaction-diffusion equations with diffusion coefficients \(d_{j}^{\varepsilon}\), \(1\leq j\leq 4\), which depend on \(\varepsilon\). Estimates similar to (25) appear in [18, Corollary of Theorem 9.1] and in Cañizo-Desvillettes-Fellner [14, Lemma 3.3], where the dependence on \(T\) was stated._
We will utilise the following improved duality lemma.
**Lemma 2.7** (Improved duality estimate, [14, 20]).: _Let \(T>0\), \(1<q<\infty\), \(K\in\mathbb{R}\) and \(q^{\prime}=q/(q-1)\) be Holder conjugate exponent of \(q\). Assume that \(X_{1},\ldots,X_{m}\), \(m\geq 2\), are nonnegative, smooth functions satisfying the relation_
\[\left\{\begin{array}{lll}\partial_{t}\left(\sum_{i=1}^{m}X_{i} \right)&\leq&\Delta\left(\sum_{i=1}^{m}\kappa_{i}X_{i}\right)+K\sum_{i=1}^{m} X_{i}&\text{in }Q_{T},\\ \partial_{\nu}X_{i}&=&0&\text{on }\Gamma_{T},\\ X_{i}(0,x)&=&X_{i,0}(x)&\text{in }\Omega,\end{array}\right. \tag{29}\]
_for some constants \(\kappa_{i}>0\), \(i=1,\ldots,m\). Let \(\kappa_{\min}=\min\{\kappa_{i}\}\) and \(\kappa_{\max}=\max\{\kappa_{i}\}\). If_
\[C_{q^{\prime}}^{\text{MR}}\frac{|\kappa_{\max}-\kappa_{\min}|}{\kappa_{\max} +\kappa_{\min}}<1, \tag{30}\]
_and \(\sum_{i=1}^{m}X_{i,0}\in L^{q}(\Omega)\), then_
\[\sum_{i=1}^{m}\left\|X_{i}\right\|_{L^{q}(Q_{T})}\leq C(T)\left\|\sum_{i=1}^{m }X_{i,0}\right\|_{L^{q}(\Omega)}, \tag{31}\]
_where \(C\) depends continuously on \(T\) and on \(\kappa_{\min}\), \(\kappa_{\max}\)._
Proof.: With the change of variable \(\widetilde{X}_{i}(x,t)=e^{-Kt}X_{i}(x,t)\) we have
\[\partial_{t}\left(\sum_{i=1}^{m}\widetilde{X}_{i}\right)\leq\Delta\left(\sum_ {i=1}^{m}\kappa_{i}\widetilde{X}_{i}\right).\]
This allows us to apply the same arguments in [20, Lemma 3.9] to obtain, under the condition (30),
\[\sum_{i=1}^{m}\left\|\widetilde{X}_{i}\right\|_{L^{q}(Q_{T})}\leq C(T)\left\| \sum_{i=1}^{m}\widetilde{X}_{i}(\cdot,0)\right\|_{L^{q}(\Omega)}\]
which implies the desired estimates for \(X_{i}\) in (31). Here we emphasise the continuous dependence of the constant \(C\) on \(\kappa_{\max}\) and \(\kappa_{\min}\) as they are needed afterwards.
### A modified energy function
Observing the equations of \(u_{2}^{\varepsilon}\) and \(u_{3}^{\varepsilon}\) in (9), it is natural to expect convergence to the critical manifold, i.e.
\[\mathcal{M}(u^{\varepsilon}):=(k_{1}u_{1}^{\varepsilon}+l_{2}u_{4}^{ \varepsilon})u_{2}^{\varepsilon}-(k_{2}+l_{1})u_{3}^{\varepsilon}\to 0\]
in a suitable (preferably strong) topology. This turns out to be a subtle issue, especially in the case where the diffusion coefficients \(d_{2}\) and \(d_{3}\) are far away from each other. In the following, we look at the convergence of a perturbed critical manifold
\[\widetilde{\mathcal{M}}_{\alpha}(u^{\varepsilon}):=(k_{1}u_{1}^{\varepsilon }+l_{2}u_{4}^{\varepsilon}+\alpha(\varepsilon))u_{2}^{\varepsilon}-(k_{2}+l_ {1})u_{3}^{\varepsilon}\to 0, \tag{32}\]
where \(\alpha=\alpha(\varepsilon)>0\) will be chosen later such that \(\lim_{\varepsilon\to 0}\alpha(\varepsilon)=0\). Obviously, the strong convergence of \(\mathcal{M}(u^{\varepsilon})\) follows from the convergence of \(\widetilde{\mathcal{M}}_{\alpha}(u^{\varepsilon})\) once \(u_{2}^{\varepsilon}\) is bounded in a certain norm. For the sake of simplicity, we denote by
\[A_{2}:=k_{1}u_{1}^{\varepsilon}+l_{2}u_{4}^{\varepsilon}\quad\text{and}\quad A _{3}:=k_{2}+l_{1}.\]
Note that \(A_{2}\) depends on \(\varepsilon>0\) while \(A_{3}\) is a constant independent of \(\varepsilon\). For \(t>0\), we define the following function
\[\mathcal{H}^{\varepsilon}(t):=\int_{\Omega}((A_{2}+\alpha)u_{2}^{\varepsilon })^{p-1}u_{2}^{\varepsilon}+\int_{\Omega}(A_{3}u_{3}^{\varepsilon})^{p-1}u_{3} ^{\varepsilon}=:\mathcal{H}_{2}^{\varepsilon}(t)+\mathcal{H}_{3}^{\varepsilon }(t),\quad p>1. \tag{33}\]
By differentiating this function with respect to the temporal variable, and employing the structure of solutions of the problem (9), we obtain a-priori estimates in the following lemma, which are crucial to derive the limit (32) and also gradient estimates.
**Lemma 2.8**.: _Let \(\varepsilon>0\) and \(u^{\varepsilon}\) be the solution of (9)._
1. _Let_ \(p\in[2,\infty)\) _and_ \(\alpha=0\)_. Then, there exists_ \(C=C(\|u_{0}\|_{L^{\infty}(\Omega)\times L^{p}(\Omega)^{2}\times L^{\infty}( \Omega)})>0\) _independent of_ \(\varepsilon>0\) _such that_ \[\begin{split}\frac{1}{\varepsilon}\iint_{Q_{T}}\bigl{|}A_{2}u_{2}^{ \varepsilon}-A_{3}u_{3}^{\varepsilon}\bigr{|}^{p}+\sum_{j=2,3}\iint_{Q_{T}}A_ {j}^{p-1}(u_{j}^{\varepsilon})^{p-2}|\nabla u_{j}^{\varepsilon}|^{2}\\ \leq C\biggl{(}1+\iint_{Q_{T}}(u_{2}^{\varepsilon})^{p}\Bigl{[}A_ {2}^{p-2}\partial_{t}A_{2}+A_{2}^{p-3}|\nabla A_{2}|^{2}\Bigr{]}\biggr{)}.\end{split}\] (34)
2. _Let_ \(p\in(1,2]\) _and_ \(\alpha=\varepsilon^{\frac{1}{4-p}}\)_. Then, there exists_ \(C=C(\|u_{0}\|_{L^{\infty}(\Omega)\times L^{p}(\Omega)^{2}\times L^{\infty}( \Omega)})>0\) _independent of_ \(\varepsilon>0\) _such that_ \[\begin{split}\frac{1}{\varepsilon^{\frac{1}{4-p}}}\iint_{Q_{T}} \frac{\bigl{|}(A_{2}+\varepsilon^{\frac{1}{4-p}})u_{2}^{\varepsilon}-A_{3}u_ {3}^{\varepsilon}\bigr{|}^{2}}{\bigl{(}(A_{2}+\varepsilon^{\frac{1}{4-p}})u_{ 2}^{\varepsilon}+A_{3}u_{3}^{\varepsilon}\bigr{)}^{2-p}}+\varepsilon^{\frac{3- p}{4-p}}\sum_{j=2,3}\iint_{Q_{T}}A_{j}^{p-1}\frac{|\nabla u_{j}^{\varepsilon}|^{2}}{(u_{j}^{ \varepsilon})^{2-p}}\\ \leq C\left(1+\iint_{Q_{T}}(u_{2}^{\varepsilon})^{p}\Bigl{[}(A_ {2}+1)^{p-1}+|\partial_{t}A_{2}|+|\nabla A_{2}|^{2}\Bigr{]}\right).\end{split}\] (35)
Proof.: Let us first consider the term \(\mathcal{H}_{2}^{\varepsilon}(t)\); the term \(\mathcal{H}_{3}^{\varepsilon}(t)\) can be treated similarly. By rewriting the equation of \(u_{2}^{\varepsilon}\) in (9) as \(\partial_{t}u_{2}^{\varepsilon}-d_{2}^{\varepsilon}\Delta u_{2}^{\varepsilon}=-\mathcal{M}(u^{\varepsilon})/\varepsilon\), we have, with the help of integration by parts,
\[\frac{d\mathcal{H}_{2}^{\varepsilon}}{dt} =-\frac{p}{\varepsilon}\int_{\Omega}\mathcal{M}(u^{\varepsilon}) ((A_{2}+\alpha)u_{2}^{\varepsilon})^{p-1}+(p-1)\int_{\Omega}(u_{2}^{ \varepsilon})^{p}(A_{2}+\alpha)^{p-2}\partial_{t}A_{2}\] \[\quad+d_{2}^{\varepsilon}p\int_{\Omega}(u_{2}^{\varepsilon})^{p-1 }(A_{2}+\alpha)^{p-1}\Delta u_{2}^{\varepsilon}\] \[=-\frac{p}{\varepsilon}\int_{\Omega}\mathcal{M}(u^{\varepsilon}) ((A_{2}+\alpha)u_{2}^{\varepsilon})^{p-1}+(p-1)\int_{\Omega}(u_{2}^{ \varepsilon})^{p}(A_{2}+\alpha)^{p-2}\partial_{t}A_{2}\] \[\quad-d_{2}^{\varepsilon}p(p-1)\int_{\Omega}(u_{2}^{\varepsilon} )^{p-1}(A_{2}+\alpha)^{p-2}\nabla A_{2}\nabla u_{2}^{\varepsilon}-d_{2}^{ \varepsilon}p(p-1)\int_{\Omega}(u_{2}^{\varepsilon})^{p-2}(A_{2}+\alpha)^{p- 1}|\nabla u_{2}^{\varepsilon}|^{2}.\]
For the term containing \(\nabla A_{2}\nabla u_{2}^{\varepsilon}\) the elementary inequality \(-xy\leq x^{2}/2+y^{2}/2\) yields
\[\frac{d\mathcal{H}_{2}^{\varepsilon}}{dt}\leq -\frac{p}{\varepsilon}\int_{\Omega}\mathcal{M}(u^{\varepsilon})(( A_{2}+\alpha)u_{2}^{\varepsilon})^{p-1}-\frac{d_{2}^{\varepsilon}p(p-1)}{2} \int_{\Omega}(u_{2}^{\varepsilon})^{p-2}(A_{2}+\alpha)^{p-1}|\nabla u_{2}^{ \varepsilon}|^{2}\] \[+(p-1)\int_{\Omega}(u_{2}^{\varepsilon})^{p}(A_{2}+\alpha)^{p-2} \partial_{t}A_{2}+\frac{d_{2}^{\varepsilon}p(p-1)}{2}\int_{\Omega}(u_{2}^{ \varepsilon})^{p}(A_{2}+\alpha)^{p-3}|\nabla A_{2}|^{2}.\]
The derivative \(d\mathcal{H}_{3}^{\varepsilon}/dt\) can be estimated similarly, where we note that \(\partial_{t}A_{3}\) and \(\nabla A_{3}\) are equal to zero. Thus, by adding \(d\mathcal{H}_{2}^{\varepsilon}/dt\) and \(d\mathcal{H}_{3}^{\varepsilon}/dt\), and integrating the resulting inequality over \((0,T)\), we obtain
\[\mathcal{H}^{\varepsilon}(T)+ \frac{1}{\varepsilon}\iint_{Q_{T}}\mathcal{M}(u^{\varepsilon}) \Big{(}((A_{2}+\alpha)u_{2}^{\varepsilon})^{p-1}-(A_{3}u_{3}^{\varepsilon})^{ p-1}\Big{)}+\sum_{j=2,3}\iint_{Q_{T}}(u_{j}^{\varepsilon})^{p-2}A_{j}^{p-1}| \nabla u_{j}^{\varepsilon}|^{2} \tag{36}\] \[\leq C\bigg{(}\mathcal{H}(0)+\iint_{Q_{T}}(u_{2}^{\varepsilon})^ {p}\Big{[}(A_{2}+\alpha)^{p-2}\partial_{t}A_{2}+(A_{2}+\alpha)^{p-3}|\nabla A _{2}|^{2}\Big{]}\bigg{)},\]
where the constant \(C\) depends only on \(p,d_{2},d_{3}\). Here one can easily find such a constant, independent of \(\varepsilon\), by recalling that \(d_{j}^{\varepsilon}\to d_{j}\) for \(j=2,3\). Moreover, it is clear that the term \(\mathcal{H}(0)\) is bounded under the regularity \(u_{10},u_{40}\in L^{\infty}(\Omega)\) and \(u_{20},u_{30}\in L^{p}(\Omega)\).
Let us show part (a). Since \(\alpha=0\) and \(p\geq 2\), we can apply the elementary inequality \(|x^{\lambda}-y^{\lambda}|\geq|x-y|^{\lambda}\) for all \(x,y\geq 0\), \(\lambda\geq 1\), which yields
\[\mathcal{M}(u^{\varepsilon})\big{(}(A_{2}u_{2}^{\varepsilon})^{p-1}-(A_{3}u_{3 }^{\varepsilon})^{p-1}\big{)}\geq\big{|}A_{2}u_{2}^{\varepsilon}-A_{3}u_{3}^{ \varepsilon}\big{|}^{p}. \tag{37}\]
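Indeed, writing \(x:=A_{2}u_{2}^{\varepsilon}\) and \(y:=A_{3}u_{3}^{\varepsilon}\), so that \(\mathcal{M}(u^{\varepsilon})=x-y\), the two factors on the left hand side of (37) have the same sign, and the elementary inequality with \(\lambda=p-1\geq 1\) gives
\[(x-y)\big(x^{p-1}-y^{p-1}\big)=|x-y|\,\big|x^{p-1}-y^{p-1}\big|\geq|x-y|\,|x-y|^{p-1}=|x-y|^{p}.\]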
The inequality (34) follows from combining (36), (37) with the non-negativity of \(\mathcal{H}^{\varepsilon}(T)\).
For part (b), we plug \(\mathcal{M}(u^{\varepsilon})=\widetilde{\mathcal{M}}(u^{\varepsilon})-\alpha u _{2}^{\varepsilon}\) into (36) to get
\[\iint_{Q_{T}}\widetilde{\mathcal{M}}(u^{\varepsilon})\Big{(}((A_{2 }+\alpha)u_{2}^{\varepsilon})^{p-1}-(A_{3}u_{3}^{\varepsilon})^{p-1}\Big{)}+ \varepsilon\sum_{j=2,3}\iint_{Q_{T}}(u_{j}^{\varepsilon})^{p-2}A_{j}^{p-1}| \nabla u_{j}^{\varepsilon}|^{2}\] \[\leq C\left(\varepsilon\mathcal{H}^{\varepsilon}(0)+\iint_{Q_{T}}(u _{2}^{\varepsilon})^{p}\Big{[}\alpha(A_{2}+\alpha)^{p-1}+\varepsilon(A_{2}+ \alpha)^{p-2}\partial_{t}A_{2}+\varepsilon(A_{2}+\alpha)^{p-3}|\nabla A_{2}|^{2 }\Big{]}\right)\] \[\leq C\left(\varepsilon\mathcal{H}^{\varepsilon}(0)+\iint_{Q_{T}}(u _{2}^{\varepsilon})^{p}\Big{[}\alpha(A_{2}+1)^{p-1}+\varepsilon\alpha^{p-2}| \partial_{t}A_{2}|+\varepsilon\alpha^{p-3}|\nabla A_{2}|^{2}\Big{]}\right)\] \[\leq C\left(1+\iint_{Q_{T}}(u_{2}^{\varepsilon})^{p}\Big{[}(A_{2 }+1)^{p-1}+|\partial_{t}A_{2}|+|\nabla A_{2}|^{2}\Big{]}\right)(\varepsilon+ \alpha+\varepsilon\alpha^{p-2}+\varepsilon\alpha^{p-3}),\]
where the constant \(C\) depends only on \(p,d_{2},d_{3}\). Now, by the mean value theorem, for all \(x,y>0\) and \(\lambda\in(1,2]\), there exists \(z_{xy}\) with \(\min(x,y)\leq z_{xy}\leq\max(x,y)\) such that \(|x^{\lambda-1}-y^{\lambda-1}|=(\lambda-1)z_{xy}^{\lambda-2}|x-y|\). Since \(\lambda-2\) is non-positive,
\[|x^{\lambda-1}-y^{\lambda-1}|\geq(\lambda-1)(x+y)^{\lambda-2}|x-y|.\]
By applying this inequality, we have
\[\widetilde{\mathcal{M}}(u^{\varepsilon})\Big{(}\big{(}(A_{2}+\alpha)u_{2}^{\varepsilon}\big{)}^{p-1}-\big{(}A_{3}u_{3}^{\varepsilon}\big{)}^{p-1}\Big{)}\geq(p-1)\left((A_{2}+\alpha)u_{2}^{\varepsilon}+A_{3}u_{3}^{\varepsilon}\right)^{p-2}\big{|}(A_{2}+\alpha)u_{2}^{\varepsilon}-A_{3}u_{3}^{\varepsilon}\big{|}^{2}.\]
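Here the mean value inequality above is applied with \(\lambda=p\in(1,2]\), \(x:=(A_{2}+\alpha)u_{2}^{\varepsilon}\) and \(y:=A_{3}u_{3}^{\varepsilon}\): since \(\widetilde{\mathcal{M}}(u^{\varepsilon})=x-y\) has the same sign as \(x^{p-1}-y^{p-1}\),
\[(x-y)\big(x^{p-1}-y^{p-1}\big)=|x-y|\,\big|x^{p-1}-y^{p-1}\big|\geq(p-1)(x+y)^{p-2}|x-y|^{2}.\]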
By choosing \(\alpha=\varepsilon^{\delta}>0\) for \(\delta>0\) and noting that
\[\max_{0<\delta<1/(3-p)}\min\big{\{}1;\,\delta;\,1-(2-p)\delta;\,1-(3-p)\delta \big{\}}=1/(4-p),\]
we arrive at the optimal choice \(\alpha=\varepsilon^{\frac{1}{4-p}}\). This leads to the desired estimate (35).
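For the reader's convenience, the value of this maximum can be checked directly: at \(\delta=\frac{1}{4-p}\) one has
\[\delta=\frac{1}{4-p},\qquad 1-(3-p)\delta=\frac{1}{4-p},\qquad 1-(2-p)\delta=\frac{2}{4-p}\geq\frac{1}{4-p},\qquad 1\geq\frac{1}{4-p},\]
so the minimum equals \(\frac{1}{4-p}\), while for \(\delta<\frac{1}{4-p}\) the minimum is at most \(\delta<\frac{1}{4-p}\), and for \(\delta>\frac{1}{4-p}\) it is at most \(1-(3-p)\delta<\frac{1}{4-p}\).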
**Remark 2.9**.: _Lemma 2.8 suggests that we need to control the terms on the right hand side of (35) and (34) uniformly in \(\varepsilon>0\). This depends on the uniform estimates of solutions to (9) that we derive below. As it turns out, part (a) will be utilised when very good controls of the solutions are available, while part (b) is more suitable when only weaker controls can be obtained. It is also remarked that the latter does not give any uniform estimates for the gradients of \(u_{2}^{\varepsilon}\) and \(u_{3}^{\varepsilon}\)._
### Uniform-in-\(\varepsilon\) bounds
**Lemma 2.10**.: _Assume (10). Then there exists \(\varepsilon_{*}>0\) such that_
\[\sup_{0<\varepsilon<\varepsilon_{*}}\left(\sum_{j=1}^{4}\|u_{j}^{\varepsilon} \|_{L^{2+}(Q_{T})}\right)\leq C\left(T,\|u_{0}\|_{L^{2+}(\Omega)^{4}}\right). \tag{38}\]
_Moreover,_
1. _If_ \(d_{3}-d_{2}=0\)_, then for any_ \(1<p<\infty\)_, there exists_ \(\varepsilon_{p}>0\) _such that_ \[\sup_{0<\varepsilon<\varepsilon_{p}}\big{(}\|u_{2}^{\varepsilon}\|_{L^{p}(Q_ {T})}+\|u_{3}^{\varepsilon}\|_{L^{p}(Q_{T})}\big{)}\leq C\left(T,\|u_{20}\|_{ L^{p}(\Omega)},\|u_{30}\|_{L^{p}(\Omega)}\right).\]
2. _If_ \(d_{3}-d_{2}\neq 0\)_, then for any_ \(2<p<\infty\)_, if_ \(d_{2}\) _and_ \(d_{3}\) _satisfy_ \[C_{p^{\prime}}^{\mathsf{MR}}\frac{|d_{2}-d_{3}|}{d_{2}+d_{3}}<1,\] (39) _where_ \(p^{\prime}=p/(p-1)\) _is the conjugate Holder exponent of_ \(p\)_, then there exists_ \(\varepsilon_{p}>0\) _such that_ \[\sup_{0<\varepsilon<\varepsilon_{p}}\big{(}\|u_{2}^{\varepsilon}\|_{L^{p}(Q_ {T})}+\|u_{3}^{\varepsilon}\|_{L^{p}(Q_{T})}\big{)}\leq C\left(T,\|u_{20}\|_{ L^{p}(\Omega)},\|u_{30}\|_{L^{p}(\Omega)}\right).\] (40)
Proof.: Adding the equations in (9) leads to, thanks to the non-negativity of \(u_{j}^{\varepsilon}\), \(j=1,\ldots,4\),
\[\left\{\begin{aligned} \partial_{t}\sum_{j=1}^{4}u_{j}^{ \varepsilon}-\Delta\left(\sum_{j=1}^{4}d_{j}^{\varepsilon}u_{j}^{\varepsilon} \right)&\leq(l_{1}+k_{2})\sum_{j=1}^{4}u_{j}^{\varepsilon}& \text{in }Q_{T},\\ \partial_{\nu}u_{j}^{\varepsilon}&=0,& \text{on }\Gamma_{T},\\ u_{j}^{\varepsilon}(0)&=u_{j,0}&\text{in } \Omega.\end{aligned}\right.\]
According to [1, Lemma 3.19], there exists a constant \(1<p_{*}<2\) such that
\[C_{p_{*}}^{\mathsf{MR}}\frac{|\max_{j}d_{j}-\min_{j}d_{j}|}{\max_{j}d_{j}+\min_ {j}d_{j}}<1.\]
Since \(d_{j}^{\varepsilon}\to d_{j}\) as \(\varepsilon\to 0^{+}\) for \(j=1,\ldots,4\), there exists \(\varepsilon_{*}>0\) such that
\[C_{p_{*}}^{\mathsf{MR}}\frac{|\max_{j}d_{j}^{\varepsilon}-\min_{j}d_{j}^{ \varepsilon}|}{\max_{j}d_{j}^{\varepsilon}+\min_{j}d_{j}^{\varepsilon}}<1,\]
for all \(0<\varepsilon<\varepsilon_{*}\). Lemma 2.7 therefore yields that there exists \(C_{\max_{j}d_{j}^{\varepsilon},\,\min_{j}d_{j}^{\varepsilon}}(\|u_{0}\|_{L^{2 +}(\Omega)^{4}})\) depending on \(T,l_{1},k_{2},\Omega\) and continuously depending on \(\max_{j}d_{j}^{\varepsilon}\), \(\min_{j}d_{j}^{\varepsilon}\) such that
\[\sup_{0<\varepsilon<\varepsilon_{*}}\bigg{(}\sum_{j=1}^{4}\|u_{j}^{ \varepsilon}\|_{L^{2+}(Q_{T})}\bigg{)}\leq\bigg{(}\sup_{0<\varepsilon< \varepsilon_{*}}C_{\max_{j}d_{j}^{\varepsilon},\,\min_{j}d_{j}^{\varepsilon}} \bigg{)}\leq C(\max_{j}d_{j},\,\min_{j}d_{j}),\]
which shows (38).
In the case where \(d_{3}-d_{2}=0\), it follows from (10) that the fraction \(|d_{3}^{\varepsilon}-d_{2}^{\varepsilon}|/(d_{3}^{\varepsilon}+d_{2}^{ \varepsilon})\to 0\). Therefore, for any \(p\in(1,\infty)\), there exists \(\varepsilon_{p}>0\) such that
\[C_{p^{\prime}}^{\mathsf{MR}}\frac{|d_{3}^{\varepsilon}-d_{2}^{\varepsilon}|}{ d_{3}^{\varepsilon}+d_{2}^{\varepsilon}}<1, \tag{41}\]
for all \(0<\varepsilon<\varepsilon_{p}\). Consequently, Lemma 2.7 yields bounds on \(u_{2}^{\varepsilon}\) and \(u_{3}^{\varepsilon}\) in \(L^{p}(Q_{T})\) which are uniform with respect to \(\varepsilon\in(0,\varepsilon_{p})\).
In the case \(d_{3}-d_{2}\neq 0\), if (39) is fulfilled, then due to (10), we can find \(\varepsilon_{p}>0\) such that (41) holds. The estimate (40) is then a direct consequence of Lemma 2.7.
Without further assumptions, the improved duality method only gives us \(L^{2+}(Q_{T})\)-estimates for \(u_{1}^{\varepsilon}\) and \(u_{4}^{\varepsilon}\). To obtain better estimates for these two functions, we utilise their equations and the heat regularisation.
**Lemma 2.11**.: _Let \(u^{\varepsilon}\) be the solution to (9). We have the following uniform-in-\(\varepsilon\) bounds_
\[\begin{split}\sup_{0<\varepsilon<\varepsilon_{*}}\sum_{j\in\{1;4 \}}&\left(\|u_{j}^{\varepsilon}\|_{L^{\frac{2(N+2)}{N-2}+(Q_{T})}} +\|\nabla u_{j}^{\varepsilon}\|_{L^{2}(Q_{T})}+\|\partial_{t}u_{j}^{ \varepsilon}\|_{L^{1+}(Q_{T})}\right)\\ &\leq C\left(T,\|u_{0}\|_{W^{2,2+}(\Omega)\times L^{2+}(\Omega)^{ 2}\times W^{2,2+}(\Omega)}\right),\end{split} \tag{42}\]
_where we use the convention \(\frac{2(N+2)}{N-2}+=+\infty\) if \(N\leq 2\)._
Proof.: Since the nonlinearities \(f_{1}(u^{\varepsilon})=-k_{1}u^{\varepsilon}_{1}u^{\varepsilon}_{2}+l_{1}u^{ \varepsilon}_{3}\) and \(f_{4}(u^{\varepsilon})=-l_{2}u^{\varepsilon}_{2}u^{\varepsilon}_{4}+k_{2}u^{ \varepsilon}_{3}\) are quadratic, by Lemma 2.10 we have
\[\sup_{0<\varepsilon<\varepsilon_{*}}\bigg{(}\sum_{j\in\{1;4\}}\|f_{j}(u^{ \varepsilon})\|_{L^{1+}(Q_{T})}\bigg{)}\leq C(T,\|u_{0}\|_{L^{2+}(\Omega)^{4} }). \tag{43}\]
Hence, from the equations
\[\partial_{t}u^{\varepsilon}_{j}-d^{\varepsilon}_{j}\Delta u^{\varepsilon}_{j} =f_{j}(u^{\varepsilon}),\quad j=1,4,\]
we can now use the maximal regularity in Lemma 2.5 to get the bound on the derivative
\[\sup_{0<\varepsilon<\varepsilon_{*}}\sum_{j\in\{1;4\}}\|\partial_{t}u^{ \varepsilon}_{j}\|_{L^{1+}(Q_{T})}\leq C\left(T,\|u_{0}\|_{W^{2,2+}(\Omega) \times L^{2+}(\Omega)^{2}\times W^{2,2+}(\Omega)}\right), \tag{44}\]
where we have used (43) and \(d^{\varepsilon}_{j}\to d_{j}\) as \(\varepsilon\to 0^{+}\). Note that a direct application of Lemma 2.5 with the right hand side \(f_{j}(u^{\varepsilon})\in L^{1+}(Q_{T})\) does not give the uniform estimates for \(u^{\varepsilon}_{j}\) and \(\nabla u^{\varepsilon}_{j}\) as in (42), especially in high dimensions. We will utilise the non-negativity of \(u^{\varepsilon}\). Indeed, from
\[\partial_{t}u^{\varepsilon}_{1}-d^{\varepsilon}_{1}\Delta u^{\varepsilon}_{1}=-k_{1}u^{\varepsilon}_{1}u^{\varepsilon}_{2}+l_{1}u^{\varepsilon}_{3}\leq l_{1}u^{\varepsilon}_{3},\]
we see that \(0\leq u^{\varepsilon}_{1}\leq w^{\varepsilon}\) where \(w^{\varepsilon}\) is the solution to
\[\partial_{t}w^{\varepsilon}-d^{\varepsilon}_{1}\Delta w^{\varepsilon}=l_{1}u^{ \varepsilon}_{3},\quad\partial_{\nu}w^{\varepsilon}=0,\quad w^{\varepsilon}(x,0)=u_{10}(x)\]
in which \(u^{\varepsilon}_{3}\) is uniformly bounded in \(L^{2+}(Q_{T})\) w.r.t. \(\varepsilon\). Now we can apply Lemma 2.5 to the equation of \(w^{\varepsilon}\), and the comparison principle to obtain
\[\|u^{\varepsilon}_{1}\|_{L^{\frac{2(N+2)}{N-2}+}(Q_{T})}\leq\|w^{\varepsilon} \|_{L^{\frac{2(N+2)}{N-2}+}(Q_{T})}\leq C\left(T,\|u_{10}\|_{W^{2,2+}(\Omega)}\right).\]
The estimate for \(u^{\varepsilon}_{4}\) can be shown in the same way. Concerning the gradient estimate, we multiply the equation of \(u^{\varepsilon}_{1}\) by \(u^{\varepsilon}_{1}\) then integrate on \(Q_{T}\) to get
\[\frac{1}{2}\|u^{\varepsilon}_{1}(T)\|^{2}_{L^{2}(\Omega)}+d^{\varepsilon}_{1} \iint_{Q_{T}}|\nabla u^{\varepsilon}_{1}|^{2}dxdt\leq\frac{1}{2}\|u_{10}\|^{2} _{L^{2}(\Omega)}+l_{1}\iint_{Q_{T}}u^{\varepsilon}_{1}u^{\varepsilon}_{3}dxdt.\]
Thanks to the \(L^{2+}(Q_{T})\)-bounds of \(u^{\varepsilon}_{1}\), \(u^{\varepsilon}_{3}\), and (10) we get the gradient estimates of \(u^{\varepsilon}_{1}\). The estimate of \(\nabla u^{\varepsilon}_{4}\) follows in the same way, so we omit it.
**Lemma 2.12**.: _Let \(\varepsilon_{*}\) be given by Lemma 2.10. Then, for all \(\sigma\in[0,1)\),_
\[\sup_{0<\varepsilon<\varepsilon_{*}}\bigg{(}\sum_{j=1,4}d^{\varepsilon}_{j} \iint_{Q_{T}}\frac{|\nabla u^{\varepsilon}_{j}|^{2}}{(u^{\varepsilon}_{j})^{1+ \sigma}}+\sum_{j=1,4}\iint_{Q_{T}}\frac{u^{\varepsilon}_{3}}{(u^{\varepsilon}_ {j})^{\sigma}}\bigg{)}\leq C\left(T,\|u_{0}\|_{L^{2+}(\Omega)^{4}}\right). \tag{45}\]
Proof.: For \(\sigma\in[0,1)\) and \(\delta>0\), we define
\[\mathcal{E}^{\delta,\varepsilon}(t):=\int_{\Omega}(u^{\varepsilon}_{1}(t)+ \delta)^{1-\sigma}+\int_{\Omega}(u^{\varepsilon}_{4}(t)+\delta)^{1-\sigma}=: \mathcal{E}^{\delta,\varepsilon}_{1}(t)+\mathcal{E}^{\delta,\varepsilon}_{4} (t).\]
We denote by \(u_{j}^{\delta,\varepsilon}:=u_{j}^{\varepsilon}+\delta\) for \(\delta>0\). By the first equation of (9),
\[\frac{d\mathcal{E}_{1}^{\delta,\varepsilon}}{dt}=\frac{4\sigma}{1-\sigma}d_{1}^{ \varepsilon}\int_{\Omega}\left|\nabla\sqrt{(u_{1}^{\delta,\varepsilon})^{1- \sigma}}\,\right|^{2}+(1-\sigma)\int_{\Omega}(u_{1}^{\delta,\varepsilon})^{- \sigma}(-k_{1}u_{1}^{\varepsilon}u_{2}^{\varepsilon}+l_{1}u_{3}^{\varepsilon }).\]
Integrating the latter equality over \((0,T)\) and noting the nonnegativity of \(\mathcal{E}_{1}\) give
\[\begin{split}\mathcal{E}_{1}^{\delta,\varepsilon}(0)& +\frac{4\sigma}{1-\sigma}d_{1}^{\varepsilon}\iint_{Q_{T}}\left| \nabla\sqrt{(u_{1}^{\delta,\varepsilon})^{1-\sigma}}\,\right|^{2}+l_{1}(1- \sigma)\iint_{Q_{T}}\frac{u_{3}^{\varepsilon}}{(u_{1}^{\delta,\varepsilon})^{ \sigma}}\\ &=\mathcal{E}_{1}^{\delta,\varepsilon}(T)+(1-\sigma)k_{1}\iint_{Q _{T}}(u_{1}^{\delta,\varepsilon})^{-\sigma}u_{1}^{\varepsilon}u_{2}^{ \varepsilon}.\end{split} \tag{46}\]
Summing the equations of \(u_{2}^{\varepsilon}\) and \(u_{3}^{\varepsilon}\) and integrating over \(\Omega\times(0,T)\) leads to
\[\sup_{T\geq 0}\int_{\Omega}(u_{2}^{\varepsilon}(T)+u_{3}^{\varepsilon}(T)) dx\leq\int_{\Omega}(u_{20}+u_{30})dx.\]
Integrating the equation of \(u_{1}^{\varepsilon}\) on \(\Omega\times(0,T)\) gives
\[\begin{split}\int_{\Omega}u_{1}^{\varepsilon}(T)dx& \leq\int_{\Omega}u_{10}dx+l_{1}\int_{0}^{T}\int_{\Omega}u_{3}^{ \varepsilon}(t)dxdt\\ &\leq\int_{\Omega}u_{10}dx+l_{1}T\int_{\Omega}(u_{20}+u_{30})dx.\end{split}\]
Therefore, by Hölder's inequality
\[\mathcal{E}_{1}^{\delta,\varepsilon}(T)\leq\left(\int_{\Omega}(u_{1}^{ \varepsilon}(T)+1)dx\right)^{1-\sigma}|\Omega|^{\sigma}\leq C(T).\]
We now use \((u_{1}^{\delta,\varepsilon})^{-\sigma}u_{1}^{\varepsilon}\leq(u_{1}^{\delta, \varepsilon})^{1-\sigma}\leq u_{1}^{\delta,\varepsilon}+1\), let \(\delta\to 0\), then use Fatou's lemma to get
\[\begin{split}&\frac{4\sigma}{1-\sigma}d_{1}^{\varepsilon}\iint_{Q_{T }}\left|\nabla\sqrt{(u_{1}^{\varepsilon})^{1-\sigma}}\,\right|^{2}+l_{1}(1- \sigma)\iint_{Q_{T}}\frac{u_{3}^{\varepsilon}}{(u_{1}^{\varepsilon})^{\sigma}} \\ &\leq C(T)+k_{1}\limsup_{\delta\to 0}\iint_{Q_{T}}(u_{1}^{ \delta,\varepsilon})^{1-\sigma}u_{2}^{\varepsilon}\\ &\leq C(T)+k_{1}\|u_{1}^{\delta,\varepsilon}+1\|_{L^{2}(Q_{T})} \|u_{2}^{\varepsilon}\|_{L^{2}(Q_{T})}\leq C\left(T,\|u_{0}\|_{L^{2+}(\Omega)^ {4}}\right).\end{split}\]
The term \(\mathcal{E}_{4}^{\delta,\varepsilon}\) can be treated similarly to \(\mathcal{E}_{1}^{\delta,\varepsilon}\). The inequality (45) then follows.
## 3 Proofs
### Proof of Theorem 1.1
**Lemma 3.1**.: _Assume (10) and \(d_{2}=d_{3}\). For any \(q>N+2\), there exists \(\varepsilon_{q}\in(0,1)\) such that_
\[\sup_{0<\varepsilon<\varepsilon_{q}}\left(\|\partial_{t}u_{1}^{\varepsilon}\|_{L^{q}(Q_{T})}+\|\Delta u_{1}^{\varepsilon}\|_{L^{q}(Q_{T})}+\|u_{1}^{\varepsilon}\|_{L^{\infty}(Q_{T})}+\|\nabla u_{1}^{\varepsilon}\|_{L^{\infty}(Q_{T})}\right)\leq C(T), \tag{47}\] \[\sup_{0<\varepsilon<\varepsilon_{q}}\left(\|\partial_{t}u_{4}^{\varepsilon}\|_{L^{q}(Q_{T})}+\|\Delta u_{4}^{\varepsilon}\|_{L^{q}(Q_{T})}+\|u_{4}^{\varepsilon}\|_{L^{\infty}(Q_{T})}+\|\nabla u_{4}^{\varepsilon}\|_{L^{\infty}(Q_{T})}\right)\leq C(T), \tag{48}\]
_where the constants depend on \(\|u_{0}\|_{W^{2,q}(\Omega)\times L^{q}(\Omega)^{2}\times W^{2,q}(\Omega)}\)._
Proof.: By Lemma 2.10 there exists \(\varepsilon_{q}\in(0,1)\) such that
\[\sup_{0<\varepsilon<\varepsilon_{q}}\Big{(}\|u_{2}^{\varepsilon}\|_{L^{q}(Q_{T })}+\|u_{3}^{\varepsilon}\|_{L^{q}(Q_{T})}\Big{)}\leq C\left(T,\|u_{20}\|_{L^{q }(\Omega)},\|u_{30}\|_{L^{q}(\Omega)}\right). \tag{49}\]
From the equation of \(u_{1}^{\varepsilon}\) in (9), \(\partial_{t}u_{1}^{\varepsilon}-d_{1}\Delta u_{1}^{\varepsilon}\leq l_{1}u_{3} ^{\varepsilon}\), where \(l_{1}u_{3}^{\varepsilon}\) is uniformly bounded in \(L^{q}(Q_{T})\) w.r.t. \(\varepsilon\). Since \(q>(N+2)/2\), we can make use of the comparison principle and heat regularisation (see Lemma 2.5) to conclude that
\[\|u_{1}^{\varepsilon}\|_{L^{\infty}(Q_{T})}\leq C\left(d_{1}^{ \varepsilon},T,\|u_{10}\|_{W^{2,q}(\Omega)}\right)\leq C\left(T,\|u_{10}\|_{ W^{2,q}(\Omega)}\right)\]
since \(C(d_{1}^{\varepsilon},T,\|u_{10}\|_{W^{2,q}(\Omega)})\) depends continuously on \(d_{1}^{\varepsilon}\) and due to (10). This, together with (49), implies that
\[\sup_{0<\varepsilon<\varepsilon_{q}}\left\|-k_{1}u_{1}^{\varepsilon}u_{2}^{ \varepsilon}+l_{1}u_{3}^{\varepsilon}\right\|_{L^{q}(Q_{T})}\leq C(T).\]
Taking into account \(q>N+2\), another application of Lemma 2.5 gives the desired estimates
\[\sup_{0<\varepsilon<\varepsilon_{q}}\left(\|\partial_{t}u_{1}^{ \varepsilon}\|_{L^{q}(Q_{T})}+\|\Delta u_{1}^{\varepsilon}\|_{L^{q}(Q_{T})}+ \|\nabla u_{1}^{\varepsilon}\|_{L^{\infty}(Q_{T})}\right)\leq C(T).\]
Estimates for \(u_{4}^{\varepsilon}\) are obtained in the same way.
The bounds in Lemma 3.1 will be used in combination with the modified energy in Lemma 2.8 (a) to obtain the convergence of the critical manifold as well as gradient estimates of \(u_{2}^{\varepsilon}\) and \(u_{3}^{\varepsilon}\).
**Lemma 3.2**.: _Assume (10) and \(d_{2}=d_{3}\). Then there exists \(\varepsilon_{0}>0\) such that_
\[\begin{split}\sup_{0<\varepsilon<\varepsilon_{0}}\left(\|\nabla u _{3}^{\varepsilon}\|_{L^{2}(Q_{T})}^{2}+\frac{1}{\varepsilon}\iint_{Q_{T}}|(k _{1}u_{1}^{\varepsilon}+l_{2}u_{4}^{\varepsilon})u_{2}^{\varepsilon}-(k_{2}+l _{1})u_{3}^{\varepsilon}|^{2}\right)\\ \leq C(T,\|u_{0}\|_{W^{2,q}(\Omega)\times L^{q}(\Omega)^{2}\times W ^{2,q}(\Omega)}),\end{split} \tag{50}\]
_where \(q>\max\{N+2;4\}\)._
Proof.: From Lemma 2.8 (a), by choosing \(p=2\), we have in particular
\[\frac{1}{\varepsilon}\iint_{Q_{T}}\left|A_{2}u_{2}^{\varepsilon}-A_{3}u_{3}^ {\varepsilon}\right|^{2}+A_{3}\iint_{Q_{T}}|\nabla u_{3}^{\varepsilon}|^{2} \leq C\left(1+\iint_{Q_{T}}(u_{2}^{\varepsilon})^{2}\left[\partial_{t}A_{2}+ \frac{|\nabla A_{2}|^{2}}{A_{2}}\right]\right), \tag{51}\]
where \(C=C(\|u_{0}\|_{L^{\infty}(\Omega)\times L^{2}(\Omega)^{2}\times L^{\infty}( \Omega)})\). From Lemmas 2.10 (a) and 3.1, there exists \(0<\varepsilon_{0}<\varepsilon_{q}\) such that
\[\begin{split}\left|\iint_{Q_{T}}(u_{2}^{\varepsilon})^{2} \partial_{t}A_{2}\right|\leq&\,C\|u_{2}^{\varepsilon}\|_{L^{4}(Q_{T })}^{2}\left(\|\partial_{t}u_{1}^{\varepsilon}\|_{L^{2}(Q_{T})}+\|\partial_{ t}u_{4}^{\varepsilon}\|_{L^{2}(Q_{T})}\right)\\ \leq&\,C(T,\|u_{0}\|_{W^{2,q}(\Omega)\times L^{q}( \Omega)^{2}\times W^{2,q}(\Omega)}).\end{split} \tag{52}\]
Now, using Lemmas 2.10, 2.12 and 3.1, we estimate the remaining term for some \(\sigma\in(0,1)\) sufficiently close to \(1\),
\[\begin{split}\left|\iint_{Q_{T}}(u_{2}^{\varepsilon})^{2}\frac{|\nabla A _{2}|^{2}}{A_{2}}\right|&=\iint_{Q_{T}}(u_{2}^{\varepsilon})^{2} \frac{|\nabla A_{2}|^{2/(1+\sigma)}}{A_{2}}|\nabla A_{2}|^{2\sigma/(1+\sigma)} \\ &\leq\|\nabla A_{2}\|_{L^{\infty}(Q_{T})}^{\frac{2\sigma}{1+ \sigma}}\left(\iint_{Q_{T}}\frac{|\nabla A_{2}|^{2}}{A_{2}^{1+\sigma}}\right)^ {\frac{1}{1+\sigma}}\left(\iint_{Q_{T}}(u_{2}^{\varepsilon})^{\frac{2(1+ \sigma)}{\sigma}}\right)^{\frac{\sigma}{1+\sigma}}\\ &\leq C(T,\|u_{0}\|_{W^{2,q}(\Omega)\times L^{q}(\Omega)^{2}\times W ^{2,q}(\Omega)}),\end{split} \tag{53}\]
where we used
\[\iint_{Q_{T}}\frac{|\nabla A_{2}|^{2}}{A_{2}^{1+\sigma}}\leq C\left(\iint_{Q_ {T}}\frac{|\nabla u_{1}^{\varepsilon}|^{2}}{(u_{1}^{\varepsilon})^{1+\sigma} }+\iint_{Q_{T}}\frac{|\nabla u_{4}^{\varepsilon}|^{2}}{(u_{4}^{\varepsilon})^ {1+\sigma}}\right)\leq C(T)\]
thanks to the non-negativity of \(u_{1}^{\varepsilon}\) and \(u_{4}^{\varepsilon}\). From (52) and (53) we obtain the desired estimates of Lemma 3.2.
Since \(A_{2}=k_{1}u_{1}^{\varepsilon}+l_{2}u_{4}^{\varepsilon}\) and we do not have lower bounds for \(u_{1}^{\varepsilon}\) and \(u_{4}^{\varepsilon}\), the energy estimates in Lemma 2.8 (a) do not give a gradient estimate for \(u_{2}^{\varepsilon}\). To overcome this, we use the relation between \(u_{2}^{\varepsilon}\) and \(u_{3}^{\varepsilon}\)
\[\partial_{t}(u_{2}^{\varepsilon}+u_{3}^{\varepsilon})-\Delta(d_{2}^{ \varepsilon}u_{2}^{\varepsilon}+d_{3}^{\varepsilon}u_{3}^{\varepsilon})=0 \tag{54}\]
to transfer the gradient estimates from \(u_{3}^{\varepsilon}\) to \(u_{2}^{\varepsilon}\).
**Lemma 3.3**.: _Assume (10) and \(d_{2}=d_{3}\). Then it holds_
\[\sup_{0<\varepsilon<\varepsilon_{0}}\|\nabla u_{2}^{\varepsilon}\|_{L^{2}(Q_ {T})}\leq C(T,\|u_{0}\|_{W^{2,q}(\Omega)\times L^{q}(\Omega)^{2}\times W^{2,q }(\Omega)})\]
_where \(q>\max\{N+2;4\}\)._
Proof.: By rewriting (54) as
\[\partial_{t}(u_{2}^{\varepsilon}+u_{3}^{\varepsilon})-d_{2}^{\varepsilon} \Delta(u_{2}^{\varepsilon}+u_{3}^{\varepsilon})=(d_{3}^{\varepsilon}-d_{2}^{ \varepsilon})\Delta u_{3}^{\varepsilon} \tag{55}\]
then multiplying by \(u_{2}^{\varepsilon}+u_{3}^{\varepsilon}\) gives
\[\begin{split}\frac{1}{2}\frac{d}{dt}\|u_{2}^{\varepsilon}& +u_{3}^{\varepsilon}\|_{L^{2}(\Omega)}^{2}+d_{2}^{\varepsilon}\int_{ \Omega}|\nabla(u_{2}^{\varepsilon}+u_{3}^{\varepsilon})|^{2}\\ &=(d_{3}^{\varepsilon}-d_{2}^{\varepsilon})\int_{\Omega}\nabla u _{3}^{\varepsilon}\cdot\nabla(u_{2}^{\varepsilon}+u_{3}^{\varepsilon})\\ &\leq\frac{d_{2}^{\varepsilon}}{2}\int_{\Omega}|\nabla(u_{2}^{ \varepsilon}+u_{3}^{\varepsilon})|^{2}+\frac{(d_{3}^{\varepsilon}-d_{2}^{ \varepsilon})^{2}}{2d_{2}^{\varepsilon}}\int_{\Omega}|\nabla u_{3}^{ \varepsilon}|^{2}.\end{split}\]
Integrating this on \((0,T)\) gives
\[d_{2}^{\varepsilon}\|\nabla(u_{2}^{\varepsilon}+u_{3}^{\varepsilon})\|_{L^{2} (Q_{T})}^{2}\leq\|u_{20}+u_{30}\|_{L^{2}(\Omega)}^{2}+\frac{(d_{3}^{\varepsilon }-d_{2}^{\varepsilon})^{2}}{d_{2}^{\varepsilon}}\|\nabla u_{3}^{\varepsilon} \|_{L^{2}(Q_{T})}^{2}.\]
Thanks to (10) and Lemma 3.2, it follows that
\[\|\nabla(u_{2}^{\varepsilon}+u_{3}^{\varepsilon})\|_{L^{2}(Q_{T})}^{2}\leq C(T, \|u_{0}\|_{W^{2,q}(\Omega)\times L^{q}(\Omega)^{2}\times W^{2,q}(\Omega)}).\]
The gradient estimate of \(u_{2}^{\varepsilon}\) then follows from
\[\|\nabla u_{2}^{\varepsilon}\|_{L^{2}(Q_{T})}^{2}\leq 2\|\nabla(u_{2}^{\varepsilon} +u_{3}^{\varepsilon})\|_{L^{2}(Q_{T})}^{2}+2\|\nabla u_{3}^{\varepsilon}\|_{L^ {2}(Q_{T})}^{2}.\]
**Remark 3.4**.: _Well-known duality methods show that any \(L^{p}(Q_{T})\)-estimate for \(u_{3}^{\varepsilon}\) can be transferred to \(u_{2}^{\varepsilon}\) provided (54) holds and vice versa, see e.g. [10]. Lemma 3.3 shows this transfer of regularity also holds for \(L^{2}(Q_{T})\)-estimate of gradients. In fact, it is true for \(L^{q}(Q_{T})\)-estimate of gradients for any \(1<q<\infty\). Indeed, by (55)_
\[\partial_{t}(u_{2}^{\varepsilon}+u_{3}^{\varepsilon})-d_{2}^{\varepsilon} \Delta(u_{2}^{\varepsilon}+u_{3}^{\varepsilon})=(d_{3}^{\varepsilon}-d_{2}^{ \varepsilon})\mathrm{div}(\nabla u_{3}^{\varepsilon}),\]
_we can apply results similarly to [11] (for Neumann instead of Dirichlet boundary conditions) to obtain_
\[\|u_{2}^{\varepsilon}+u_{3}^{\varepsilon}\|_{L^{q}(0,T;W^{1,q}(\Omega))}\leq C \left(1+\|\nabla u_{3}^{\varepsilon}\|_{L^{q}(Q_{T})}\right)\]
_which gives bounds on \(\|\nabla u_{2}^{\varepsilon}\|_{L^{q}(Q_{T})}\) depending on \(\|\nabla u_{3}^{\varepsilon}\|_{L^{q}(Q_{T})}\)._
We need another result concerning the well-posedness of the limit system (12).
**Proposition 3.5**.: _Assume \(d_{2}=d_{3}\). Then for any non-negative, bounded initial data \((u_{10},v_{0},u_{40})\), there exists a unique bounded weak solution \((u_{1},v,u_{4})\) to (12)._
Proof.: Since \(d_{2}=d_{3}\), the equation of \(v\) reduces to the heat equation \(\partial_{t}v-d_{2}\Delta v=0\), which gives the global well-posedness of a non-negative solution \(v\) as well as the uniform bound
\[\sup_{t\geq 1}\|v(t)\|_{L^{\infty}(\Omega)}\leq\|v_{0}\|_{L^{p}(\Omega)}.\]
With this bound, it is straightforward that the nonlinearities in the equations of \(u_{1}\) and \(u_{4}\) are bounded by linear functions, which implies at once the global existence of bounded weak solutions. Moreover, the nonlinearities are quasi-positive, and therefore the non-negativity of solutions follows, provided the non-negativity of the initial data. Finally, the uniqueness of solutions is guaranteed by their boundedness and the local Lipschitz continuity of the nonlinearities.
We are now ready to prove Theorem 1.1.
Proof of Theorem 1.1.: By Lemma 3.1, \(u_{1}^{\varepsilon},u_{4}^{\varepsilon}\) are uniformly bounded in \(L^{\infty}(0,T;W^{1,\infty}(\Omega))\), and \(\partial_{t}u_{1}^{\varepsilon},\partial_{t}u_{4}^{\varepsilon}\) in \(L^{q}(Q_{T})\) for any \(1<q<\infty\) if \(u_{0}\in W^{2,q_{0}+}(\Omega)\times L^{q_{0}+}(\Omega)^{2}\times W^{2,q_{0}+}(\Omega)\). The Aubin-Lions lemma yields that \(\{u_{1}^{\varepsilon}\}_{\varepsilon>0},\{u_{4}^{\varepsilon}\}_{\varepsilon>0}\) are relatively compact in \(L^{\infty}(Q_{T})\). Hence, up to subsequences,
\[u_{1}^{\varepsilon}\to u_{1},\quad u_{4}^{\varepsilon}\to u_{4}\quad\text{ in }\quad L^{\infty}(Q_{T})\quad\text{ as }\quad \varepsilon\to 0^{+}.\]
Now, by Lemmas 3.2 and 3.3, \(\{\nabla v^{\varepsilon}\}_{\varepsilon}=\{\nabla u_{2}^{\varepsilon}+\nabla u_{3}^{\varepsilon}\}_{\varepsilon}\) is bounded in \(L^{2}(Q_{T})\). It follows from (55) that \(\partial_{t}v^{\varepsilon}=d_{2}^{\varepsilon}\Delta v^{\varepsilon}+(d_{3}^{\varepsilon}-d_{2}^{\varepsilon})\Delta u_{3}^{\varepsilon}\) is bounded in \(L^{2}(0,T;(H^{1}(\Omega))^{\prime})\). Another application of the Aubin-Lions lemma gives that \(\{v^{\varepsilon}\}_{\varepsilon}\) is relatively compact in \(L^{2}(Q_{T})\). Due to Lemma 2.10 (a), \(\{v^{\varepsilon}\}_{\varepsilon}\) is relatively compact in \(L^{q_{0}+}(Q_{T})\).
We shall show the strong convergence of \(u_{2}^{\varepsilon}\) and \(u_{3}^{\varepsilon}\). We have
\[\left\|\frac{(k_{1}u_{1}^{\varepsilon}+l_{2}u_{4}^{\varepsilon})v^{ \varepsilon}}{k_{1}u_{1}^{\varepsilon}+l_{2}u_{4}^{\varepsilon}+k_{2}+l_{1}}- u_{3}^{\varepsilon}\right\|_{L^{2}(Q_{T})} =\left\|\frac{(k_{1}u_{1}^{\varepsilon}+l_{2}u_{4}^{\varepsilon} )u_{2}^{\varepsilon}-(k_{2}+l_{1})u_{3}^{\varepsilon}}{k_{1}u_{1}^{ \varepsilon}+l_{2}u_{4}^{\varepsilon}+k_{2}+l_{1}}\right\|_{L^{2}(Q_{T})}\] \[\leq\frac{1}{k_{2}+l_{1}}\|(k_{1}u_{1}^{\varepsilon}+l_{2}u_{4}^{ \varepsilon})u_{2}^{\varepsilon}-(k_{2}+l_{1})u_{3}^{\varepsilon}\|_{L^{2}(Q_ {T})}\to 0\]
thanks to Lemma 3.2. On the other hand, due to the convergence of \(u_{1}^{\varepsilon}\to u_{1},u_{4}^{\varepsilon}\to u_{4}\) in \(L^{\infty}(Q_{T})\), and of \(v^{\varepsilon}\to v\) in \(L^{q^{+}_{0}}(Q_{T})\), it follows that
\[\frac{(k_{1}u_{1}^{\varepsilon}+l_{2}u_{4}^{\varepsilon})v^{\varepsilon}}{k_ {1}u_{1}^{\varepsilon}+l_{2}u_{4}^{\varepsilon}+k_{2}+l_{1}}\to\frac{(k_{1}u_ {1}+l_{2}u_{4})v}{k_{1}u_{1}+l_{2}u_{4}+k_{2}+l_{1}}\quad\text{ in }\quad L^{2}(Q_{T}).\]
Therefore, \(u_{3}^{\varepsilon}\to u_{3}\) in \(L^{2}(Q_{T})\) and consequently
\[u_{3}^{\varepsilon}\to u_{3}=\frac{(k_{1}u_{1}+l_{2}u_{4})v}{k_{1}u_{1}+l_{2}u _{4}+k_{2}+l_{1}}\quad\text{ in }\quad L^{q^{+}_{0}}(Q_{T}).\]
The strong convergence of \(u_{2}^{\varepsilon}\to u_{2}\) in \(L^{q^{+}_{0}}(Q_{T})\) follows immediately.
It remains to show that \((u_{1},v,u_{4})\) is the unique bounded weak solution to (12), so that the convergence of \((u_{j}^{\varepsilon})_{j=1,\ldots,4}\) to \((u_{j})_{j=1,\ldots,4}\) holds as \(\varepsilon\to 0\) and not only along a subsequence. By rewriting
\[-k_{1}u_{1}^{\varepsilon}u_{2}^{\varepsilon}+l_{1}u_{3}^{\varepsilon} =(k_{1}u_{1}^{\varepsilon}+l_{1})\bigg{(}u_{3}^{\varepsilon}- \frac{(k_{1}u_{1}^{\varepsilon}+l_{2}u_{4}^{\varepsilon})v^{\varepsilon}}{k_ {1}u_{1}^{\varepsilon}+l_{2}u_{4}^{\varepsilon}+k_{2}+l_{1}}\bigg{)}-\frac{(k _{1}k_{2}u_{1}^{\varepsilon}-l_{1}l_{2}u_{4}^{\varepsilon})v^{\varepsilon}}{k _{1}u_{1}^{\varepsilon}+l_{2}u_{4}^{\varepsilon}+k_{2}+l_{1}}\] \[\qquad\longrightarrow-\frac{(k_{1}k_{2}u_{1}-l_{1}l_{2}u_{4})v} {k_{1}u_{1}+l_{2}u_{4}+k_{2}+l_{1}}\;\text{ in }L^{2}(Q_{T}).\]
Thus, for any test function \(\varphi\in L^{2}(0,T;H^{1}(\Omega))\), letting \(\varepsilon\to 0\) in
\[\iint_{Q_{T}}\varphi\partial_{t}u_{1}^{\varepsilon}+d_{1}^{\varepsilon} \iint_{Q_{T}}\nabla u_{1}^{\varepsilon}\cdot\nabla\varphi=\iint_{Q_{T}}(-k_{ 1}u_{1}^{\varepsilon}u_{2}^{\varepsilon}+l_{1}u_{3}^{\varepsilon})\varphi,\]
leads to
\[\iint_{Q_{T}}\varphi\partial_{t}u_{1}+d_{1}\iint_{Q_{T}}\nabla u_{1}\cdot \nabla\varphi=-\iint_{Q_{T}}\frac{(k_{1}k_{2}u_{1}-l_{1}l_{2}u_{4})v}{k_{1}u_ {1}+l_{2}u_{4}+k_{2}+l_{1}}\varphi.\]
Similarly,
\[\iint_{Q_{T}}\varphi\partial_{t}u_{4}+d_{4}\iint_{Q_{T}}\nabla u_{4}\cdot \nabla\varphi=\iint_{Q_{T}}\frac{(k_{1}k_{2}u_{1}-l_{1}l_{2}u_{4})v}{k_{1}u_ {1}+l_{2}u_{4}+k_{2}+l_{1}}\varphi.\]
For the equation of \(v^{\varepsilon}\) we let \(\varepsilon\to 0\) in
\[\iint_{Q_{T}}\varphi\partial_{t}v^{\varepsilon}+d_{2}^{\varepsilon}\iint_{Q_ {T}}\nabla v^{\varepsilon}\cdot\nabla\varphi=-(d_{3}^{\varepsilon}-d_{2}^{ \varepsilon})\iint_{Q_{T}}\nabla u_{3}^{\varepsilon}\cdot\nabla\varphi,\]
to conclude, thanks to the boundedness of \(\{\partial_{t}v^{\varepsilon}\}_{\varepsilon>0}\) in \(L^{2}(0,T;(H^{1}(\Omega))^{\prime})\), of \(\{\nabla v^{\varepsilon}\}_{\varepsilon>0}\) and \(\{\nabla u_{3}^{\varepsilon}\}_{\varepsilon>0}\) in \(L^{2}(Q_{T})\), and to (10) with \(d_{2}=d_{3}\),
\[\iint_{Q_{T}}\varphi\partial_{t}v+d_{2}\iint_{Q_{T}}\nabla v\cdot\nabla \varphi=0.\]
Therefore \((u_{1},v,u_{4})\) is a weak solution to (12) as desired.
### Proof of Theorem 1.3
**Lemma 3.6**.: _Assume (10) and (16) with \(p_{0}\) in (15). Then there exists \(\varepsilon_{0}>0\) such that_
\[\sup_{0<\varepsilon<\varepsilon_{0}}\sum_{j\in\{1,4\}}\big{(}\|\partial_{t}u_{ j}^{\varepsilon}\|_{L^{p_{2}}(Q_{T})}+\|\nabla u_{j}^{\varepsilon}\|_{L^{p_{3}}(Q_{T}) }\big{)}\leq C(T,\|u_{0}\|_{W^{2,p_{0}}(\Omega)\times L^{p_{0}}(\Omega)^{2} \times W^{2,p_{0}}(\Omega)}\big{)}\]
_where_
\[p_{2}=\begin{cases}\frac{(N+2)p_{0}}{2(N+2-p_{0})}&\text{if }p_{0}<\frac{N+2}{2}, \\ <p_{0}&\text{if }p_{0}=\frac{N+2}{2},\\ p_{0}&\text{if }p_{0}>\frac{N+2}{2}\end{cases}\quad\text{and}\quad p_{3}= \begin{cases}\frac{(N+2)p_{0}}{2(N+2)-3p_{0}}&\text{if }p_{0}<\frac{N+2}{2},\\ <\frac{(N+2)p_{0}}{N+2-p_{0}}&\text{if }p_{0}=\frac{N+2}{2},\\ \frac{(N+2)p_{0}}{N+2-p_{0}}&\text{if }\frac{N+2}{2}<p_{0}<N+2,\\ <\infty&\text{if }p_{0}\geq N+2.\end{cases} \tag{56}\]
Proof.: Thanks to Lemma 2.10 (b), there exists \(\varepsilon_{0}>0\) such that
\[\sup_{0<\varepsilon<\varepsilon_{0}}\big{(}\|u_{2}^{\varepsilon}\|_{L^{p_{0} }(Q_{T})}+\|u_{3}^{\varepsilon}\|_{L^{p_{0}}(Q_{T})}\big{)}\leq C\left(T,\|u_ {20}\|_{L^{p_{0}}(\Omega)},\|u_{30}\|_{L^{p_{0}}(\Omega)}\right).\]
From the equation of \(u_{1}^{\varepsilon}\), we have \(\partial_{t}u_{1}^{\varepsilon}-d_{1}^{\varepsilon}\Delta u_{1}^{\varepsilon} \leq l_{1}u_{3}^{\varepsilon}\), and therefore by the heat regularisation in Lemma 2.5 and comparison principle,
\[\|u_{1}^{\varepsilon}\|_{L^{p_{1}}(Q_{T})}\leq C(d_{1}^{\varepsilon},T,\|u_{10 }\|_{W^{2,p_{0}}(\Omega)})\leq C(T,\|u_{10}\|_{W^{2,p_{0}}(\Omega)})\]
where
\[p_{1}=\begin{cases}\frac{(N+2)p_{0}}{N+2-2p_{0}}&\text{if }p_{0}<(N+2)/2,\\ <\infty\text{ arbitrary}&\text{if }p_{0}=(N+2)/2,\\ \infty&\text{if }p_{0}>(N+2)/2.\end{cases} \tag{57}\]
It follows that \(\partial_{t}u_{1}^{\varepsilon}-d_{1}^{\varepsilon}\Delta u_{1}^{\varepsilon} =-k_{1}u_{1}^{\varepsilon}u_{2}^{\varepsilon}+l_{1}u_{3}^{\varepsilon}\in L ^{p_{2}}(Q_{T})\), where
\[p_{2}=\begin{cases}\frac{p_{0}p_{1}}{p_{0}+p_{1}}&\text{if }p_{1}<\infty,\\ p_{0}&\text{if }p_{1}=\infty,\end{cases} \tag{58}\]
which implies \(p_{2}\) in (56). Another application of the heat regularisation in Lemma 2.5 gives
\[\sup_{0<\varepsilon<\varepsilon_{0}}\big{(}\|\partial_{t}u_{1}^{\varepsilon} \|_{L^{p_{2}}(Q_{T})}+\|\nabla u_{1}^{\varepsilon}\|_{L^{p_{3}}(Q_{T})}\big{)} \leq C(T),\ \ \text{with}\ \ p_{3}=\begin{cases}\frac{(N+2)p_{2}}{N+2-p_{2}}&\text{if }p_{2}<N+2,\\ <\infty&\text{if }p_{2}=N+2,\\ \infty&\text{if }p_{2}>N+2.\end{cases} \tag{59}\]
Similarly, we have
\[\sup_{0<\varepsilon<\varepsilon_{0}}\big{(}\|\partial_{t}u_{4}^{\varepsilon} \|_{L^{p_{2}}(Q_{T})}+\|\nabla u_{4}^{\varepsilon}\|_{L^{p_{3}}(Q_{T})}\big{)} \leq C(T,\|u_{40}\|_{W^{2,p_{0}}(\Omega)}).\]
Finally, the expression for \(p_{3}\) in (56) is obtained by straightforward computations.
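For instance, in the case \(p_{0}<\frac{N+2}{2}\), combining \(p_{2}=\frac{(N+2)p_{0}}{2(N+2-p_{0})}\) with the formula \(p_{3}=\frac{(N+2)p_{2}}{N+2-p_{2}}\) from (59) gives
\[p_{3}=\frac{(N+2)\,\frac{(N+2)p_{0}}{2(N+2-p_{0})}}{N+2-\frac{(N+2)p_{0}}{2(N+2-p_{0})}}=\frac{(N+2)p_{0}}{2(N+2-p_{0})-p_{0}}=\frac{(N+2)p_{0}}{2(N+2)-3p_{0}},\]
which is the value stated in (56).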
**Lemma 3.7**.: _Assume (10), \(d_{2}\neq d_{3}\), and (16) with \(p_{0}\) satisfying (15). Then_
\[\begin{split}\sup_{0<\varepsilon<\varepsilon_{0}}\left(\|\nabla \hskip-1.0ptu_{3}^{\varepsilon}\|^{2}_{L^{2}(Q_{T})}+\frac{1}{\varepsilon} \iint_{Q_{T}}|(k_{1}u_{1}^{\varepsilon}+l_{2}u_{4}^{\varepsilon})u_{2}^{ \varepsilon}-(k_{2}+l_{1})u_{3}^{\varepsilon}|^{2}\right)\\ \leq C(T,\|u_{0}\|_{W^{2,p_{0}}(\Omega)\times L^{p_{0}}(\Omega)^ {2}\times W^{2,p_{0}}(\Omega)}).\end{split} \tag{60}\]
Proof.: We use Lemma 2.8 (a) with \(p=2\). Due to (16) and Lemma 3.6,
\[\|u_{2}^{\varepsilon}\|_{L^{p_{0}}(Q_{T})}+\|\partial_{t}A_{2}\|_{L^{p_{2}}(Q_ {T})}+\|\nabla A_{2}\|_{L^{p_{3}}(Q_{T})}\leq C(T,\|u_{0}\|_{W^{2,p_{0}}( \Omega)\times L^{p_{0}}(\Omega)^{2}\times W^{2,p_{0}}(\Omega)})\]
where \(p_{2}\) and \(p_{3}\) are as in (56). We show that
\[\frac{2}{p_{0}}+\frac{1}{p_{2}}\leq 1. \tag{61}\]
Note that, by (15), \(p_{0}>4\) for \(N\geq 3\). If \(p_{0}<\frac{N+2}{2}\) with some \(N>6\), we have \(p_{2}=\frac{(N+2)p_{0}}{2(N+2-p_{0})}\) as in (56), and therefore (61) is equivalent to \(p_{0}\geq\frac{4(N+2)}{N+4}\), which holds since \(p_{0}>4>\frac{4(N+2)}{N+4}\). In the case \(p_{0}\geq\frac{N+2}{2}\), we can take \(p_{2}=p_{0}^{-}\) and (61) is obvious since \(p_{0}>4\).
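Indeed, in the first case the stated equivalence follows by substituting \(p_{2}=\frac{(N+2)p_{0}}{2(N+2-p_{0})}\) into (61) and clearing denominators:
\[\frac{2}{p_{0}}+\frac{2(N+2-p_{0})}{(N+2)p_{0}}\leq 1\ \Longleftrightarrow\ 2(N+2)+2(N+2-p_{0})\leq(N+2)p_{0}\ \Longleftrightarrow\ p_{0}\geq\frac{4(N+2)}{N+4}.\]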
Due to (61), we can use Hölder's inequality to obtain
\[\iint_{Q_{T}}(u_{2}^{\varepsilon})^{2}\partial_{t}A_{2}\leq C(T)\|u_{2}^{ \varepsilon}\|^{2}_{L^{p_{0}}(Q_{T})}\|\partial_{t}A_{2}\|_{L^{p_{2}}(Q_{T})} \leq C(T). \tag{62}\]
For the second term on the right hand side of (34), we write for any \(\sigma\in[0,1)\)
\[\begin{split}\iint_{Q_{T}}(u_{2}^{\varepsilon})^{2}\frac{| \nabla A_{2}|^{2}}{A_{2}}&=\iint_{Q_{T}}(u_{2}^{\varepsilon})^{2 }\frac{|\nabla A_{2}|^{\frac{2}{1+\sigma}}}{A_{2}}|\nabla A_{2}|^{\frac{2 \sigma}{1+\sigma}}\\ &\leq\left(\iint_{Q_{T}}|u_{2}^{\varepsilon}|^{p_{0}}\right)^{ \frac{2}{p_{0}}}\left(\iint_{Q_{T}}\frac{|\nabla A_{2}|^{2}}{A_{2}^{1+\sigma} }\right)^{\frac{1}{1+\sigma}}\left(\iint_{Q_{T}}|\nabla A_{2}|^{\frac{2 \sigma}{1+\sigma}\cdot\beta}\right)^{\frac{1}{\beta}}\end{split} \tag{63}\]
where \(\beta>1\) satisfies
\[\frac{2}{p_{0}}+\frac{1}{1+\sigma}+\frac{1}{\beta}=1. \tag{64}\]
Since \(p_{0}>4\), such a \(\beta>1\) exists for \(\sigma\) close to \(1\). It remains to check that, with \(\beta\) being computed from (64),
\[\frac{2\sigma}{1+\sigma}\cdot\beta=\frac{2\sigma p_{0}}{p_{0}\sigma-2(1+\sigma) }\leq p_{3} \tag{65}\]
for some \(\sigma\in[0,1)\), where \(p_{3}\) is as in (56).
* If \(p_{0}<\frac{N+2}{2}\) with some \(N>6\), then by (56) \(p_{3}=\frac{(N+2)p_{0}}{2(N+2)-3p_{0}}\) and (65) becomes \[p_{0}\geq\frac{2(3\sigma+1)(N+2)}{\sigma(N+8)}.\] (66) In view of (15), \(p_{0}>\frac{8(N+2)}{N+8}\) for all \(N\geq 7\). Therefore, we can find a constant \(\sigma\in[0,1)\) sufficiently close to \(1\) such that \(p_{0}>\frac{2(3\sigma+1)(N+2)}{\sigma(N+8)}\), which implies (66).
* If \(\frac{N+2}{2}\leq p_{0}<N+2\) with some \(N\geq 3\), then by (56) we can take \(p_{3}=\left(\frac{(N+2)p_{0}}{N+2-p_{0}}\right)^{-}\). The condition (65) follows from \[p_{0}>\frac{2(2\sigma+1)(N+2)}{\sigma(N+4)}.\] (67) In view of (15), \(p_{0}>\frac{6(N+2)}{N+4}\), and therefore there exists \(\sigma\in[0,1)\) close to \(1\) such that \(p_{0}>\frac{2(2\sigma+1)(N+2)}{\sigma(N+4)}\), which is exactly (67).
* The case \(p_{0}\geq N+2\) is clear.
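For completeness, the equivalence between (65) and (66) in the first case follows by clearing denominators (for \(\sigma\) sufficiently close to \(1\) the left denominator is positive since \(p_{0}>4\), and the right one is positive since \(p_{0}<\frac{N+2}{2}\)):
\[\frac{2\sigma p_{0}}{\sigma p_{0}-2(1+\sigma)}\leq\frac{(N+2)p_{0}}{2(N+2)-3p_{0}}\ \Longleftrightarrow\ 2\sigma\big(2(N+2)-3p_{0}\big)\leq(N+2)\big(\sigma p_{0}-2(1+\sigma)\big)\ \Longleftrightarrow\ p_{0}\geq\frac{2(3\sigma+1)(N+2)}{\sigma(N+8)}.\]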
Thanks to (65), it follows from (63) that
\[\iint_{Q_{T}}(u_{2}^{\varepsilon})^{2}\frac{|\nabla A_{2}|^{2}}{A_{2}}\leq C(T)\]
which, in combination with (62), when inserted into Lemma 2.8 (a) with \(p=2\), gives the desired estimate (60). Here we note that the embedding \(W^{2,p_{0}}(\Omega)\hookrightarrow L^{\infty}(\Omega)\) holds since \(2p_{0}>N\) for all \(N\geq 1\), which allows us to apply Lemma 2.8 with \(u_{0}\in W^{2,p_{0}}(\Omega)\times L^{p_{0}}(\Omega)^{2}\times W^{2,p_{0}}(\Omega)\).
**Lemma 3.8**.: _Assume (10), \(d_{2}\neq d_{3}\), and (16) with \(p_{0}\) satisfying (15). Then_
\[\sup_{0<\varepsilon<\varepsilon_{0}}\|\nabla u_{2}^{\varepsilon}\|_{L^{2}(Q_ {T})}\leq C(T,\|u_{0}\|_{W^{2,p_{0}}(\Omega)\times L^{p_{0}}(\Omega)^{2}\times W ^{2,p_{0}}(\Omega)}).\]
Proof.: The proof is the same as that of Lemma 3.3.
We are now ready to prove Theorem 1.3.
Proof of Theorem 1.3.: From
\[\partial_{t}u_{1}^{\varepsilon}-d_{1}\Delta u_{1}^{\varepsilon}=-k_{1}u_{1}^{ \varepsilon}u_{2}^{\varepsilon}+l_{1}u_{3}^{\varepsilon}\]
where the right hand side is bounded in \(L^{p_{2}}(Q_{T})\) uniformly in \(\varepsilon>0\), with \(p_{2}\) as in (56), and \(u_{0}\in W^{2,q_{0}+}(\Omega)\times L^{q_{0}+}(\Omega)^{2}\times W^{2,q_{0}+}(\Omega)\), it follows that \(\{u_{1}^{\varepsilon}\}_{\varepsilon>0}\) is relatively compact in \(L^{p_{2}}(Q_{T})\). Therefore, up to a subsequence,
\[u_{1}^{\varepsilon}\to u_{1}\quad\text{a.e. in}\quad Q_{T}.\]
Thanks to the fact that \(\{u_{1}^{\varepsilon}\}\) is bounded in \(L^{p_{1}}(Q_{T})\) we get the strong convergence, again up to a subsequence,
\[u_{1}^{\varepsilon}\to u_{1}\quad\text{in}\quad L^{p}(Q_{T})\]
for any \(1\leq p<p_{1}\). The same argument gives \(u_{4}^{\varepsilon}\to u_{4}\) in \(L^{p}(Q_{T})\) for all \(1\leq p<p_{1}\). Due to the boundedness of \(\{\nabla u_{2}^{\varepsilon}\}\) and \(\{\nabla u_{3}^{\varepsilon}\}\) in \(L^{2}(Q_{T})\), we obtain from \(\partial_{t}v^{\varepsilon}=d_{2}^{\varepsilon}\Delta v^{\varepsilon}+(d_{3}^ {\varepsilon}-d_{2}^{\varepsilon})\Delta u_{3}^{\varepsilon}\) that \(\{v^{\varepsilon}\}_{\varepsilon}\) is relatively compact in \(L^{2}(Q_{T})\). This implies that, up to a subsequence,
\[\frac{(k_{1}u_{1}^{\varepsilon}+l_{2}u_{4}^{\varepsilon})v^{\varepsilon}}{k_{ 1}u_{1}^{\varepsilon}+l_{2}u_{4}^{\varepsilon}+k_{2}+l_{1}}\to\frac{(k_{1}u_{ 1}+l_{2}u_{4})v}{k_{1}u_{1}+l_{2}u_{4}+k_{2}+l_{1}}\quad\text{ in }\quad L^{2}(Q_{T}).\]
thanks to
\[\left|\frac{k_{1}u_{1}^{\varepsilon}+l_{2}u_{4}^{\varepsilon}}{k_{1}u_{1}^{ \varepsilon}+l_{2}u_{4}^{\varepsilon}+k_{2}+l_{1}}\right|\leq 1.\]
Therefore,
\[u_{3}^{\varepsilon}\to u_{3}=\frac{(k_{1}u_{1}+l_{2}u_{4})v}{k_{1}u_{1}+l_{2}u_{4}+ k_{2}+l_{1}}\quad\text{in}\quad L^{2}(Q_{T}).\]
Thanks to the above convergence of \(u_{j}^{\varepsilon}\), \(j=1,\ldots,4\) and \(v^{\varepsilon}\), it is easy to see that
\[-k_{1}u_{1}^{\varepsilon}u_{2}^{\varepsilon}+l_{1}u_{3}^{\varepsilon}= (k_{1}u_{1}^{\varepsilon}+l_{1})\bigg{(}u_{3}^{\varepsilon}- \frac{(k_{1}u_{1}^{\varepsilon}+l_{2}u_{4}^{\varepsilon})v^{\varepsilon}}{k_{1 }u_{1}^{\varepsilon}+l_{2}u_{4}^{\varepsilon}+k_{2}+l_{1}}\bigg{)}-\frac{(k_{1 }k_{2}u_{1}^{\varepsilon}-l_{1}l_{2}u_{4}^{\varepsilon})v^{\varepsilon}}{k_{1 }u_{1}^{\varepsilon}+l_{2}u_{4}^{\varepsilon}+k_{2}+l_{1}}\] \[\qquad\longrightarrow-\frac{(k_{1}k_{2}u_{1}-l_{1}l_{2}u_{4})v} {k_{1}u_{1}+l_{2}u_{4}+k_{2}+l_{1}}\quad\text{in $L^{2}(Q_{T})$}.\]
Therefore, passing to the limit in the weak formulation
\[\iint_{Q_{T}}\varphi\partial_{t}u_{1}^{\varepsilon}+d_{1}^{\varepsilon}\iint _{Q_{T}}\nabla u_{1}^{\varepsilon}\cdot\nabla\varphi=\iint_{Q_{T}}(-k_{1}u_{ 1}^{\varepsilon}u_{2}^{\varepsilon}+l_{1}u_{3}^{\varepsilon})\varphi\]
we get the weak formulation for \(u_{1}\) in Definition 2.3 (a). The equations for \(u_{4}^{\varepsilon}\) and \(v\) follow in the same way, with the remark that for the diffusion term of \(v\) it is sufficient to integrate by parts only once, since \(v\in L^{2}(0,T;H^{1}(\Omega))\). The proof of Theorem 1.3 is finished.
### Proof of Theorem 1.5
**Lemma 3.9**.: _Assume (10). \(\bullet\) Let \(N=1,2\). Then there exists \(\varepsilon_{0}>0\) such that_
\[\sup_{0<\varepsilon<\varepsilon_{0}}\sum_{j=1,4}\Big{(}\|\partial_{t}u_{j}^{ \varepsilon}\|_{L^{2+}(Q_{T})}+\|\Delta u_{j}^{\varepsilon}\|_{L^{2+}(Q_{T})} +\|u_{j}^{\varepsilon}\|_{L^{\infty}(Q_{T})}+\|\nabla u_{j}^{\varepsilon}\|_{ L^{r_{0}+}(Q_{T})}\Big{)}\leq C(T), \tag{68}\]
_where \(r_{0}=2(N+2)/N\) and the constant depends on \(\|u_{0}\|_{W^{2,2+}(\Omega)\times L^{2+}(\Omega)^{2}\times W^{2,2+}(\Omega)}\)._
\(\bullet\) _Let \(N\geq 3\). If \(d_{2},d_{3}\) satisfy (19) with \(p_{0}\) in (18) then there exists \(\varepsilon_{0}>0\) such that_
\[\sup_{0<\varepsilon<\varepsilon_{0}}\sum_{j=1,4}\Big{(}\|\partial _{t}u_{j}^{\varepsilon}\|_{L^{s_{2}+}(Q_{T})}+\|\Delta u_{j}^{\varepsilon}\|_{ L^{s_{2}+}(Q_{T})}\Big{)}\leq C(T),\quad s_{2}=\frac{3(N+2)}{2N+2}, \tag{69}\] \[\sup_{0<\varepsilon<\varepsilon_{0}}\sum_{j=1,4}\Big{(}\|u_{j}^{ \varepsilon}\|_{L^{s_{3}+}(Q_{T})}+\|\nabla u_{j}^{\varepsilon}\|_{L^{s_{4}+ }(Q_{T})}\Big{)}\leq C(T),\quad s_{3}=\frac{3(N+2)}{N-2},s_{4}=\frac{3(N+2)}{ N+1}, \tag{70}\]
_where the constant depends on \(\|u_{0}\|_{W^{2,p_{0}}(\Omega)\times L^{p_{0}}(\Omega)^{2}\times W^{2,p_{0}}( \Omega)}\)._
Proof.: For \(N=1,2\), it follows from Lemma 2.10 that \(\partial_{t}u_{1}^{\varepsilon}-d_{1}^{\varepsilon}\Delta u_{1}^{\varepsilon} \leq l_{1}u_{3}^{\varepsilon}\in L^{2+}(Q_{T})\), and
\[\sup_{0<\varepsilon<\varepsilon_{0}}\|u_{1}^{\varepsilon}\|_{L^{\infty}(Q_{T} )}\leq C(T,\|u_{0}\|_{W^{2,2+}(\Omega)\times L^{2+}(\Omega)^{2}\times W^{2,2+ }(\Omega)})\]
due to Lemma 2.5. Consequently, \(\partial_{t}u_{1}^{\varepsilon}-d_{1}^{\varepsilon}\Delta u_{1}^{\varepsilon} =-k_{1}u_{1}^{\varepsilon}u_{2}^{\varepsilon}+l_{1}u_{3}^{\varepsilon}\in L^{ 2+}(Q_{T})\), which leads to the desired estimate of \(\partial_{t}u_{1}^{\varepsilon}\), \(\Delta u_{1}^{\varepsilon}\), and \(\nabla u_{1}^{\varepsilon}\) thanks to another application of Lemma 2.5. The estimates for \(u_{4}^{\varepsilon}\) follow the same way.
The remaining estimates can be shown similarly to the proof of Lemma 2.11 by making use of the heat regularisation (see Lemma 2.5) and the comparison principle.
**Lemma 3.10**.: _Assume (10) with \(d_{2}\neq d_{3}\). Let \(N=1,2\), or \(N\geq 3\) and assume (19) with \(p_{0}\) as in (18). Then_
\[\sup_{0<\varepsilon<\varepsilon_{0}}\left(\frac{1}{\varepsilon^{\frac{1}{2(3- \delta)}}}\bigg{\|}u_{3}^{\varepsilon}-\frac{(k_{1}u_{1}^{\varepsilon}+l_{2}u_ {4}^{\varepsilon})v^{\varepsilon}}{k_{1}u_{1}^{\varepsilon}+l_{2}u_{4}^{ \varepsilon}+k_{2}+l_{1}}\bigg{\|}_{L^{r}(Q_{T})}\right)\leq C(T) \tag{71}\]
_for some \(\delta>0\) small enough, where \(r=4/3\) for \(N=1,2\) and \(r=6/5\) for \(N\geq 3\)._
Proof.: When \(N=1,2\), we write \(p=1+\delta\) for some \(0<\delta<1\). Then we have the following estimates
\[\left|\iint_{Q_{T}}(u_{2}^{\varepsilon})^{1+\delta}(A_{2}+1)^{\delta}\right| \leq\|A_{2}+1\|_{L^{\infty}(Q_{T})}\iint_{Q_{T}}|u_{2}^{\varepsilon}|^{1+ \delta}\leq C(T),\]
\[\left|\iint_{Q_{T}}(u_{2}^{\varepsilon})^{1+\delta}|\partial_{t}A_{2}|\right| \leq\left(\iint_{Q_{T}}|u_{2}^{\varepsilon}|^{2+\delta}\right)^{\frac{1+ \delta}{2+\delta}}\left(\iint_{Q_{T}}|\partial_{t}A_{2}|^{2+\delta}\right)^{ \frac{1}{2+\delta}}\leq C(T)\]
for \(\delta>0\) small enough thanks to Lemmas 2.10 and 3.9. Finally,
\[\left|\iint_{Q_{T}}(u_{2}^{\varepsilon})^{1+\delta}|\nabla A_{2}|^{2}\right| \leq\left(\iint_{Q_{T}}|u_{2}^{\varepsilon}|^{2+\delta}\right)^{\frac{1+ \delta}{2+\delta}}\left(\iint_{Q_{T}}|\nabla A_{2}|^{2(2+\delta)}\right)^{ \frac{1}{2+\delta}}\leq C(T)\]
for \(\delta>0\) small enough, where we used (68) taking into account \(r_{0}=6\) for \(N=1\) and \(r_{0}=4\) for \(N=2\). Note that the above constants depend on \(\|u_{0}\|_{W^{2,2+}(\Omega)\times L^{2+}(\Omega)^{2}\times W^{2,2+}(\Omega)}\). We can now apply Lemma 2.8 (b) with \(p=1+\delta\) to get in particular
\[\frac{1}{\varepsilon^{\frac{1}{3-\delta}}}\iint_{Q_{T}}\frac{\left|(A_{2}+ \varepsilon^{\frac{1}{3-\delta}})u_{2}^{\varepsilon}-A_{3}u_{3}^{\varepsilon }\right|^{2}}{\left((A_{2}+\varepsilon^{\frac{1}{3-\delta}})u_{2}^{ \varepsilon}+A_{3}u_{3}^{\varepsilon}\right)^{1-\delta}}\leq C(T,\|u_{0}\|_{L^ {\infty}(\Omega)\times L^{1+\delta}(\Omega)^{2}\times L^{\infty}(\Omega)}).\]
By virtue of Lemmas 2.10 and 3.9, we can apply Hölder's inequality to see that
\[\begin{split}\iint_{Q_{T}}\left|u_{3}^{\varepsilon}-\frac{(A_{2}+\varepsilon^{\frac{1}{3-\delta}})v^{\varepsilon}}{A_{2}+\varepsilon^{\frac{1}{3-\delta}}+A_{3}}\right|^{\frac{4}{3-\delta}}&\leq\frac{1}{A_{3}^{\frac{4}{3-\delta}}}\iint_{Q_{T}}\left|(A_{2}+\varepsilon^{\frac{1}{3-\delta}})u_{2}^{\varepsilon}-A_{3}u_{3}^{\varepsilon}\right|^{\frac{4}{3-\delta}}\\ &\leq C\bigg{(}\iint_{Q_{T}}\frac{\big{|}(A_{2}+\varepsilon^{\frac{1}{3-\delta}})u_{2}^{\varepsilon}-A_{3}u_{3}^{\varepsilon}\big{|}^{2}}{\big{(}(A_{2}+\varepsilon^{\frac{1}{3-\delta}})u_{2}^{\varepsilon}+A_{3}u_{3}^{\varepsilon}\big{)}^{1-\delta}}\bigg{)}^{\frac{2}{3-\delta}}\times\bigg{(}\iint_{Q_{T}}\big{|}(A_{2}+\varepsilon^{\frac{1}{3-\delta}})u_{2}^{\varepsilon}+A_{3}u_{3}^{\varepsilon}\big{|}^{2}\bigg{)}^{\frac{1-\delta}{3-\delta}}\\ &\leq C\varepsilon^{\frac{2}{(3-\delta)^{2}}}\Big{(}\|A_{2}+1\|_{L^{\infty}(Q_{T})}^{2}\|u_{2}^{\varepsilon}\|_{L^{2}(Q_{T})}^{2}+A_{3}^{2}\|u_{3}^{\varepsilon}\|_{L^{2}(Q_{T})}^{2}\Big{)}^{\frac{1-\delta}{3-\delta}}\\ &\leq C(T,\|u_{0}\|_{W^{2,2+}(\Omega)\times L^{2+}(\Omega)^{2}\times W^{2,2+}(\Omega)})\,\varepsilon^{\frac{2}{(3-\delta)^{2}}}\end{split}\]
which yields
\[\left\|u_{3}^{\varepsilon}-\frac{(A_{2}+\varepsilon^{\frac{1}{3-\delta}})v^{ \varepsilon}}{A_{2}+\varepsilon^{\frac{1}{3-\delta}}+A_{3}}\right\|_{L^{\frac {4}{3-\delta}}(Q_{T})}\leq C(T)\varepsilon^{\frac{1}{2(3-\delta)}}.\]
On the other hand, one can observe
\[\left|\frac{(k_{1}u_{1}^{\varepsilon}+l_{2}u_{4}^{\varepsilon})v^{\varepsilon} }{k_{1}u_{1}^{\varepsilon}+l_{2}u_{4}^{\varepsilon}+k_{2}+l_{1}}-\frac{(A_{2}+ \varepsilon^{\frac{1}{3-\delta}})v^{\varepsilon}}{A_{2}+\varepsilon^{\frac{1}{3 -\delta}}+A_{3}}\right|\leq\frac{1}{k_{2}+l_{1}}v^{\varepsilon}\,\varepsilon^{ \frac{1}{3-\delta}}. \tag{72}\]
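This observation is a direct computation: writing \(s:=\varepsilon^{\frac{1}{3-\delta}}\) and recalling \(A_{2}=k_{1}u_{1}^{\varepsilon}+l_{2}u_{4}^{\varepsilon}\), \(A_{3}=k_{2}+l_{1}\),
\[\left|\frac{(A_{2}+s)v^{\varepsilon}}{A_{2}+s+A_{3}}-\frac{A_{2}v^{\varepsilon}}{A_{2}+A_{3}}\right|=\frac{A_{3}\,s\,v^{\varepsilon}}{(A_{2}+s+A_{3})(A_{2}+A_{3})}\leq\frac{s\,v^{\varepsilon}}{A_{3}}=\frac{v^{\varepsilon}}{k_{2}+l_{1}}\,\varepsilon^{\frac{1}{3-\delta}}.\]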
Therefore, by the triangle inequality,
\[\left\|u_{3}^{\varepsilon}-\frac{(k_{1}u_{1}^{\varepsilon}+l_{2}u_{4 }^{\varepsilon})v^{\varepsilon}}{k_{1}u_{1}^{\varepsilon}+l_{2}u_{4}^{ \varepsilon}+k_{2}+l_{1}}\right\|_{L^{\frac{4}{3-\delta}}(Q_{T})} \tag{73}\] \[\leq\left\|u_{3}^{\varepsilon}-\frac{(A_{2}+\varepsilon^{\frac{1} {3-\delta}})v^{\varepsilon}}{A_{2}+\varepsilon^{\frac{1}{3-\delta}}+A_{3}} \right\|_{L^{\frac{4}{3-\delta}}(Q_{T})}+\frac{\varepsilon^{\frac{1}{3-\delta} }}{k_{2}+l_{1}}\|v^{\varepsilon}\|_{L^{\frac{4}{3-\delta}}(Q_{T})}\leq C(T) \varepsilon^{\frac{1}{2(3-\delta)}} \tag{74}\]
which finishes the proof of (71) in the case \(N=1,2\).
For \(N\geq 3\), thanks to (18) and (19) we have
\[\|u_{2}^{\varepsilon}\|_{L^{p_{0}}(Q_{T})}+\|u_{3}^{\varepsilon}\|_{L^{p_{0}} (Q_{T})}\leq C(T,\|u_{20}\|_{L^{p_{0}}(\Omega)},\|u_{30}\|_{L^{p_{0}}(\Omega)}). \tag{75}\]
By choosing \(\delta>0\) small enough such that
\[p_{0}>\frac{3\delta(N+2)+3(N+2)}{N+4}\quad\text{or equivalently}\quad\frac{p_{0}}{p_{0}-1-\delta}\leq\frac{3(N+2)}{2N+2}, \tag{76}\]
we can estimate using Hölder's inequality
\[\left|\int\!\!\!\!\!\int_{Q_{T}}(u_{2}^{\varepsilon})^{1+\delta}( A_{2}+1)^{\delta}\right|\leq \left(\int\!\!\!\!\!\int_{Q_{T}}(u_{2}^{\varepsilon})^{p_{0}} \right)^{\frac{1+\delta}{p_{0}}}\left(\int\!\!\!\!\!\int_{Q_{T}}(A_{2}+1)^{ \frac{\delta p_{0}}{p_{0}-1-\delta}}\right)^{\frac{p_{0}-1-\delta}{p_{0}}} \leq C(T), \tag{77}\] \[\left|\int\!\!\!\!\!\int_{Q_{T}}(u_{2}^{\varepsilon})^{1+\delta} \partial_{t}A_{2}\right|\leq \left(\int\!\!\!\!\!\int_{Q_{T}}(u_{2}^{\varepsilon})^{p_{0}} \right)^{\frac{1+\delta}{p_{0}}}\left(\int\!\!\!\!\!\int_{Q_{T}}|\partial_{t}A _{2}|^{\frac{p_{0}}{p_{0}-1-\delta}}\right)^{\frac{p_{0}-1-\delta}{p_{0}}} \leq C(T). \tag{78}\]
Similarly,
\[\left|\int\!\!\!\!\!\int_{Q_{T}}(u_{2}^{\varepsilon})^{1+\delta}|\nabla A_{2}| ^{2}\right|\leq\left(\int\!\!\!\!\int_{Q_{T}}(u_{2}^{\varepsilon})^{p_{0}} \right)^{\frac{1+\delta}{p_{0}}}\left(\int\!\!\!\!\int_{Q_{T}}|\nabla A_{2}|^{ \frac{2p_{0}}{p_{0}-1-\delta}}\right)^{\frac{p_{0}-1-\delta}{p_{0}}}\leq C(T) \tag{79}\]
by using (70), (76). We note \(C(T)=C(T,\|u_{0}\|_{W^{2,p_{0}}(\Omega)\times L^{p_{0}}(\Omega)^{2}\times W^{2,p_{0}}(\Omega)})\) in the above estimates. Therefore, inserting (77)-(79) into Lemma 2.8 (b) with \(p=1+\delta\), \(\delta>0\) small enough, leads to
\[\frac{1}{\varepsilon^{\frac{1}{3-\delta}}}\int\!\!\!\!\!\int_{Q_{T}}\frac{ \left|(A_{2}+\varepsilon^{\frac{1}{3-\delta}})u_{2}^{\varepsilon}-A_{3}u_{3}^{ \varepsilon}\right|^{2}}{\left((A_{2}+\varepsilon^{\frac{1}{3-\delta}})u_{2}^{ \varepsilon}+A_{3}u_{3}^{\varepsilon}\right)^{1-\delta}}\leq C(T,\|u_{0}\|_{W^ {2,q}(\Omega)\times L^{q}(\Omega)^{2}\times W^{2,q}(\Omega)})\]
where \(q=\max\{N,p_{0},(N+2)/2\}\). By applying Hölder's inequality,
\[\iint_{Q_{T}}\left|u_{3}^{\varepsilon}-\frac{(A_{2}+\varepsilon^{ \frac{1}{3-\delta}})v^{\varepsilon}}{A_{2}+\varepsilon^{\frac{1}{3-\delta}}+A_ {3}}\right|^{\frac{6(N+2)}{5N+8}} \leq C\iint_{Q_{T}}|(A_{2}+\varepsilon^{\frac{1}{3-\delta}})u_{2}^ {\varepsilon}-A_{3}u_{3}^{\varepsilon}\big{|}^{\frac{6(N+2)}{5N+8}}\] \[\leq C\bigg{(}\iint_{Q_{T}}\frac{\left|(A_{2}+\varepsilon^{\frac{1 }{3-\delta}})u_{2}^{\varepsilon}-A_{3}u_{3}^{\varepsilon}\right|^{2}}{\left|(A _{2}+\varepsilon^{\frac{1}{3-\delta}})u_{2}^{\varepsilon}+A_{3}u_{3}^{ \varepsilon}\right|^{1-\delta}}\bigg{)}^{\frac{3(N+2)}{5N+8}}\] \[\qquad\times\bigg{(}\iint_{Q_{T}}\left|(A_{2}+\varepsilon^{ \frac{1}{3-\delta}})u_{2}^{\varepsilon}+A_{3}u_{3}^{\varepsilon}\right|^{(1- \delta)\frac{3(N+2)}{2N+2}}\bigg{)}^{\frac{2N+2}{5N+8}}\] \[\leq C(T)\left(\varepsilon^{\frac{1}{3-\delta}}\right)^{\frac{3( N+2)}{5N+8}}\]
thanks to (75), (70), and
\[\begin{split}\left\|(A_{2}+\varepsilon^{\frac{1}{3-\delta}})u_{2}^{\varepsilon}+A_{3}u_{3}^{\varepsilon}\right\|_{L^{\frac{3(N+2)}{2N+2}}(Q_{T})}\\ \leq\|A_{2}+1\|_{L^{\frac{3(N+2)}{N-2}}(Q_{T})}\|u_{2}^{\varepsilon}\|_{L^{\frac{3(N+2)}{N+4}}(Q_{T})}+A_{3}\|u_{3}^{\varepsilon}\|_{L^{\frac{3(N+2)}{2N+2}}(Q_{T})}.\end{split} \tag{80}\]
Thus
\[\left\|u_{3}^{\varepsilon}-\frac{(A_{2}+\varepsilon^{\frac{1}{3-\delta}})v^{\varepsilon}}{A_{2}+\varepsilon^{\frac{1}{3-\delta}}+A_{3}}\right\|_{L^{\frac{6(N+2)}{5N+8}}(Q_{T})}\leq C(T)\varepsilon^{\frac{1}{2(3-\delta)}}.\]
Using (72) again we obtain finally
\[\begin{split}&\left\|u_{3}^{\varepsilon}-\frac{(k_{1}u_{1}^{\varepsilon}+l_{2}u_{4}^{\varepsilon})v^{\varepsilon}}{k_{1}u_{1}^{\varepsilon}+l_{2}u_{4}^{\varepsilon}+k_{2}+l_{1}}\right\|_{L^{\frac{6(N+2)}{5N+8}}(Q_{T})}\\ &\leq\left\|u_{3}^{\varepsilon}-\frac{(A_{2}+\varepsilon^{\frac{1}{3-\delta}})v^{\varepsilon}}{A_{2}+\varepsilon^{\frac{1}{3-\delta}}+A_{3}}\right\|_{L^{\frac{6(N+2)}{5N+8}}(Q_{T})}+\frac{\varepsilon^{\frac{1}{3-\delta}}}{k_{2}+l_{1}}\|v^{\varepsilon}\|_{L^{\frac{6(N+2)}{5N+8}}(Q_{T})}\leq C(T)\varepsilon^{\frac{1}{2(3-\delta)}}\end{split}\]
which proves (71) in the case \(N\geq 3\).
Proof of Theorem 1.5.: The convergence of the critical manifold was shown in Lemma 3.10. From the equation \(\partial_{t}u_{1}^{\varepsilon}-d_{1}^{\varepsilon}\Delta u_{1}^{\varepsilon}=-k_{1}u_{1}^{\varepsilon}u_{2}^{\varepsilon}+l_{1}u_{3}^{\varepsilon}\), in which the right hand side is bounded in \(L^{1}(Q_{T})\), we have, up to a subsequence, \(u_{1}^{\varepsilon}\to u_{1}\) in \(L^{1}(Q_{T})\). Combining this with the boundedness of \(\{u_{1}^{\varepsilon}\}_{\varepsilon>0}\) (see Lemma 3.9) yields
\[u_{1}^{\varepsilon}\to u_{1}\quad\text{ in }\quad\left\{\begin{array}{ll}L^{p}(Q_{T}),1\leq p <\infty&\text{if }N=1,2,\\ L^{\frac{3(N+2)}{N-2}}(Q_{T})&\text{if }N\geq 3.\end{array}\right.\]
Similarly,
\[u_{4}^{\varepsilon}\to u_{4}\quad\text{ in }\quad\left\{\begin{array}{ll}L^{p}(Q_{T}),1 \leq p<\infty&\text{if }N=1,2,\\ L^{\frac{3(N+2)}{N-2}}(Q_{T})&\text{if }N\geq 3.\end{array}\right. \tag{81}\]
From (38), it holds
\[u_{j}^{\varepsilon}\rightharpoonup u_{j}\quad\text{weakly in }\quad L^{2}(Q_{T}),\,j=2,3. \tag{82}\]
By using (38), (71), and the fact that \(\dfrac{k_{1}k_{2}u_{1}^{\varepsilon}-l_{1}l_{2}u_{4}^{\varepsilon}}{k_{1}u_{1}^{ \varepsilon}+l_{2}u_{4}^{\varepsilon}+k_{2}+l_{1}}\) is bounded uniformly in \(L^{\infty}(Q_{T})\), we obtain
\[-k_{1}u_{1}^{\varepsilon}u_{2}^{\varepsilon}+l_{1}u_{3}^{ \varepsilon}=(k_{1}u_{1}^{\varepsilon}+l_{1})\bigg{(}u_{3}^{\varepsilon}- \dfrac{(k_{1}u_{1}^{\varepsilon}+l_{2}u_{4}^{\varepsilon})v^{\varepsilon}}{k_ {1}u_{1}^{\varepsilon}+l_{2}u_{4}^{\varepsilon}+k_{2}+l_{1}}\bigg{)}-\dfrac{k_ {1}k_{2}u_{1}^{\varepsilon}-l_{1}l_{2}u_{4}^{\varepsilon}}{k_{1}u_{1}^{ \varepsilon}+l_{2}u_{4}^{\varepsilon}+k_{2}+l_{1}}v^{\varepsilon} \tag{83}\] \[\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad-\dfrac{ (k_{1}k_{2}u_{1}-l_{1}l_{2}u_{4})v}{k_{1}u_{1}+l_{2}u_{4}+k_{2}+l_{1}}\ \text{ weakly in }L^{2}(Q_{T}).\]
From (81), (82) and (83), it is enough to pass to the limit in the (very) weak formulation of the system involving \((u_{1}^{\varepsilon},v^{\varepsilon},u_{4}^{\varepsilon})\) to conclude that \((u_{1},v,u_{4})\) is a very weak solution to the reduced system (12) according to Definition 2.3 (b). This finishes the proof of Theorem 1.5.
### Proof of Theorem 1.6
Proof of Theorem 1.6.: Thanks to (38), we can use the equations of \(u_{1}^{\varepsilon}\) and \(u_{4}^{\varepsilon}\) to obtain that
\[u_{j}^{\varepsilon}\to u_{j}\ \text{ in }\ L^{2}(Q_{T}),\quad\nabla u_{j}^{ \varepsilon}\rightharpoonup\nabla u_{j}\text{ weakly in }L^{2}(Q_{T}),\quad j=1,4.\]
We have
\[\left|\iint_{Q_{T}}[(k_{1}u_{1}^{\varepsilon}+l_{2}u_{4}^{ \varepsilon})u_{2}^{\varepsilon}-(k_{2}+l_{1})u_{3}^{\varepsilon}]\,\psi\right| =\varepsilon\left|\iint_{Q_{T}}(\partial_{t}u_{2}^{\varepsilon}- d_{2}^{\varepsilon}\Delta u_{2}^{\varepsilon})\psi\right|\] \[=\varepsilon\left|\iint_{Q_{T}}(\partial_{t}\psi+d_{2}^{ \varepsilon}\Delta\psi)u_{2}^{\varepsilon}\right|\] \[\leq\varepsilon\|\partial_{t}\psi+d_{2}^{\varepsilon}\Delta\psi \|_{L^{2}(Q_{T})}\|u_{2}^{\varepsilon}\|_{L^{2}(Q_{T})}\] \[=O(\varepsilon)\]
for all \(\psi\in C_{c}^{\infty}(Q_{T})\). Noting that \((k_{1}u_{1}^{\varepsilon}+l_{2}u_{4}^{\varepsilon})u_{2}^{\varepsilon}-(k_{2}+l_{1})u_{3}^{\varepsilon}\) is bounded in \(L^{1+}(Q_{T})\) uniformly w.r.t. \(\varepsilon>0\), we obtain
\[(k_{1}u_{1}^{\varepsilon}+l_{2}u_{4}^{\varepsilon})u_{2}^{\varepsilon}-(k_{2 }+l_{1})u_{3}^{\varepsilon}\to 0\quad\text{ in distributional sense}. \tag{84}\]
We have \((k_{1}u_{1}^{\varepsilon}+l_{2}u_{4}^{\varepsilon}+k_{2}+l_{1})^{-1}\to(k_{1} u_{1}+l_{2}u_{4}+k_{2}+l_{1})^{-1}\) in \(L^{q}(Q_{T})\) for all \(1\leq q<+\infty\) since \((k_{1}u_{1}^{\varepsilon}+l_{2}u_{4}^{\varepsilon}+k_{2}+l_{1})^{-1}\) is uniformly bounded in \(L^{\infty}(Q_{T})\). Therefore
\[u_{3}^{\varepsilon}-\dfrac{(k_{1}u_{1}^{\varepsilon}+l_{2}u_{4}^{\varepsilon} )v^{\varepsilon}}{k_{1}u_{1}^{\varepsilon}+l_{2}u_{4}^{\varepsilon}+k_{2}+l_{1 }}=\dfrac{1}{k_{1}u_{1}^{\varepsilon}+l_{2}u_{4}^{\varepsilon}+k_{2}+l_{1}} \left[(k_{2}+l_{1})u_{3}^{\varepsilon}-(k_{1}u_{1}^{\varepsilon}+l_{2}u_{4}^{ \varepsilon})u_{2}^{\varepsilon}\right]\to 0 \tag{85}\]
in the distributional sense, which gives the convergence of the critical manifold in (21). By rewriting
\[-k_{1}u_{1}^{\varepsilon}u_{2}^{\varepsilon}+l_{1}u_{3}^{\varepsilon}=-\dfrac {k_{1}u_{1}^{\varepsilon}+l_{1}}{k_{1}u_{1}^{\varepsilon}+l_{2}u_{4}^{ \varepsilon}+k_{2}+l_{1}}\left[(k_{1}u_{1}^{\varepsilon}+l_{2}u_{4}^{ \varepsilon})u_{2}^{\varepsilon}-(k_{2}+l_{1})u_{3}^{\varepsilon}\right]- \dfrac{(k_{1}k_{2}u_{1}^{\varepsilon}-l_{1}l_{2}u_{4}^{\varepsilon})v^{ \varepsilon}}{k_{1}u_{1}^{\varepsilon}+l_{2}u_{4}^{\varepsilon}+k_{2}+l_{1}}\]
and using (84) we get
\[-k_{1}u_{1}^{\varepsilon}u_{2}^{\varepsilon}+l_{1}u_{3}^{\varepsilon}\to- \dfrac{(k_{1}k_{2}u_{1}-l_{1}l_{2}u_{4})v}{k_{1}u_{1}+l_{2}u_{4}+k_{2}+l_{1}} \quad\text{in distributional sense},\]
which is enough to pass to the limit in the equation of \(u_{1}^{\varepsilon}\). The same holds for the equation of \(u_{4}^{\varepsilon}\). From (85) we also obtain
\[u_{3}^{\varepsilon}\longrightarrow\frac{(k_{1}u_{1}+l_{2}u_{4})v}{k_{1}u_{1}+l_{2}u_{4}+k_{2}+l_{1}}\quad\text{weakly in}\quad L^{1+}(Q_{T}),\]
which allows us to pass to the limit \(\varepsilon\to 0\) in the equation of \(v^{\varepsilon}\) and ultimately conclude that \((u_{1},v,u_{4})\) is a very weak solution to (12). The proof of Theorem 1.6 is finished.
**Acknowledgement**.: This research is funded by the FWF project "Quasi-steady-state approximation for PDE", number I-5213.
|
2301.11908
|
A modular perspective to the jet suppression from a small to large
radius in very high transverse momentum jets
|
In this work, we extend the scope of the JETSCAPE framework to cover the jet
radius ($R$) dependence of the jet nuclear modification factor, ${R_{AA}}$, for
broader area jet cones, going all the way up to $R$ = 1.0. The primary focus of
this work has been the in-depth analysis of the high-${p_{T}}$ inclusive jets
and the quenching effects observed in the quark-gluon plasma formed in the
Pb-Pb collisions at ${\sqrt{\rm s_{NN}}}$= 5.02 TeV for the most-central
(0-10\%) collisions. The nuclear modification factor is calculated for
inclusive jets to compare with the experimental results collected at the ATLAS
and the CMS detectors in the jet transverse momentum (${p_{T}}$) ranging from
100 GeV up to 1 TeV. The results predicted by the JETSCAPE are consistent in
the high ${p_{T}}$ range as well as for extreme jet cone sizes, i.e. within
10-20\%. We also calculate the double ratio
(${R^{\mathrm{R}}_{\mathrm{AA}}/R^{\mathrm{R=small}}_{\mathrm{AA}}}$) as a
function of jet radius and jet-${p_{T}}$, where the observations are well
described by the JETSCAPE framework which is based on the hydrodynamic
multi-stage evolution of the parton shower. The calculations are then
replicated for different low-virtuality based evolution models like the MARTINI
and the AdS/CFT, which is followed by a rigorous comparison between the
predictions from the former model combinations to the measurements at the CMS
experiment.
|
Om Shahi, Vaishnavi Desai, Prabhakar Palni
|
2023-01-27T18:48:35Z
|
http://arxiv.org/abs/2301.11908v2
|
# A modular perspective to the jet suppression from a small to large radius in very high transverse momentum jets
###### Abstract
In this work, we extend the scope of the JETSCAPE framework to cover the jet radius (\(R\)) dependence of the jet nuclear modification factor, \(R_{AA}\), for broader area jet cones, going all the way up to \(R=1.0\). The primary focus of this work has been the in-depth analysis of the high-\(p_{T}\) inclusive jets and the quenching effects observed in the quark-gluon plasma formed in the Pb-Pb collisions at \(\sqrt{s_{\rm NN}}=5.02\) TeV for the most-central (0-10%) collisions. The nuclear modification factor is calculated for inclusive jets to compare with the experimental results collected at the ATLAS and the CMS detectors in the jet transverse momentum (\(p_{T}\)) ranging from 100 GeV up to 1 TeV. The results predicted by the JETSCAPE are consistent in the high \(p_{T}\) range as well as for extreme jet cone sizes, i.e. within 10-20%. We also calculate the double ratio (\(R_{\rm AA}^{\rm R}/R_{\rm AA}^{\rm R=small}\)) as a function of jet radius and jet-\(p_{T}\), where the observations are well described by the JETSCAPE framework which is based on the hydrodynamic multi-stage evolution of the parton shower. The calculations are then replicated for different low-virtuality based evolution models like the MARTINI and the AdS/CFT, which is followed by a rigorous comparison between the predictions from the former model combinations to the measurements at the CMS experiment.
## I Introduction
The extremely hot and dense conditions created at the start of the big bang gave rise to what we now understand as a soup of deconfined partons, the quark-gluon plasma (QGP) [1; 2; 3]. The QGP is of such great interest that, to study the properties of this state of matter, particle physicists have spent decades building up the equipment which could recreate this extremely dense soupy state. The Relativistic Heavy-Ion Collider (RHIC) [4; 5] and the Large Hadron Collider (LHC) [6; 7; 8; 9] conduct heavy-ion collisions where the QGP is created for very short instants of time, during which the parton shower propagation and modification are greatly influenced by the QGP medium. The high-\(p_{T}\) jets produced in these heavy ion collisions undergo strong yield suppression and medium modification which are together referred to as jet quenching phenomena [10; 11; 12]. Therefore, we study the jet modification in nucleus-nucleus collisions relative to proton-proton collisions to probe the properties of the QGP via constraints from model-to-data comparison [13; 14; 15; 16]. The measurement of the nuclear modification factor for jets as well as charged particles has revealed many important characteristics of the quark-gluon plasma [17; 18; 19; 20]. It provides strong confirmation of the interaction of partons with the deconfined plasma, the respective medium modifications, and the eventual hydrodynamization with the medium [8]. Since there are many conclusive studies based on jet-\(R_{AA}\) [21; 22; 23; 24; 25], our effort here is to push the limits of the current event generators and various energy loss modules for a better description of jet quenching phenomena at very high transverse momenta and broader jet cones with the multi-stage evolution of the parton shower using the Jet Energy-loss Tomography with a Statistically and Computationally Advanced Program Envelope (JETSCAPE) framework (version 3.5.1) [26].
Along with these measurements, we study inclusive jet spectra for p-p and Pb-Pb by varying resolution parameters in the anti-\(k_{T}\) algorithm which is realized in the FASTJET software package [27]. The inclusive jet spectrum is of significant interest because the sensitivity of hadronization effects is far less than the observables involving individual final-state hadrons. Here, the area of the reconstructed jet cone is defined by the jet radius \(R\). Thus, by varying \(R\), the reconstructed jet will include different proportions of energy from the medium response and the quenched jet. We also acknowledge the observations for a new sensitivity to QGP properties and underlying jet quenching mechanism in a study for jet yield suppression versus \(R\)[28]. Specifically, different dependences of the jet suppression on \(R\), are predicted by theoretical models based on anti-de Sitter/conformal field theory correspondence [29] and perturbative QCD [30].
We begin with calculating the jet \(R_{AA}\) for the Pb-Pb collisions in Section II and compare with the experimental data from the ATLAS as well as the CMS detector for a robust test of configuration and the overall JETSCAPE framework. The calculations include the (2+1)D MUSIC [31] model for hydrodynamics which is ideal to study many aspects of heavy ion collisions. Most notably, this work covers the high-\(p_{T}\) range of jets that is up to 1 TeV, which enables us to probe the QGP medium at much shorter distance scales.
We further exploit the advantages that the JETSCAPE framework offers in Section II.1, that is the coupling of several different energy loss modules such as MARTINI [32] and AdS/CFT [33] with the MATTER [34] module to explore the quenching effects in a multi-stage manner. This approach provides insight into the interaction and energy loss mechanism in the
medium with respect to the virtuality of the partonic jet. Comparing the experimental data with the trends from these different models which handle the low virtuality phase allows us to develop a lucid understanding of the physics governing the above models.
In Section II.2, we proceed with concrete results from the above combinations of several successful models towards the jet radius dependence studies. An important measurement done by the CMS collaboration recently was the jet-\(R_{AA}\) for Pb-Pb collisions at the LHC and recording the data for jet cone area covering the radii from 0.2 up to 1 [35]. This analysis gives us an intricate understanding of the behaviour and interaction strength of the high-\(p_{T}\) inclusive collimated jets in proximity to the jet axis, for \(R=0.2\) and the energy distribution around the cone for values of radii up to 1. The current JETSCAPE framework is based on calculations of perturbative QCD and evolution in a dense medium, thus allowing us to go up to a radius of order 1. We also report the ratios of \(R_{AA}\) for a given \(R\) with respect to \(R=0.2\) as a function of jet-\(p_{T}\) and jet radius \(R\). This study provides a vivid picture of the energy transactions with the QGP as the jet cone size increases and the trends that the JETSCAPE framework predicts.
We conclude this work in Section III, with a concise account of the current JETSCAPE framework's potential to explain the jet-\(R_{AA}\) for larger area jet cones. We also shed light on the ability of the distinct combination of modules to describe the jet-\(R_{AA}\).
### Simulation with multi-stage energy loss perspective in JETSCAPE
The JETSCAPE framework provides an ideal environment for carrying out multi-stage energy loss. The hard scattering is generated by Pythia 8 [36] with initial state radiation (ISR) and multiparton interaction (MPI) [37] turned on, and final state radiation (FSR) turned off. For the event-wise simulations, the TRENTo model [38] sets up the initial conditions and the viscous hydrodynamic evolution is described by the (2+1)D MUSIC [31] model, followed by the Cooper-Frye prescription [39; 40] which converts the fluid cells to hadrons on an isothermal hypersurface [41] at \(T_{\rm SW}=151\) MeV, where \(T_{\rm SW}\) is the temperature of the plasma below which particlization [42] occurs at a certain hypersurface. The jet energy loss induced by scattering is calculated in a succession of two stages: MATTER [34; 43] takes care of the highly virtual phase (the first stage), while the low virtuality phase (the second stage) is handled by the LBT model [44; 45; 46]. We have also employed the MARTINI [32; 47; 48] and the AdS/CFT model [33] in combination with the MATTER model to explore the low virtuality phase. The virtuality of the parton is defined as \(Q^{2}=p^{\mu}p_{\mu}-m^{2}\). The parton undergoes energy loss in two stages: when the virtuality of the parton satisfies \(Q^{2}>Q_{\rm SW}^{2}\), where \(Q_{\rm SW}\) is the switching virtuality, the MATTER model handles the energy loss, and the parton is transferred to the LBT model once \(Q^{2}\leq Q_{\rm SW}^{2}\). In these calculations, the jet medium interaction includes inelastic medium-induced gluon radiation and a medium recoil. An extensive account of the comparison of the model to the existing jet-\(R_{AA}\) is already available [49], showing that the contribution of medium recoil is quite significant in the modification of jet-\(R_{AA}\) for a complete set of centrality classes ranging from the most central to the peripheral collisions. This version of the JETSCAPE framework [26] encompasses the modifications of a hard thermal-loop (HTL) [50] for fixed coupling (\(\alpha_{\rm s}^{\rm fix}\)), running coupling (\(\alpha_{\rm s}\)), and with a virtuality dependent factor, \(f(Q^{2})\), that modulates the effective value of the jet transport coefficient (\(\hat{q}\)). It also accounts for the reduced medium-induced emission in the high virtuality phase, due to coherence effects. The HTL calculation, in the weak-coupling approximation and the high-temperature limit, yields a \(\hat{q}\) given as [45],
\[\hat{q}_{\rm HTL}=C_{a}\frac{42\zeta(3)}{\pi}\alpha_{\rm s}^{2}T^{3}\ln\left[ \frac{ET}{3\pi T^{2}\alpha_{\rm s}^{\rm fix}}\right]. \tag{1}\]
where \(C_{a}\) is the representation specific Casimir, \(E\) is the energy of the hard parton, and \(T\) is the local temperature of the medium. The coherence effects which reduce the interaction strength of the parton with the medium are also taken into consideration as the virtuality-dependent modulation factor, which regulates the effective value of \(\hat{q}\) in the high virtuality MATTER event generator. The parameterization of the virtuality-dependent modulation factor is given as [49]
\[\hat{q}\cdot f\equiv\hat{q}_{\rm HTL}^{\rm run}f(Q^{2}), \tag{2}\] \[f(Q^{2})=\left\{\begin{array}{ll}\frac{1+10\ln^{2}(Q_{\rm sw}^{2})+100\ln^{4}(Q_{\rm sw}^{2})}{1+10\ln^{2}(Q^{2})+100\ln^{4}(Q^{2})}&Q^{2}>Q_{\rm sw}^{2}\\ 1&Q^{2}\leq Q_{\rm sw}^{2}\end{array}\right. \tag{3}\]
Here, \(Q^{2}\) is the running virtuality of the hard parton. Finally, the partons undergo colorless hadronization according to the default Lund string fragmentation from Pythia 8 [36; 51]. The contributions to the final jets come from the hard jet shower part and from the soft medium response, the latter being calculated via the Cooper-Frye formula [39; 40]. We reconstruct jets for several radius selections using the anti-\(k_{T}\) algorithm as realized in FASTJET and compare with experimental data. The Underlying Events (UE) [52; 53] are removed by implementing a minimum track requirement of \(p_{\rm T}^{\rm track,min}>4\) GeV. All the parameters involved in the tuning of the constituent modules follow the standard set of tunes released by the JETSCAPE Collaboration in an elaborate recent study [49].
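To make the virtuality-dependent modulation of \(\hat{q}\) in Eqs. (1)-(3) concrete, a short Python sketch is given below. The Casimir factor, coupling values, temperature, and switching virtuality in the example are illustrative placeholders rather than the tuned JETSCAPE parameters; a fixed \(\alpha_{\rm s}\) is used in place of the running coupling, and all dimensionful quantities are taken in GeV units.

```python
import numpy as np

def qhat_htl(E, T, alpha_s, C_a=4.0 / 3.0, alpha_s_fix=0.3):
    """HTL-based transport coefficient of Eq. (1); E and T in GeV (placeholder couplings)."""
    zeta3 = 1.2020569031595943  # Riemann zeta(3)
    log_arg = E * T / (3.0 * np.pi * T**2 * alpha_s_fix)
    return C_a * (42.0 * zeta3 / np.pi) * alpha_s**2 * T**3 * np.log(log_arg)

def modulation(Q2, Q2_sw):
    """Virtuality-dependent factor f(Q^2) of Eq. (3)."""
    if Q2 <= Q2_sw:
        return 1.0
    num = 1.0 + 10.0 * np.log(Q2_sw)**2 + 100.0 * np.log(Q2_sw)**4
    den = 1.0 + 10.0 * np.log(Q2)**2 + 100.0 * np.log(Q2)**4
    return num / den

def qhat_effective(E, T, Q2, alpha_s=0.25, Q2_sw=4.0):
    """Effective qhat of Eq. (2): the HTL value scaled by f(Q^2); Q2_sw is a placeholder."""
    return qhat_htl(E, T, alpha_s) * modulation(Q2, Q2_sw)

# Example: a 100 GeV parton at T = 0.3 GeV, well above the (assumed) switching virtuality
print(qhat_effective(E=100.0, T=0.3, Q2=100.0))
```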
## II Results
This work covers the collision energy: \(\sqrt{s_{\rm NN}}=5.02\) TeV for the most central (0-10%) Pb-Pb collisions and
shows a comparison with selected experimental data available from the ATLAS and the CMS collaborations. The energy loss depicted in this work is achieved by the coupling of MATTER with the LBT module. The secondary LBT module (which handles the low virtuality phase) remains the same throughout unless otherwise specified. Fig. 1 (top panel) shows the \(p+p\) collision results for inclusive jet spectra at \(\sqrt{s}=5.02\) TeV for \(|y_{jet}|<2.8\), which follow the JETSCAPE PP-19 tune [54] and are compared to the experimental data from the ATLAS [24; 25]. The ratio of the inclusive jet cross-section from the JETSCAPE to the ATLAS data is shown in the bottom panel of Fig. 1, which shows that the results agree within an acceptable range \((\leq 20-30\%)\). We report the 0-10% most central Pb-Pb jet spectra for \(R\) = 0.4 in Fig. 2. The ratio of the differential cross-section for inclusive jets in Pb-Pb collisions using JETSCAPE to the ATLAS [25] data is shown in Fig. 2. Here, \(R_{\rm AA}\) is defined as
\[R_{\rm AA}=\frac{\frac{1}{N_{\rm evt}}\frac{d^{2}N_{\rm jet}}{dy_{jet}dp_{T}^{\rm jet}}\Big{|}_{\rm AA}}{\frac{d^{2}\sigma_{\rm jet}}{dy_{jet}dp_{T}^{\rm jet}}\Big{|}_{pp}}, \tag{4}\]
where \(\sigma_{jet}\) and \(N_{jet}\) are the inclusive jet cross-section in \(p+p\) collisions and the jet yield in Pb+Pb collisions, respectively, which are measured as a function of transverse momentum, \(p_{T}\), and rapidity, \(y\). Moreover, \(N_{evt}\) is the number of Pb+Pb collisions within a specific rapidity interval. The inclusive jet-\(R_{AA}\) is calculated as the ratio of the Pb-Pb and p-p spectra, which is shown in Fig. 3 in comparison to the ATLAS data. The JETSCAPE (MATTER + LBT) results agree with the ATLAS data quite nicely. However, we see an enhancement \((\leq 10-20\%)\) in the low \(p_{T}\) region below 180 GeV, which is followed by marginal suppression above 300 GeV, as we ascend in the \(p_{T}\) range.
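For concreteness, a minimal sketch of the bookkeeping behind Eq. (4), as written above, is shown below; the histogrammed yields and cross-sections are toy placeholder numbers, not the actual simulation or ATLAS data.

```python
import numpy as np

def jet_raa(n_jet_aa, n_evt_aa, dsigma_pp, bin_widths, dy):
    """Eq. (4): per-event Pb+Pb jet yield divided by the p+p differential cross-section.

    n_jet_aa   : jet counts per p_T bin in Pb+Pb (within the rapidity window)
    n_evt_aa   : number of Pb+Pb events in the centrality class
    dsigma_pp  : p+p differential cross-section d^2(sigma)/(dy dp_T) per bin
    bin_widths : p_T bin widths in GeV
    dy         : width of the rapidity window
    """
    yield_aa = n_jet_aa / (n_evt_aa * bin_widths * dy)   # d^2 N / (dy dp_T) per event
    return yield_aa / dsigma_pp

# Toy numbers only, to illustrate the bookkeeping
pt_bin_widths = np.array([100.0, 200.0, 100.0])
raa = jet_raa(np.array([5000.0, 800.0, 60.0]), 1e6,
              np.array([2e-5, 3e-6, 2e-7]), pt_bin_widths, dy=4.0)
print(raa)
```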
### Exploring the MARTINI and AdS/CFT models
Both the MARTINI [32] and the AdS/CFT [33] models are designed to handle the low virtuality phase and carry forward the energy loss once the parton is transferred from the MATTER. We replace the successful LBT model with the MARTINI, to carry out the simulations for the same conditions [49] and compare the jet-\(R_{AA}\) with the experimental data from the ATLAS [25] and the CMS [35] collaboration. Similarly, the calculations are done by replacing the LBT with AdS/CFT model and compared with the experimental data.
Figure 1: (Color online) Differential cross-section of inclusive jets for p+p collisions with cone size R=0.4, with a minimum track requirement of \(p_{T}^{\rm track}>4\) GeV. Bottom panel shows the JETSCAPE ratio to the ATLAS data.
Figure 2: Differential cross-section of inclusive jets in Pb+Pb collisions at \(\sqrt{s_{\rm NN}}=5.02\)TeV with cone size \(R\) = 0.4, with a minimum track requirement of \(p_{T}^{\rm track}>4\) GeV. Bottom panel shows the ratio of the JETSCAPE to the ATLAS data.
We further present an extensive comparison between the low virtuality parton evolution models, where the inclusive jet-\(R_{AA}\) is calculated for three model combinations: MATTER coupled with LBT (which is our default), MATTER coupled with MARTINI, and MATTER coupled with AdS/CFT. The simulations are compared with the experimental data from the CMS [35], for Pb-Pb collisions at \(\sqrt{s_{\rm NN}}=5.02\) TeV, for jet cone sizes up to order 1 and a \(p_{T}^{jet}\) range up to 1 TeV.
The measurements for the different models are shown in Fig. 4, which presents the jet-\(R_{AA}\) for \(|y_{jet}|<2\) and jet cone radius \(R=0.2\), in comparison with the experimental data from the CMS Collaboration [35]. Similarly, Fig. 4 shows the jet-\(R_{AA}\) measured for \(R\) = 0.3, \(R\) = 0.4, \(R\) = 0.6, \(R\) = 0.8, and \(R\) = 1.0 over a range of jet-\(p_{T}\) from 300 GeV up to 1 TeV. The predictions made by the JETSCAPE are consistent even as we advance to the larger area jet cones. The JETSCAPE predicts marginally more suppression from a small to intermediate to large radius for all the modular combinations discussed above. The results predicted by (MATTER + LBT) and (MATTER + MARTINI) are congruous with the experimental observations throughout the \(p_{T}\) range and across the different jet radii, while the (MATTER + AdS/CFT) model shows more suppression in the intermediate \(p_{T}\) region and significant enhancement in the high \(p_{T}\) region as compared to the other modular combinations. At very high transverse momentum, the virtuality of the dominant partons is very high and, due to the coherence effects, the interaction strength of the parton with the medium is significantly reduced. Since the radiative energy loss in AdS/CFT dominates over the elastic jet energy loss more strongly than in LBT and MARTINI, there is a considerable enhancement of the jet-\(R_{AA}\) at high \(p_{T}\) due to the minimized elastic jet energy loss. Also, the effect of the recoil partons in LBT on the total energy loss is not very significant. It is evident that there is an appreciable reduction in the net elastic and radiative jet energy loss when the jet cone includes the recoil partons.
With the above comparisons to both ATLAS and CMS experimental data (Fig. 1-4), the results from the JETSCAPE are substantially credible for further jet radius-dependent investigations using the current models, which are carried out in the next section.
### Jet radius (\(R\)) and jet-\(p_{\rm T}\) dependence of the \(R_{\rm AA}\)
In this section, we emphasize the significant contribution from the hydrodynamic medium response already highlighted in previous studies [55; 8], which is observed in the jet-\(R_{\rm AA}\) as a function of jet radius. Through radius-dependent studies, we develop a clear picture of the role played by radiation and collisions in energy loss.
A recent study by the CMS collaboration [35] presents rigorous comparisons of the experimental data for the jet-\(R_{\rm AA}\) with predictions from quenched jet event generators like HYDJET++ [56] and PYQUEN [57] and from theoretical models used to replicate relativistic heavy ion collisions. The article concludes that although most state-of-the-art models have progressed, significant uncertainty remains for the large-area jets.
The large jet radius also implies that the jet retains a significant proportion of the extensively distributed momentum and energy deposited in the plasma. Since the JETSCAPE framework has pioneered the realization of a multi-stage approach for modular-based energy loss, the motivation here is to challenge the JETSCAPE model that has so far met our expectations in describing the variety of data observed and to investigate the limitations of the current models.
This quest is executed by calculating the jet-\(R_{\rm AA}\) double ratio (\(R_{\rm AA}^{\rm R}/R_{\rm AA}^{\rm R=0.2}\)) as a function of jet radius, in comparison with the CMS data for the most central (0-10%) Pb-Pb collisions over a range of jet-\(p_{T}\) from 300 GeV up to 1 TeV. The plots are further sub-categorized into three jet-\(p_{T}\) intervals.
For \(300~{\rm GeV}\leq p_{T,jet}\leq 400~{\rm GeV}\), Fig. 5 (left panel) shows the predictions by the different combinations of energy loss models. We observe a consistent trend for all three models, i.e. (MATTER + LBT), (MATTER + MARTINI), and (MATTER + AdS/CFT). The results are within the uncertainty limits. The trends predicted by the three modular combinations, each with a different low-virtuality handling model, are very similar. This also indicates that the energy loss of the partons in the intermediate \(p_{\rm T}\) range throughout the various jet cone sizes is comparable among LBT, MARTINI, and AdS/CFT.
For \(~{}400~{}{\rm GeV}\leq p_{T,jet}\leq 500~{}{\rm GeV}\), in Fig. 5 (center panel), we observe that (MATTER + LBT) is in fine agreement with the data while both (MATTER + MART
Figure 3: (Color online) The jet-\(R_{AA}\) as a function of jet-\(p_{T}\) for inclusive jets in the most central (0-10%) Pb+Pb collisions at \(\sqrt{s_{\rm NN}}=5.02\) TeV for the jet cone radius \(R=0.4\).
INI) and (MATTER + AdS/CFT) tend to slightly over-predict the jet-\(R_{\rm AA}\) (\(\leq 15\%\)).
In the extremely high \(p_{T,jet}\) region, that is for \(500~{\rm GeV}\leq p_{T,jet}\leq 1~{\rm TeV}\), Fig. 5 (right panel) shows that all the models significantly over-predict the jet-\(R_{\rm AA}\), with the best description provided by the (MATTER + LBT) model (\(\leq 20\%\)). All the models show saturation around jet radius \(R=0.8\) and \(R=1.0\), which fits well with a realistic approach. Altogether, the JETSCAPE framework nicely describes the full evolution of the parton shower by adopting a virtuality-based multi-stage approach for energy loss.
We also measure the jet-\(R_{\rm AA}\) double ratio (\(R_{\rm AA}^{\rm R}/R_{\rm AA}^{\rm R=0.2}\)) as a function of jet-\(p_{T}\) in Fig. 6. We observe that, as the jet is scattered in the medium, the final state partons of the medium interacting with the jet are also counted as constituents of the jet. As more and more gluons fall into the larger jet cone, they are prevented from contributing to the jet energy loss. Therefore, energy lost by the jets is partially recovered as
Figure 4: The jet-\(R_{AA}\) as a function of jet-\(p_{T}\) for inclusive jets in the most central (0-10%) Pb+Pb collisions at \(\sqrt{s_{\rm NN}}=5.02~{}{\rm TeV}\) for the jet cone radius R = 0.2, 0.3, 0.4, 0.6, 0.8 and 1.0 with \(|y_{jet}|<2\) and a minimum track requirement of \(p_{\rm T}^{\rm track}>4~{}{\rm GeV}\). The plot shows the comparison between the models by red, magenta, and blue markers for (MATTER + LBT), (MATTER + MARTINI), and (MATTER + AdS/CFT), respectively. The predictions are in comparison with CMS data shown with a black marker.
Figure 5: The double ratio (\(R_{\rm AA}^{\rm R}/R_{\rm AA}^{\rm R=0.2}\)) as a function of jet radius for inclusive jets for (\(300~{}{\rm GeV}\leq p_{T,jet}\leq 400~{}{\rm GeV}\)) (left panel), (\(400~{}{\rm GeV}\leq p_{T,jet}\leq 500~{}{\rm GeV}\)) (centre panel), and (\(500~{}{\rm GeV}\leq p_{T,jet}\leq 1~{}{\rm TeV}\)) (right panel), in the most central(0-10%) Pb+Pb collisions at \(\sqrt{s_{\rm NN}}=5.02~{}{\rm TeV}\) for different jet radii with \(|y_{jet}|<2\) and a minimum track requirement of \(p_{T}^{\rm track}>4~{}{\rm GeV}\). The plot shows the comparison between the models by magenta, red and blue markers for (MATTER+LBT), (MATTER+MARTINI), and (MATTER+AdS/CFT), respectively. The predictions are in comparison with CMS data shown with a black marker.
the area of the jet cone increases. Here we observe a non-monotonous rise in the double ratio for jet-\(p_{T}\leq 200\) GeV, followed by a fall in the intermediate \(p_{T}\) to high \(p_{T}\) region.
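A short sketch of the double-ratio bookkeeping is given below, assuming the per-radius \(R_{\rm AA}(p_{T})\) values have already been obtained as in Eq. (4); the numbers are placeholders for illustration only.

```python
import numpy as np

# Hypothetical R_AA(p_T) curves for a few jet radii (placeholder values, three p_T bins each)
raa_by_radius = {
    0.2: np.array([0.55, 0.60, 0.66]),
    0.4: np.array([0.53, 0.59, 0.65]),
    1.0: np.array([0.50, 0.57, 0.64]),
}

def double_ratio(raa_by_radius, reference=0.2):
    """R_AA^R / R_AA^{R=reference}, evaluated bin by bin in jet p_T."""
    ref = raa_by_radius[reference]
    return {R: raa / ref for R, raa in raa_by_radius.items()}

for R, ratio in double_ratio(raa_by_radius).items():
    print(R, ratio)
```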
## III Conclusion
In this paper, we present the comparisons of the jet-\(R_{AA}\) predictions from the JETSCAPE framework incorporating the (2+1)D MUSIC model for viscous hydrodynamic evolution to the ATLAS data for Pb+Pb collisions at \(\sqrt{s_{\rm NN}}=5.02\) TeV in the high jet transverse momentum interval \(100~{\rm GeV}\leq p_{T,jet}\leq 1~{\rm TeV}\) for anti-\(k_{T}\) jets of radius R=0.4. These results put us in a strong position to conclude that the MUSIC model is adequate and successful in describing the experimental observations even at higher jet-\(p_{T}\) for the most central as well as mid-central collisions.
This work also elucidates the predictions made by low virtuality-based evolution models like MARTINI and AdS/CFT in a hydrodynamic medium generated by MUSIC. We observe overall similar trends to those obtained with other hydrodynamic models like (2+1)D VISHNU [58]. The MUSIC model is less sensitive to coherence effects, which are prevalent at high transverse momentum, compared to VISHNU. We observe more suppression in the intermediate and extreme \(p_{T}\) regions, which provides a better description of the observed data (\(\leq 15\%\)).
We advance the current JETSCAPE calculations and compare with the data of wider jet cones ranging from \(R=0.2\) to \(R=1.0\) recorded at the CMS detector for Pb+Pb collisions at \(\sqrt{s_{\rm NN}}=5.02\) TeV in the high jet transverse momentum interval \(300~{\rm GeV}\leq p_{T,jet}\leq 1~{\rm TeV}\) for anti-\(k_{T}\) jets of radii \(R=0.2\), \(0.3\), \(0.4\), \(0.6\), \(0.8\), and \(1.0\). Although the JETSCAPE framework is still being refined to describe the jet medium interactions at wide angles, this work highlights the current standing of the model to describe the energy loss and medium response phenomenon for broad area jet cones.
## IV Acknowledgements
We express our gratitude to the JETSCAPE Collaboration for making the state-of-the-art framework publicly available for extensive research use. The authors would like to especially acknowledge the members of the JETSCAPE Collaboration, Yasuki Tachibana and Chun Shen for their useful discussion and valuable feedback. The authors would also like to thank Goa University Param computing facility, SPAS local cluster facility, and seed money grant support.
|
2302.01093
|
Energy savings under performance constraints via carrier shutdown with
Bayesian learning
|
By shutting down frequency carriers, the power consumed by a base station can
be considerably reduced. However, this typically comes with traffic performance
degradation, as the congestion on the remaining active carriers is increased.
We leverage a hysteresis carrier shutdown policy that attempts to keep the
average traffic load on each sector within a certain min/max threshold pair. We
propose a closed-loop Bayesian method optimizing such thresholds on a sector
basis and aiming at minimizing the power consumed by the power amplifiers while
maintaining the probability that KPI's are acceptable above a certain value. We
tested our approach in a live customer 4G network. The power consumption at the
base station was reduced by 11% and the selected KPI's met the predefined
targets.
|
Lorenzo Maggi, Claudiu Mihailescu, Qike Cao, Alan Tetich, Saad Khan, Simo Aaltonen, Ryo Koblitz, Maunu Holma, Samuele Macchi, Maria Elena Ruggieri, Igor Korenev, Bjarne Klausen
|
2023-02-02T13:30:24Z
|
http://arxiv.org/abs/2302.01093v3
|
# Energy Savings under Performance Constraints via Carrier Shutdown with Bayesian Learning
###### Abstract
By shutting down frequency carriers, the power consumed by a base station can be considerably reduced. However, this typically comes with traffic performance degradation, as the congestion on the remaining active carriers is increased.
We leverage a hysteresis carrier shutdown policy that attempts to keep the average traffic load on each sector within a certain min/max threshold pair. We propose a closed-loop Bayesian method optimizing such thresholds on a sector basis and aiming at minimizing the power consumed by the power amplifiers while maintaining the probability that KPI's are acceptable above a certain value. We tested our approach in a live customer 4G network. The power consumption at the base station was reduced by \(11\%\) and the selected KPI's met the predefined targets.
Energy savings, sustainability, carrier shutdown, cell sleep, Bayesian learning
## I Introduction
As new mobile network generations are rolled out, the energy required to transmit over the air per unit of information (\(J/\mathrm{bit}\)) tends to decrease. This is mainly thanks to the increased energy efficiency of the hardware deployed at the base station, as well as to the design of better resource management algorithms. For instance, with respect to its predecessors, 5G better focuses transmitted energy towards users via analog beamforming, allows multiple transmissions to multiple users to occur at the same time via massive MIMO (Multiple-Input-Multiple-Output) spatial multiplexing, and reduces signaling overhead by lean carrier design [1]. However, such advances alone prove to be insufficient to curb energy consumption at the base station and keep up with the confluence of increased traffic volume, skyrocketing energy costs, and more stringent environmental regulations. Hence, the telecommunication industry is striving to find new ways to reduce the carbon footprint of its networks by using existing resources parsimoniously.
It is well known that power amplifiers (PA) are the main source (\(>65\%,\)[1]) of power consumption at radio frequency (RF) in a base station. Thus, a good practice for reducing consumption at the base station is to activate as few PA's as possible, while not (overly) degrading network performance.
Different resource management techniques leading to PA switch-off operate on different time scales and domains (frequency and/or antennas). One of such techniques is _symbol-level shutdown_ (also called _cell-DTX_ in LTE [2]) which deactivates BS hardware components in the absence of traffic and operates on a time scale of tenths of milliseconds. Its main advantage is its negligible impact on traffic performance; however, if the number of users is sufficiently high, the chance of observing periods with no traffic is small.
A second option to turn off hardware circuitry and reduce consumption is deactivating a certain number of antennas. By doing so, the rank of the transmission channel decreases as well as the number of available layers (i.e., the number of streams over which simultaneous communication can occur). This finally leads to a throughput decrease.
In this work we adopt a third option for energy savings, consisting in shutting down frequency carriers. This allows PA's to be switched off over longer time periods, in the order of tens of seconds to few minutes. Upon a carrier shutdown, user traffic and signaling transfer to the remaining active carriers(s). Thus, the load on the remaining carriers increases, and the traffic performance typically degrades (see Figure 3). It is known [3] that the power consumed by the PA can be well approximated by an affine function of the kind \(P_{\mathrm{RF}}(\ell)=a\ell+b\) of the resource utilization rate \(\ell\) for \(\ell>0\). However, \(P_{\mathrm{RF}}\) presents a discontinuity at \(\ell=0\) (\(P_{\mathrm{RF}}(0)=P_{\mathrm{sleep}}<b\), see Figure 1). Hence, the energy increment due to the load increase over active carriers is over-compensated by the PA switch-off, which eventually leads to energy savings. We remark that a single PA may be associated to different carriers, possibly across multiple technologies (e.g., 4G and NR). So, deactivating a carrier does not necessarily imply that the serving PA is also turned off.
**Our contribution**. We leverage a method that reduces energy consumption at the base station by shutting carriers down in a pre-defined order (e.g., in decreasing order of frequency). According to a hysteresis mechanism, the next carrier in line is switched off (on, respectively) if the load on the sector is lower (greater, resp.) than a certain min-threshold
Fig. 1: (Stylized) power consumption of power amplifier versus PRB utilization.
Fig. 2: Hysteresis carrier shutdown policy. The average load on the active carriers in the sector is compared against thresholds \(\rho_{\mathrm{min}}\) and \(\rho_{\mathrm{max}}\) to decide whether to shut down or reactivate a carrier, in a pre-determined order.
(max-threshold \(\rho_{\max}\), resp.). Thus, the load is maintained within the interval \([\rho_{\min};\rho_{\max}]\). By using an over-the-top architecture, we optimize such thresholds on a sector basis, with the aim of minimizing the energy consumed by the PA's while ensuring that certain KPI's meet pre-defined constraints with high confidence. We designed a parametric Bayesian algorithm converging to good threshold values in a handful of iterations and capable of adapting to varying channel conditions. We validated our method via a live customer 4G network trial during which we reduced the power consumption at the base station by 11\(\%\) while meeting the KPI constraints with the pre-defined confidence of \(89\%\).
### _Related works_
Carrier shutdown is mentioned as a promising technique for reducing the power consumption at the base station in several recent technological surveys such as [1, 4, 5] and industry white papers as [6, 7]. A similar approach allows the base station to adapt the bandwidth to the traffic needs via the concept of _bandwidth part_, without the need of powering off the whole carrier, as described in [8, 3]. The work in [9] proposes a method to switch off the entire base station (instead of just carriers) when the load on the base station is sufficiently low. In [10], the authors illustrate the challenges of base station deactivation, among which coverage loss is crucial. Finally, the work [11] investigates the impact of different level of hardware sleep state on network performance.
The sources above agree on the fact that carrier shutdown should not be performed at the expense of traffic performance over-degradation. Yet, to the best of our knowledge, we are the first to design an effective method achieving a satisfying (and configurable by the operator) trade-off between energy consumption and network performance via carrier shutdown.
## II Problem formulation
Let us consider a base station, where a set of frequency carriers \(\mathcal{C}\) is deployed to serve the mobile users in a specific sector. We assume that a subset of the carriers \(\mathcal{C}\) can be shut down at any time, and the corresponding attached users are redirected to the remaining active carriers, whose frequency/time resource utilization consequently increases. This typically leads to a degradation of traffic performance (see Figure 3) as measured by network Key Performance Indicators (KPI's). On the other hand, the power consumed by the radio units reduces: the increased consumption in the active carriers due to a higher resource utilization is typically over-compensated by the PA consumption reduction induced by carrier shutdown.
We now introduce some notation. We call \(\mathcal{A}_{t}\subset\mathcal{C}\) the set of active carriers at time \(t\). We assume that at least one carrier must be left active at any time, to ensure coverage; hence, \(|\mathcal{A}_{t}|\geq 1,\ \forall\,t\). We denote by \(w_{t}(\mathcal{A})\) the power consumed by the PA's serving carriers \(\mathcal{C}\) at time \(t\) when carriers \(\mathcal{A}\) are active. We assume that a list of \(K\) KPI's is constantly monitored on carriers \(\mathcal{A}^{\prime}_{t}\supset\mathcal{A}_{t}\) that include the active carriers \(\mathcal{A}_{t}\) in the sector, and possibly also carriers of neighboring cells that could be negatively impacted by our carrier shutdown policy.
We require that KPI's be jointly acceptable on each carrier with a desired likelihood \(\xi\). To this aim, we define a Boolean function \(f(\{\mathrm{KPI}^{i,c}_{t}\}_{i=1}^{K})\) that returns 1 if KPI's are acceptable and 0 otherwise, where \(\mathrm{KPI}^{i,c}_{t}\) is the \(i\)-th KPI measured at time \(t\) on carrier \(c\in\mathcal{A}^{\prime}_{t}\). E.g., the most natural way to define \(f\) is to set a minimum target level \(y\) for each KPI and to require that _each_ KPI for a given carrier exceeds its target value, i.e.,
\[f\left(\{\mathrm{KPI}^{i,c}_{t}\}_{i=1}^{K}\right):=\bigcap_{i=1}^{K}\left( \mathrm{KPI}^{i,c}_{t}\geq y^{i}\right).\]
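As an illustration, a minimal Python sketch of such an acceptability function is given below; the KPI names and target values are hypothetical placeholders, not the KPI's used in the trial of Section V.

```python
def kpis_acceptable(kpi_values, kpi_targets):
    """Boolean f of the KPI vector: 1 iff every monitored KPI meets its minimum target.

    kpi_values  : dict mapping KPI name -> measured value on one carrier at time t
    kpi_targets : dict mapping KPI name -> minimum acceptable value y^i
    """
    return int(all(kpi_values[name] >= target for name, target in kpi_targets.items()))

# Example with two hypothetical KPI's (downlink throughput in Mbps, handover success rate in %)
targets = {"dl_throughput_mbps": 5.0, "ho_success_rate": 98.0}
print(kpis_acceptable({"dl_throughput_mbps": 6.2, "ho_success_rate": 99.1}, targets))  # 1
print(kpis_acceptable({"dl_throughput_mbps": 4.1, "ho_success_rate": 99.5}, targets))  # 0
```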
Our goal is to determine, for a specific sector and at any time \(t\geq 0\), which carriers \(\mathcal{A}_{t}\) should be activated to minimize the long-run average power consumed by the PA's whilst ensuring that the selected KPI's are acceptable for at least a portion \(\xi\) of the time. More formally, our objective writes:
\[\min_{\mathcal{A}_{t}\subset\mathcal{C}}\lim_{T\rightarrow\infty} \frac{1}{T+1}\sum_{t=0}^{T}\mathbb{E}\left[w_{t}(\mathcal{A}_{t})\right] \tag{1}\] \[\mathrm{s.t.}\ \lim_{T\rightarrow\infty}\frac{1}{\sum_{t=0}^{T} \lvert\mathcal{A}^{\prime}_{t}\rvert}\sum_{t=0}^{T}\sum_{c\in\mathcal{A}^{ \prime}_{t}}\mathbb{E}\left[f\left(\{\mathrm{KPI}^{i,c}_{t}\}_{i=1}^{K} \right)\right]\geq\xi. \tag{2}\]
where the expectation is with respect to the traffic fluctuations.
Examples of KPI's that one may want to preserve upon carrier shutdown are statistics (e.g., mean or percentile across connected users) of integrity KPI's (e.g., downlink/uplink throughput and traffic volume), mobility KPI's (e.g., inter/intra-frequency handover success rate), accessibility KPI's (e.g., setup-success/drop-call rate), availability KPI's (e.g., cell availability), or a combination of those.
We finally observe that, as the carrier shutdown activity on a sector may affect the performance on neighboring sectors, one should ideally rewrite (1)-(2) as a joint optimization problem across different sectors. We justify our choice to decouple the carrier shutdown problem across different sectors by claiming that our impact on inter-cell mobility is limited, since we ensure that at least one carrier (typically, the lowest frequency) is always active in each sector, which ensures good coverage.
## III Solution architecture
We here describe the computing architecture of our energy savings via carrier shutdown method. In Section IV we will
Fig. 3: (Live network data) probability that downlink throughput exceeds 6 Mbps on LTE layer E (freq. 800 MHz, bandwidth 10 MHz) and layer T (freq. 1800 MHz, band 20 MHz) versus PRB utilization \([\%]\) and CQI.
delve into its algorithmic details.
**Base station: Carrier shutdown policy implementation.** In our solution, the logic handling carrier shutdown is implemented at the base station. We first describe the rationale behind it. Typically, Quality of Service (QoS) is negatively correlated with the Physical Resource Block (PRB) utilization rate (also simply denoted here as _load_) at the base station: the higher the load, the worse the QoS, as shown, e.g., in Figure 3. Thus, in order to prevent QoS degradation, one should cap the average load of the active carriers to a certain upper value. On the other hand, energy savings are achieved by shutting carriers down, which eventually leads to a load increase on active carriers; thus, the load should not be kept too low either.
For such reasons, we use a carrier shutdown policy of hysteresis type, that attempts to keep the average load on active carriers in a sector comprised within \([\rho_{\min};\rho_{\max}]\). When the load is lower than \(\rho_{\min}\), then a carrier is shut down; conversely, a carrier is reactivated when the load exceeds \(\rho_{\max}\).
Upon a carrier shutdown decision, the base station gradually reduces its downlink power on the carrier, which forces users to attach to a different carrier or base station.
In our solution, carriers are switched off in a pre-defined order (and back on, in the reverse order) called \(\mathcal{G}\). For instance, a reasonable design choice that preserves network coverage is to shut carriers down in decreasing order of frequency. Indeed, it is known [12] that as the carrier frequency increases, path loss also increases, hence coverage reduces.
The general procedure we used for carrier shutdown is described in Algorithm 1, where \(\mathcal{A}_{t}=\{c_{1},\dots,c_{a_{t}}\}\) is the set of active carriers in time period \([t-1,t)\).
We remark that the carrier shutdown policy described here considers thresholds \(\rho\) as input parameters. In Sections III and IV we will describe how to optimize such thresholds.
```
Input: Sector carriers \(\mathcal{C}=\{c_{i}\}_{i=1}^{|\mathcal{C}|}\), sorted in order \(\mathcal{G}\). Initial set of active carriers \(\mathcal{A}_{0}\). Parameters: Load thresholds \(\rho_{\min},\rho_{\max}\) (\(\rho_{\min}<\rho_{\max}\)).
for time instants \(t=0,1,\dots\) do
    Compute the average traffic load \(\ell_{t}\) on carriers \(\mathcal{A}_{t}\)
    if \((\ell_{t}<\rho_{\min})\wedge(a_{t}>1)\) then
        Shut down carrier \(a_{t}\); set \(a_{t+1}:=a_{t}-1\)
    else if \((\ell_{t}>\rho_{\max})\wedge(a_{t}<|\mathcal{C}|)\) then
        Switch on carrier \(a_{t}+1\); set \(a_{t+1}:=a_{t}+1\)
    else
        Set \(a_{t+1}:=a_{t}\)
```
**Algorithm 1** (Vanilla) carrier shutdown policy
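A minimal Python sketch of one step of this hysteresis policy follows; the load values in the toy run are placeholders, while the actual base-station implementation operates on live PRB-utilization counters.

```python
def carrier_shutdown_step(num_active, num_carriers, load, rho_min, rho_max):
    """One iteration of the hysteresis policy of Algorithm 1.

    num_active   : number a_t of currently active carriers (>= 1)
    num_carriers : total number |C| of carriers in the sector
    load         : average PRB utilization l_t over the active carriers, in [0, 1]
    Returns the number of active carriers for the next time period.
    """
    if load < rho_min and num_active > 1:
        return num_active - 1      # shut down the last active carrier in the order G
    if load > rho_max and num_active < num_carriers:
        return num_active + 1      # reactivate the next carrier in the order G
    return num_active

# Toy run: a load trace driving shutdowns and reactivations
active = 4
for load in [0.10, 0.08, 0.15, 0.55, 0.62, 0.30]:
    active = carrier_shutdown_step(active, 4, load, rho_min=0.12, rho_max=0.50)
    print(load, "->", active, "active carriers")
```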
**Over-the-top node: Data collection and threshold update.** To optimize the load thresholds \(\rho=[\rho_{\min},\rho_{\max}]\), defining the carrier shutdown policy implemented at the base station, we use the Over-the-Top (OTT) architecture illustrated in Figure 4. At time instants indexed by \(t=0,1,\dots\), an OTT computing node retrieves the latest value of the KPI's of interest across the network. Then, based on the KPI values, the OTT node is responsible for updating the load thresholds of each sector and pushing the new values to the base stations at appropriate times. Thus, the frequency of threshold update must be lower than or equal to the KPI collection frequency.
As opposed to embedding the solution at the base station, the OTT architecture offers a higher computational power and the ability of having a global view of the network. On the other hand, its bottleneck is represented by the amount of data that can be transferred from the base stations to the OTT node. To cater for this, KPI's are retrieved by the OTT node at (relatively) low frequency, e.g., every 15-60 minutes. This has a decisive impact on the design of our threshold update algorithm, having to deal with a data scarcity issue, as described in the next section.
## IV Load threshold tuning algorithm
In this section we describe the technical details of the Bayesian algorithm implemented in the OTT node that optimizes the load thresholds \(\rho\) for a specific sector.
**Search region.** The load thresholds \(\rho=[\rho_{\min},\rho_{\max}]\) can take on any value between \(0\%\) and \(100\%\), under the condition that \(\rho_{\min}<\rho_{\max}\). To simplify our problem,
we restrict our threshold search to a restricted region called \(\mathcal{R}\), which we define as a line segment along which both \(\rho_{\min}\) and \(\rho_{\max}\) are _monotonically non-decreasing_. As \(\mathcal{R}\) is one-dimensional, it can be conveniently mapped to a parameter \(x\in[0,1]\) such that, as \(x\) increases, the corresponding pair \(\rho^{x}=(\rho_{\min},\rho_{\max})\) is element-wise non-decreasing (Fig. 5). E.g., \(\mathcal{R}\) can be set to the straight segment between \(\rho=[0,0]\) and \(\rho=[a,b]\), where \(a<b\). In this case, the parameter value \(x\in[0;1]\) corresponds to the threshold pair \(\rho^{x}=[xa,xb]\). However, in this paper
Fig. 4: Solution implementation architecture
Fig. 5: We consider monotonic threshold search regions \(\mathcal{R}\), along which energy consumption reduces and KPI’s degrade.
we do not discuss how one should specifically design \(\mathcal{R}\).
**Problem reduction.** As the parameter \(x\) increases, the expected number of active carriers decreases, since a higher value of \(\rho_{\min}\) translates into a higher chance of carrier shutdown, while a higher \(\rho_{\max}\) leads to a lower chance of reactivation. Hence, as \(x\) increases, we can safely assume that the power consumption at the base station reduces _and_ that KPI's degrade; in other words, the expectation of the KPI function \(f\) decreases. It stems from such considerations that the problem (1)-(2) under hysteresis policy (Algorithm 1) and with load thresholds restricted to \(\mathcal{R}\) boils down to finding the value \(x^{*}\) whose KPI performance is the closest to the target \(\xi\):
\[x^{*}=\operatorname*{arg\,min}_{x\in[0;1]}\Bigg{|}\lim_{T\to\infty}\frac{1}{\sum_{t=0}^{T}|\mathcal{A}_{t}^{\prime}|}\sum_{t=0}^{T}\sum_{c\in\mathcal{A}_{t}^{\prime}}\mathbb{E}\left[f\left(\{\mathrm{KPI}_{t}^{i,c}(x)\}_{i=1}^{K}\right)\right]-\xi\Bigg{|} \tag{3}\]
where \(\mathrm{KPI}_{t}^{i,c}(x)\) is the \(i\)-th KPI value measured at time \(t\) in carrier \(c\) when the threshold pair \(\rho^{x}\) is under use, and \(t\) indexes the instants at which the OTT node collects KPI's. By convention, if there exist multiple solutions to (3), then \(x^{*}\) is the highest of them, since it minimizes consumption.
We remark that, if the original problem (1)-(2) is unfeasible, then (3) still produces a solution, being the closest one to the feasibility region and such that KPI's are the best possible.
**Closed-loop paradigm.** To solve (3) we adopt the following general procedure. At round \(k\), upon the selection of value \(x_{k}\) for a specific sector, the carrier shutdown Algorithm 1 is deployed at the base station with threshold pair \(\rho^{x_{k}}\). Then, after a certain time, the resulting KPI values are collected by the OTT node which converts them into binary values--denoted by \(\mathcal{D}_{k}\)--via function \(f\). Then, a value for \(x_{k+1}\) is selected for the next round and the same process is repeated.
**Vanilla Bayesian algorithm.** We describe our threshold tuning method via a step-by-step approach. We first illustrate the vanilla version of our algorithm under some simplifying assumptions, that we lift in the next paragraphs where the full-blown solution is finally presented.
We first assume that the binary values \(f(\{\mathrm{KPI}_{t}^{i,c}(x)\}_{i=1}^{K})\), measured across different carriers \(c\in\mathcal{A}_{t}^{\prime}\) and time instants \(t\) and obtained for a specific \(x\in[0;1]\), are generated according to an _i.i.d._ Bernoulli random process, where the probability of a sample being 1 is the _unknown_ value \(p(x)\). In this case, expression (3) can be further simplified as follows:
\[x^{*}=\operatorname*{arg\,min}_{x\in[0;1]}\big{|}p(x)-\xi\big{|}. \tag{4}\]
To solve (4), one could use the _stochastic approximation_ (SA) algorithm that at iteration \(k\) chooses a value \(x_{k}\), observes samples \(\mathcal{D}_{k}\) with mean \(m_{k}\), and updates \(x\) by a quantity proportional to the excess of \(m_{k}\) with respect to the confidence level \(\xi\), i.e., \(x_{k+1}=x_{k}+\epsilon_{k}(m_{k}-\xi)\), where \(\{\epsilon_{k}>0\}_{k}\) must satisfy certain convergence properties [13].
Although SA is widely used, its convergence properties are well understood, and it requires little computational effort, it is arguably _not_ a good fit for our problem. First, it typically converges within a few thousand iterations, which in our case would amount to a few weeks' time. In fact, one iteration is typically performed every few hours due to the OTT architecture limitations (Section III). Moreover, during the first iterations, SA would tend to explore widely across the region \(\mathcal{R}\) before approaching \(x^{*}\), which may cause severe KPI performance drops. This is clearly unacceptable in most live deployments. Moreover, SA cannot exploit prior information collected via historical data, which would help identifying reasonable threshold values from the start.
For such reasons, we turned our attention towards Bayesian approaches, able to deal with data scarcity and to naturally embed prior information extracted from historical data.
_Procedure._ We first parameterize the (unknown) function \(p(.)\) as \(p_{\theta}(.)\), where \(\theta\) are the parameters to be optimized. For instance, \(p_{\theta}\) can be defined as a bounded linear function:
\[p_{\theta}(x)=\min(\max(a-bx,0),1),\qquad\forall\,x\in[0;1] \tag{5}\]
where \(\theta=[a,b]\). Our main idea is to compute the most likely values of \(\theta\) given the observations and to select the next value of \(x\) accordingly. Suppose that at the beginning of iteration \(k\) we have a certain probabilistic _belief_ on \(\theta\), in the form of the probability density \(\Pr(\theta)\). Then, the likelihood of observing binary samples \(\mathcal{D}_{k}:=\{d_{1},\ldots,d_{J}\}\) given that threshold pair \(\rho^{x_{k}}\) is deployed and that the parameter value is \(\theta\), writes:
\[\Pr(\mathcal{D}_{k}|\theta)= p_{\theta}(x_{k})^{\sum_{i=1}^{J}d_{i}}\left(1-p_{\theta}(x_{k}) \right)^{J-\sum_{i=1}^{J}d_{i}}. \tag{6}\]
The _posterior_ belief on \(\theta\) is computed via the Bayes rule:
\[\Pr(\theta)\leftarrow\Pr(\theta|\mathcal{D}_{k})=\,\frac{\Pr(\mathcal{D}_{k}| \theta)\Pr(\theta)}{\Pr(\mathcal{D}_{k})},\quad\forall\,k \tag{7}\]
where \(\Pr(\mathcal{D}_{k}|\theta)\) is defined in (6). In the bounded linear case (5) where \(\theta\) is two dimensional, (9) can be computed directly via standard numerical techniques. Yet, if \(\theta\) has high dimensionality, then computing the denominator of (9) is intractable since it would require the solution of a complex multi-variable integral. In this case, advanced techniques such as Markov Chain Monte-Carlo [14] are needed.
Once the belief on \(\theta\) is updated, we determine the next value \(x_{k+1}\) as the one solving equation (4) where the true (unknown) value of \(p(x)\) is replaced by the expectation of its parametric version \(p_{\theta}\) with respect to the updated belief \(\Pr(\theta)\), i.e.,
\[x_{k+1}=\operatorname*{arg\,min}_{x\in[0;1]}\Big{|}\mathbb{E}_{\theta\sim\Pr( \theta)}\left[p_{\theta}(x)\right]-\xi\Big{|}. \tag{8}\]
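A grid-based Python sketch of this vanilla update, under the bounded linear parameterization (5), is given below; the grid ranges, the flat prior, and the sample values are arbitrary choices made purely for illustration.

```python
import numpy as np

# Discretize theta = [a, b] on a grid and x on [0, 1] (placeholder resolutions)
a_grid, b_grid = np.meshgrid(np.linspace(0, 1.5, 60), np.linspace(0, 3, 60), indexing="ij")
x_grid = np.linspace(0, 1, 101)

def p_theta(a, b, x):
    """Bounded linear parameterization of Eq. (5)."""
    return np.clip(a - b * x, 0.0, 1.0)

def bayes_update(prior, x_k, samples):
    """Posterior over theta after observing binary KPI samples at x_k, Eqs. (6)-(7)."""
    s, n = sum(samples), len(samples)
    p = p_theta(a_grid, b_grid, x_k)
    like = np.clip(p, 1e-9, 1 - 1e-9) ** s * np.clip(1 - p, 1e-9, 1 - 1e-9) ** (n - s)
    post = prior * like
    return post / post.sum()

def next_x(belief, xi):
    """Eq. (8): pick the x whose expected p_theta(x) is closest to the target xi."""
    expected = np.array([(belief * p_theta(a_grid, b_grid, x)).sum() for x in x_grid])
    candidates = np.where(np.abs(expected - xi) == np.abs(expected - xi).min())[0]
    return x_grid[candidates[-1]]   # break ties towards larger x (more savings)

belief = np.ones_like(a_grid) / a_grid.size          # flat prior over the grid
belief = bayes_update(belief, x_k=0.4, samples=[1, 1, 0, 1, 1, 1])
print(next_x(belief, xi=0.89))
```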
**Dealing with _long_ time-scale traffic variations.** In practice, observed (binary) samples are not _i.i.d._ but they rather follow a distribution that varies along with the traffic characteristics. For instance, as the inter-cell interference increases, the KPI's in the sector typically degrade (see Figure 3), which increases the probability of observing a sample equal to 0.
We can account for this in our Bayesian model by assuming that the parameter \(\theta:=\theta_{k}\) varies across iterations \(k\) according to a certain Markovian transition law \(\Pr(\theta_{k}|\theta_{k-1})\). In light of this, the Bayes update rule in (7) can be augmented as:
\[\Pr(\theta_{k}) \leftarrow\Pr(\theta_{k}|\mathcal{D}_{k})=\frac{\Pr(\mathcal{D}_ {k}|\theta_{k})\Pr(\theta_{k})}{\Pr(\mathcal{D}_{k})} \tag{9}\] \[=\frac{\Pr(\mathcal{D}_{k}|\theta_{k})\int_{\theta_{k-1}}\Pr( \theta_{k-1})\Pr(\theta_{k}|\theta_{k-1})d\theta_{k-1}}{\Pr(\mathcal{D}_{k})}\]
where the updated belief \(\Pr(\theta_{k})\) is written as the convolution between the former belief \(\Pr(\theta_{k-1})\) and the transition rule \(\Pr(\theta_{k}|\theta_{k-1})\). If the parameter \(\theta\) is static, then \(\Pr(\theta_{k}|\theta_{k-1})=\mathbf{I}(\theta_{k}=\theta_{k-1})\) and we recover the original update (7).
We remark that the transition rule \(\Pr(\theta_{k}|\theta_{k-1})\) is unknown but there exist techniques (e.g., [15]) to estimate it from data.
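Under a Gaussian random-walk transition (the choice made in the trial of Section V), the prediction step in (9) amounts to smoothing the previous belief; a sketch on the grid discretization introduced above is given below, with the smoothing width as an assumed tuning knob.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def propagate_belief(prev_belief, sigma_grid_cells=1.0):
    """Prediction step of Eq. (9): convolve the previous posterior over theta with a
    zero-mean Gaussian transition kernel (diagonal covariance), then renormalize."""
    predicted = gaussian_filter(prev_belief, sigma=sigma_grid_cells, mode="nearest")
    return predicted / predicted.sum()

# Usage: propagate first, then apply the likelihood of the newly observed KPI samples
# belief = bayes_update(propagate_belief(belief), x_k, samples)
```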
**Dealing with _short_ time-scale traffic variations.** The technique described is able to effectively track the changes in the distribution of \(\theta\) when they occur on a relatively slow time scale, in the order of a few iterations.
However, a single iteration may span several hours, during which traffic may follow typical peaks and troughs causing abrupt temporal changes to the distribution of \(\theta\) over temporal scales not accounted for in the above approach. To tackle this, a practical shortcut is to pre-emptively split the 24 hours of the day into \(N\) windows during which traffic conditions are typically stable, and run independent Bayesian update instances on each window. For a given window, thresholds can be updated on a daily basis. Therefore, window splitting caters for short-time scale traffic variations within a single day, while the transition law \(\Pr(\theta_{k}|\theta_{k-1})\) deals with long-term variations, across multiple days. Such \(N\) windows can be then defined, e.g., as those during which CQI is the most stable, i.e.,
\[\min_{N,\,h_{0}<h_{1}<\dots<h_{N-1}}\frac{1}{N}\sum_{i=0}^{N-1}\mathrm{Std}(\mathrm{CQI}[h_{i},h_{\mathrm{mod}(i+1,N)}]) \tag{10}\]
where \(\mathrm{Std}(\mathrm{CQI}[h_{i},h_{i+1}])\) is the empirical standard deviation of CQI values within the hours of the day \([h_{i},h_{i+1}]\), computed on historical data collected in the sector to be optimized.
As windows get shorter, the amount of KPI data collected at each iteration reduces, which bears a negative impact on the convergence properties of our Bayesian approach. Thus, it is important to ensure a minimum duration of a few hours for each window, that can be added as a constraint to (10).
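A brute-force Python sketch of the window search (10) is given below; since \(N\) is small, an exhaustive scan over hour boundaries is cheap, and the minimum-duration requirement mentioned above is added as a filter. The hourly CQI samples are synthetic placeholders.

```python
import itertools
import numpy as np

def window_cost(cqi_by_hour, boundaries):
    """Average CQI standard deviation over the (circular) windows defined by 'boundaries'."""
    n = len(boundaries)
    costs = []
    for i in range(n):
        start, end = boundaries[i], boundaries[(i + 1) % n]
        hours = np.arange(start, end) if start < end else np.r_[np.arange(start, 24), np.arange(0, end)]
        costs.append(np.std(np.concatenate([cqi_by_hour[h] for h in hours])))
    return np.mean(costs)

def split_day(cqi_by_hour, n_windows=2, min_hours=4):
    """Exhaustive search over hour boundaries minimizing Eq. (10), with a minimum window length."""
    best = None
    for boundaries in itertools.combinations(range(24), n_windows):
        lengths = [(boundaries[(i + 1) % n_windows] - boundaries[i]) % 24 for i in range(n_windows)]
        if min(lengths) < min_hours:
            continue
        cost = window_cost(cqi_by_hour, boundaries)
        if best is None or cost < best[0]:
            best = (cost, boundaries)
    return best

# cqi_by_hour: 24 arrays of historical CQI samples, one per hour of the day (synthetic data)
rng = np.random.default_rng(0)
cqi_by_hour = [rng.normal(9 if 8 <= h < 22 else 12, 1.0, size=50) for h in range(24)]
print(split_day(cqi_by_hour, n_windows=2))
```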
```
Input: Search region \(\mathcal{R}\)
Split the 24 hours into \(N\) windows via (10)
for window \(n=1,\dots,N\) do
    Collect historical data and initialize prior \(\Pr(\theta_{0})\)
    for day \(k=1,2,\dots\) do
        Compute \(x_{k}\) via (8)
        Deploy load thresholds \(\rho^{x_{k}}\) and collect KPI's
        Compute \(\Pr(\theta_{k})\) via (9)
```
**Algorithm 2** Load threshold tuning algorithm
**Prior initialization.** In order to accelerate the convergence speed of the Bayesian search and avoid a cold start, it is good practice to properly initialize the _prior_ belief \(\Pr(\theta_{0})\), _before_ the online exploration phase begins [16]. First, by construction of the search region \(\mathcal{R}\), we know that \(p_{\theta}(x)\) is a _non-increasing_ function of \(x\). Thus, we start by assigning a null probability to all values \(\theta\) for which the monotonicity condition is not verified. The prior belief on \(\theta\) can be also refined via historical data--obtained from live network deployments or from simulation--reporting the KPI's of interest obtained for different values of load thresholds within \(\mathcal{R}\). Then, the Bayes update (9) is performed for each of the historical threshold values, as if the algorithm "discovered" them in online fashion.
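A sketch of this warm start, on the grid discretization used in the earlier sketch, is given below; it masks out non-monotone parameter values and then replays historical (threshold, KPI-sample) pairs through the Bayes update function defined above, which is passed in as an argument.

```python
import numpy as np

def init_prior(a_grid, b_grid, history, bayes_update):
    """Build Pr(theta_0): keep only parameters for which p_theta(x) is non-increasing in x
    (for the bounded linear form (5) this simply means b >= 0), then replay historical
    (x, KPI-sample) pairs through the Bayes update as if they had been collected online."""
    prior = (b_grid >= 0).astype(float)   # null probability for non-monotone candidates
    prior /= prior.sum()
    for x_hist, samples_hist in history:  # e.g. [(0.0, [1, 1, 1, 0]), (0.3, [1, 0, 1, 1])]
        prior = bayes_update(prior, x_hist, samples_hist)
    return prior
```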
Our threshold tuning procedure is summarized in Algorithm 2.
## V Live network trials
We tested our solution for carrier shutdown in a proof of concept (PoC) on a live customer 4G network, over a cluster comprising 19 sites (and 57 sectors). Most of the sites had 4 frequency layers (800, 1800, 2100 and 2600 MHz). _Baseline_ measurements were taken over periods spanning a few weeks immediately before and after the PoC trial, during which all carriers were kept active. Note that this corresponds to the extreme case where \(\rho=[0,0]\). Two (\(N=2\)) windows were identified for each sector, one during daytime and the other during nighttime. The bounded linear parameterized function (5) was used. The prior \(\Pr(\theta_{0})\) was initialized by collecting 2 weeks of data during the baseline period. Multiple instances of the threshold tuning algorithm were running in an OTT server for a duration of 4 weeks, where each instance optimized thresholds for a specific sector and window. The search region included the origin \(\rho=[0,0]\), hence guaranteeing the possibility to replicate the baseline behavior if needed. The parameter transition rule \(\Pr(\theta_{k}|\theta_{k-1})\) was set to a Gaussian distribution with zero mean and diagonal covariance matrix, which allowed the algorithm to adapt to traffic variations by gradually "forgetting" past observations. Remarkably, each Bayesian update (9) could be computed in less than 1 second. We chose the IP downlink throughput in QCI 8 as the KPI to be preserved, with an associated target of \(y=5\) Mbps and confidence level \(\xi=89\%\). To preserve coverage, 800 and 1800 MHz carriers were always left active.
Fig. 6: Bayesian update of parameters \(\theta\). The bounded linear approximation (5) is used. The red and gray shaded regions denote confidence intervals for the value of \(p_{\theta}(x)\) with respect to the prior and posterior distribution of \(\theta\), respectively. Blue dots are the average of previous observations \(\mathcal{D}\).
Fig. 7: Two (\(N=2\)) windows are identified here via (10). Shaded blue region is the confidence interval for CQI distribution.
During our PoC, we could reduce the energy consumption at the base station by 11\(\%\) with respect to baseline, which is significant given that energy accounts for up to \(40\%\) of an operator's OPEX [17]. Overall, carriers were shut down for around 30\(\%\) of the time. We detected no significant impact on cell congestion, PDCP traffic volume, or number of active users, either on the cluster of optimized sites or on neighboring ones. Figure 8 shows that our main principle (3) for energy savings was satisfied. Indeed, in the sectors where the KPI was violating the constraint (i.e., the 11-th worst KPI value was lower than 5 Mbps) even in the baseline phase, no carriers were (rightly) ever shut down during the PoC. Conversely, for the sites where KPI's were above the target, carriers were put to sleep at a rate that let the KPI meet the constraint with approximate equality. For a few sectors, KPI's were still above target even if all carriers--among those eligible for shutdown--were sleeping all the time.
## VI Conclusions
By shutting carriers down, the power consumption at the base station can be significantly reduced. However, this comes at the cost of degrading the user quality of service. We designed a practical solution that minimizes the power consumption at the base station while guaranteeing that pre-selected KPI's are acceptable with high confidence. A carrier shutdown policy depending on some threshold parameters is implemented at the base station. An over-the-top node optimizes the thresholds via a data-efficient Bayesian procedure. During live network trials our method could reduce the power consumed by the base stations by 11\(\%\) while fulfilling the KPI constraints in each sector.
|
2310.01282
|
Grasping AI: experiential exercises for designers
|
Artificial intelligence (AI) and machine learning (ML) are increasingly
integrated into the functioning of physical and digital products, creating
unprecedented opportunities for interaction and functionality. However, there
is a challenge for designers to ideate within this creative landscape,
balancing the possibilities of technology with human interactional concerns. We
investigate techniques for exploring and reflecting on the interactional
affordances, the unique relational possibilities, and the wider social
implications of AI systems. We introduced into an interaction design course
(n=100) nine 'AI exercises' that draw on more than human design, responsible
AI, and speculative enactment to create experiential engagements around AI
interaction design. We find that exercises around metaphors and enactments make
questions of training and learning, privacy and consent, autonomy and agency
more tangible, and thereby help students be more reflective and responsible on
how to design with AI and its complex properties in both their design process
and outcomes.
|
Dave Murray-Rust, Maria Luce Lupetti, Iohanna Nicenboim, Wouter van der Hoog
|
2023-10-02T15:34:08Z
|
http://arxiv.org/abs/2310.01282v1
|
# Grasping AI: experiential exercises for designers
###### Abstract
Artificial intelligence (AI) and machine learning (ML) are increasingly integrated into the functioning of physical and digital products, creating unprecedented opportunities for interaction and functionality. However, there is a challenge for designers to ideate within this creative landscape, balancing the possibilities of technology with human interactional concerns. We investigate techniques for exploring and reflecting on the interactional affordances, the unique relational possibilities, and the wider social implications of AI systems. We introduced into an interaction design course (n=100) nine 'AI exercises' that draw on more than human design, responsible AI, and speculative enactment to create experiential engagements around AI interaction design. We find that exercises around metaphors and enactments make questions of training and learning, privacy and consent, autonomy and agency more tangible, and thereby help students be more reflective and responsible on how to design with AI and its complex properties in both their design process and outcomes.
Keywords: Artificial Intelligence design prototyping design education experiential methods AI exercises more-than-human design
## 1 Introduction
Designers increasingly need to develop a facility with artificial intelligence, as it becomes part of the way that products and services function and appears in an increasing number of the contexts in which designers work (Benjamin et al, 2021; Dove et al, 2017). However, there are several challenges for design students in engaging with AI, from the broadness of the term AI and the fuzziness with which it is applied (Littman, 2021), to the difficulty of getting to grips with the technical and computational complexities of these systems (Yang et al, 2020; Nicenboim et al, 2022). These challenges around understanding and making sense of the new capabilities of AI become urgent as the technology emerges from its latest winter into a new spring, developing at a fast pace (Littman, 2021; Samoli et al, 2020; Floridi, 2020).
The range of techniques for making creative use of AI has been rapidly growing: Runway offered easy access to generative spaces and now video (Runway, 2020); EdgeImpulse offers sound and gesture classification for microcontrollers with training through a web interface (EdgeImpulse, 2019); the current sets of generative image models such as DALL-E, Midjourney and StableDiffusion and language models (ChatGPT etc.) allow a natural language interaction through the use of prompts. Along with learning materials for more traditional toolkits (TensorFlow, 2015; OpenCV, 1999) and model development and exchange initiatives (e.g. HuggingFace, 2016) these form a downward pressure on the technical barrier to entry, even as the complexity of the underlying models increases. The conceptual barrier can remain high, though, reducing the possibility for designerly engagement and appropriation. There is a large jump from "my first ML model" to understanding the implications of ML technology, and designers often want - and need - to engage with these implications. Creating models in practice helps, but this needs conceptual framing to help direct and contextualise the activities - for example, in related fields, courses such as Creative Applications of Deep Learning (Mital, 2016) and the more provocative follow up Cultural Appropriation with Deep Learning (Mital, 2021) look at visual practice, or Machine Learning for Musicians and Artists (Fiebrink, 2022) unpack these systems for creative practitioners.
However, simply thinking about why it is hard for designers to appropriate AI technologies into their practices also misses a key question: what can design practice bring to the development and understanding of AI systems (Benjamin et al, 2021), especially as the technologies become more pervasive and more collaborative (Wang et al, 2020). Designers have their own strategies for making use of, critiquing and appropriating new technologies (Westerlund and Wetter-Edman, 2017), so there is an interest in understanding what designerly methods could reveal about human AI relations, particularly where it involves interactions between humans and technological systems - considering the "social, political, ethical, cultural, and environmental factors of implementing AI into daily human-to-computer interactions" (Wong, 2018). Design research methods, speculations (Auger, 2013; Kirman et al, 2022), fictioning (Forlano and Mathew, 2014; Wong et al, 2017; Troiano et al, 2021; Benjamin et al, 2023), probes and toolkits (Sanders and Stappers, 2014), more than human design (Coulton and Lindley, 2019) and the general practices of Research through Design (RtD) (Giaccardi, 2019; Stappers and Giaccardi, 2017), are all well suited to thinking into the socio-technical aspects (Holton and Boyd, 2021; Sartori and Theodorou, 2022; Theodorou and Dignum,
2020), possibilities, and implications of AI in everyday life, just as they have been applied to understanding digital sensing technologies (Pierce, 2021), blockchains (Murray-Rust et al, 2022), the future of automation (Cavalcante Siebert et al, 2022) and so on.
Our aim is to help students to design products and services that make use of AI technologies, while developing a critical understanding of its implications. This means articulating both the technical and relational aspects of AI so that they can meaningfully shape the development of products, services and systems even if they are not intimately familiar with the technical details of its operation. As such we are looking for ways to sensitize interaction designers to AI, to create experiences rather than explanations. In relation to the typology developed by Yang et al (2020) of ways to aid designers around AI, our work contributes to the early stages of 'creating AI-specific design processes' by probing with concrete exercises ways in which educators can support designers in ideating in a space mediated by the capabilities and implications of AI systems.
In order to explore this space, we created a set of methods for designing AI driven products and services (Section 3.2) that draw on theories about how people relate to technology and AI in particular. These methods take the form of short, autonomous, experiential exercises that can be used to develop and enrich the design of interactive technological products and services. We introduced these exercises partway through an interaction design course (Section 3.1), where students (n=100) in small groups (n=28) are asked to design future products and services through iterative prototyping and testing. We collected an immediate written reaction from each group as to what the students had done with the materials, and the aspects they found useful or resonant. We interviewed a self-selecting subset of the students (n=12) and their coaches (n=7) to dig deeper into questions of how the methods had changed their understandings and relations to AI.
To explore the potential of these methods, we investigate the following research questions:
* **RQ1:** How do the exercises stimulate and modulate changes to the students' design process to accommodate AI, in particular the way that they are conceptualising and prototyping their projects?
* **RQ2:** How do these experiential exercises affect students' grasp of AI and ML, in particular in relation to interactional, relational and contextual qualities which are key points in the recent theoretical developments in AI within HCI?
* **RQ3:** How do the exercises help to develop a critical design perspective while engaging with AI technology as a socio-technical system?
Through investigating and discussing these research questions, the contributions of this work are:
1. A set of exercises that translate theoretical developments in design and AI into experiential exercises for designers that can be carried out autonomously, with reflection on the experiential, pragmatic and reflective qualities that made the exercises effective. These exercises are available at [redacted] for future use and development.
2. Insights into how and for what to apply these exercises in a pedagogical context to support design processes for creating AI enabled products and services.
3. Insights about how these exercises affected students' reasoning and design activities, bringing agency, relationality and criticality alongside development of technical facility.
4. Methodological reflections around the possibilities afforded by the methods and how these contribute to nurturing a uniquely designerly AI culture that supports future design education.
## 2 Background
Working with AI presents particular challenges for designers. One of them is around engaging with emerging and complex technologies, with different behaviours from traditional design materials. Yang et al (2020) point at two key challenges: the uncertainty about the capability of AI systems, and the complexity of their outputs. The second challenge is around understanding AI, given that the metaphors and imaginaries around it obscure the real processes that are needed for maintaining such a technology (Murray-Rust et al, 2022). Contrary to competing terms like 'complex information processing systems' or 'machine intelligence,' the term AI fires the mind with ideas of human-like reasoning. While these imaginaries seem to be better for marketing, they are certainly no good for developing a grounded sense of the capabilities of AI as a technology (Hildebrandt, 2020).
### Designing AI
Despite the challenges for designers to engage with AI, there are currently many areas where design and AI touch on each other.
At a low level, there is growing attention to the meeting of AI and user experience (UX), as the new possibilities offered by the technology allow new kinds of interaction, and are susceptible to new pitfalls. Techniques are emerging that help to create user interfaces that work with AI systems (Subramonyam et al, 2021, 2021), or support innovating AI-powered services and systems within enterprises (Yildirim et al, 2022). This can be seen in Microsoft's guidelines for human-AI interaction (Amershi et al, 2019), or Google PAIR's Guidebook (PAIR, 2020), as well as efforts to bring HCI together with AI (Inkpen et al, 2019). Recently, the identification of 'AI capabilities' (Yildirim et al, 2023) provides a concrete way to think about design spaces for the interactional aspects of computational system. Negotiation between AI and HCI can be deep and subtle: interactional affordances help to calibrate trust and reliance between humans and AI; conceptual metaphors sculpt the relations formed with conversational agents (Jung et al, 2022; Khadpe et al, 2020); and appropriate abstractions make AI qualities at hand for creative practitioners (Fiebrink, 2019; Tremblay et al, 2021). User experience, in its broader sense, goes beyond designing the immediate experiences, with work starting to consider how to develop frameworks for creating more or less personal, dependent and discretionary interfaces (Kliman-Silver et al, 2020), or at how to generate heuristic models of meaningful engagement with AI artworks (Hemment et al, 2022).
Zooming out slightly, a collection of theoretical issues around AI relate to emerging fields in the third (and fourth) wave HCI and the philosophy of design
communities. Scholars in those areas have grappled with how the concepts used in design and HCI practices might be tied to the industrial era, and how they might have to change and adapt to the new kinds of products and materials which are enabled by AI. This includes lines of research such as post-industrial design, more than human practices (Giaccardi and Redstrom, 2020), entanglement thinking (Frauenberger, 2020; Murray-Rust et al, 2019; Hodder, 2016), fluid assemblages and multi-intentionality (Redstrom and Wiltse, 2018; Wiltse, 2020). All these offer vibrant pictures of a new set of relationships between humans and the material world in which both entities 'co-constitute' each other. Along with reorienting the relationships between humans and non-humans, scholars within those fields have been rethinking what it is to 'do design', breaking with traditions focused on the subject-object dichotomy (Giaccardi and Redstrom, 2020), where design goes beyond a mere problem solving enterprise and becomes an ongoing and more inclusive practice. Although these theoretical developments seem to be gathering momentum, they are still not fully translated into practical tools for designers - the jump from Barad's agential realism (Barad, 2007) to configurations of bits and programs takes careful work (Scurto et al, 2021; Seymour et al, 2022; Sanches et al, 2022).
At a broader scale, beyond the immediate interactions, some of the theories and practices at play are oriented towards engineering particular system qualities and properties: value sensitive design can help to make sense of fluid and evolving systems (de Reuver et al, 2020), where many different human values may be at play (Yurrita et al, 2022; Fish and Stark, 2021; Shen et al, 2021); questions of meaningful human control modulate the relations of responsibility between humans and automated systems (Cavalcante Siebert et al, 2022), as does responsible AI design (Benjamins et al, 2019). Here, design is an instrumental part of making systems behave in certain ways. AI ethics is a broad field (Hagendorff, 2020), and as well as directly affecting system properties, work from the Fairness Accountability and Transparency (FAccT) community looks to support documentation that helps maintain these properties in communities, such as documentation for models and datasets (Mitchell et al, 2019; Gebru et al, 2018), and the ethical aspects of system development (Mohammad, 2021; Murray-Rust and Tsiakas, 2022).
### AI and Design Education
The specificities and challenges of AI and ML technologies add up to an ongoing discourse at the intersection of design education and technological progress. Traditional formats and scopes for carrying out design are being questioned and revised, with canonical, linear, causal, and instrumental approaches being criticized in favor of novel models inspired by complexity theory, system science, and practical philosophy. This moves towards an aim of reconceptualizing design as a moral act (Findeli, 2001; Lin, 2014). Designers and design researchers, in fact, are increasingly recognized as actors whose decisions have ethical as well as political implications (Lloyd, 2019). Parallel to this, the societal implications of AI and ML are more clearly pervasive and unpredictable. However, when introduced in design education, these technologies are typically either approached as the ultimate tools to learn or used as 'context' for grounding alternative and critical design explorations. On the one hand, ML courses are increasingly offered to design students for promoting ML/AI literacy but remain an addition to the main curricula, rather than being integrated into project courses (as in Jonsson and Tholander (2022); van der Vlist et al (2008)). These technologies are still rarely integrated in design education (Dove et al, 2017), and often approached with the belief that even just exposing students to cutting edge technology can stimulate the emergence of innovative and technologically advanced design solutions to real problems (McCardle, 2002). In other cases, however, the approach is diametrically different: AI and ML are conceptualised as phenomena to be understood and questioned, because of their potential impact in society. For instance, Auger (2014) used the theoretical lens of domestication for challenging students to ideate future domestic robots and reflect on their implications in everyday settings. The two perspectives on AI and ML, the focus of which we summarize as technical competency vs critique, tend to remain distinct approaches with apparently opposite scopes. Even when there is an explicit commitment to bridge the two approaches, existing pedagogy struggles to combine the ambition to build AI literacy while also fostering a critical mindset around AI/ML projects, and reflections do not lead to rich critiques about situated and contextual implications of AI and ML unless they are integrated into project development. For some counterexamples, Jonsson and Tholander (2022) purposefully crafted a course for students to approach and appreciate AI tools as creative partners, and learned that AI qualities, such as uncertainty, imperfection, and under-determination, can be a rich source of inspiration for generating creative expressions as well as powerful triggers of reflection. Mital's Cultural Appropriation with Deep Learning course (Mital, 2021) weaves together learning about the operation of deep networks with recognising their role in society. Fiebrink's work (Fiebrink, 2019) distinctively looks at ML as a design material and situates it within project development. Perhaps most similar to the work outlined here is 'Graspable AI' (Ghajargar et al, 2021, 2022; Ghajargar and Bardzell, 2023), which brings together tangibility and AI, using explanation as a path to understanding and form as a language for communicating AI affordances. Even in these cases, however, the emphasis is on one side of the spectrum, that is on how to teach ML effectively to any population and enable the emergence of new creative outputs.
The disciplinary call for exposing the design questions involved in making AI and ML systems - as well as the complexity and trade-offs that implementing these in the world implies (Bilstrup et al, 2022) - remains largely unanswered. Our work sits at the intersections of these experiences and aims to fill the gap between technical efforts and critical explorations. Specifically, we set out to integrate AI and ML explorations within the development of design projects, in a way that both enable students to build AI literacy, as well as to empower them to take a critical stand towards these technologies in society.
### Summary and research direction
Part of the work of design as a discipline is to mediate between these philosophies and actionable practices that can be brought to bear on particular situations. That is the starting point for the work presented in this paper: we are interested in how to bring conceptual developments from design theory and AI into something that
is at hand to design students, that can make a difference to how they go about conceptualising and prototyping interactions.
In order to bridge the gap between the practical and technical engagement with AI, we propose that three levels of engagement between AI and design are all potentially at play within design projects creating AI powered systems:
**Interactional Affordances of AI**: that allow new means of interaction between systems and people. At a low level, AI brings new possibilities for sensing, responding, recognising and classifying from which to build interactions. These interactional affordances and possibilities for action (Stoffregen, 2003) offered by machine intelligence can take the form of capabilities offered by the technology (see Yildirim et al (2023) for a comprehensive overview), but also of modulations of existing capabilities with AI specific qualities such as probabilistic outcomes.
**AI Relationality**: as it is brought into constellations and forms new relations between people and things. Beyond the immediate interaction, design with AI intervenes conceptually and materially in constellations (Coulton and Lindley, 2019) of humans and objects. Designers must navigate the increased agency and depth of interaction that intelligent systems bring, and the changes in the way that we understand and relate to technological systems.
**Wider Implications of AI**: as it affects social structures and people's lives outside the immediate interactions. Concerns about the implications of systems are not new, but AI and data driven systems that are built through processing large amounts of data about people bring new and subtle ways in which they can be unfair or unjust, more blurring of responsibility, and more potential unintended consequences at scale.
For the work at hand, we are interested in how these levels relate to design education, in particular how students start to engage with AI as a design material. To create a broad coverage, we looked at creating methods that could create engagement with the specifics of working with AI systems on these three levels, as well as balancing the educational concerns of developing a better understanding of and facility with the technology, and encouraging critique. Based on this, we created a set of 'design exercises' - rapid, experiential engagements that draw on the theoretical developments above but can be carried out productively in the context of conceptual development and prototyping interactions with AI systems (Figure 1).
## 3 Study
### Course and context
The context of this study is a one-semester (20 week) design and prototyping course for first year Masters students in the 'Design for Interaction' programme at TU Delft. All students in the course (n=100) have a design background with a mixed range of computational skills, from no technical knowledge to beginner level in software engineering. The students were grouped by the course coordinators into 28 teams, and coached by 7 experienced coaches from the Industrial Design Engineering faculty.
The course is structured in three stages, with student-teams of 3 or 4 students working 13 hours per week on their design project (Figure 2). They worked on design briefs that asked them to speculate about near future interactions supported by technology. Many of these briefs were provided by client companies, for example new forms of human-vehicle interactions, possibilities for more sustainable cooking through smart kitchen appliances and pervasive computing in hotel rooms. The students had little to no pre-course familiarity with Machine Learning and AI methods, theories and tools, however most of the coaches had at least some experience with these technologies.
The main learning objective of the course is to introduce students to various ways of prototyping with interactive technology. Students were asked to design within the context of the client company, creating and testing a new iteration of their prototype each week in discussion with their coach. They were prompted to draw on some form of AI or ML, although technical capabilities could be acted out rather than implemented in code. The course ran in three phases: a "First Shot" familiarisation with AI and technology, "Iterating Forward" to develop concepts and "Polishing Up" the final ideas and prototypes (Figure 2), with an exhibition at the end of each stage. Client companies were invited at the end of each stage to provide feedback to the student teams, organised in the form of an exhibition with interactive prototypes.
### Exercises
The intervention involved a set of 9 exercises (Table 1). Each design exercise was introduced on a single page, containing a title, a short description and instructions on how to execute the exercise, a background section describing the intent, usefulness and ideas behind the exercise and references to papers and related projects (Figure 3, see supplementary material for full set). The choice of this set of exercises was exploratory: we derived them from a combination of existing design
Figure 1: Situation of the methods under discussion across two axes: i) the level of consideration, from direct interactional affordances, through building relationality out to wider implications and ii) the balance between developing facility with the technology and critiquing its uses.
practices, emerging work from the researchers and the theories mentioned above, through extensive discussion between the researchers. We aimed to have a spread of exercises across immediate interactional affordances of AI, mid-level human-machine relations and concerns about the wider implications of AI, as well as across developing fluency and supporting critique (Figure 1, description in Table 1, details in Supplementary Material). Some of the methods were pre-existing explorations, some had been used extensively, and some were adaptations of existing techniques to fit the AI context or the autonomous format. There was a strong focus on activities that could be performed relatively simply by students, that were experiential, and that would work across a range of topics and levels of technical accomplishment. Each exercise was intended for application to an existing project, i.e. not a brainstorming or early ideation tool, but a way to develop existing work.
### Execution and Data Collection
Towards the start of the third stage, after feedback from the second exhibition (Figure 2), a half-day workshop was organised in which all student teams were introduced to the 9 design exercises with the aim of refining their project. This timing was chosen primarily for educational reasons - the methods here were designed to help develop and refine existing ideas, rather than generate new ones, so we waited until the final stage of the course. This timing was the subject of some discussion - see Section 4.1.2
This half-day workshop allowed each group to execute one or two of the design exercises for their design project. Output of the half-day workshop was captured on A3 templates, including a questionnaire with some first prompts on the usefulness and effectiveness of the exercises applied. The half-day workshop was set up to be executed autonomously by the student teams, selecting the exercises themselves, with lecturers present to observe and assist when necessary. All output materials of the design exercises during the workshops were collected afterwards.
Figure 2: Course Structure; In the first stage (4 weeks) students were given context on AI and ML, and hands-on engagements with AI technology were provided through a series of workshops with existing tools (Edge Impulse, Teachable Machine and Voiceflow). At the end of the first stage, teams presented multiple ideas demonstrated in multiple early prototypes. The second stage introduced lectures covering AI capabilities, Human-Agent partnerships and the conceptual shifts mentioned above as the students developed their core concept, leading to a second exhibition of interactive prototypes. The third stage introduced the exercises discussed in this paper, as the students refined their projects towards a highly immersive final exhibition with one or more interactive prototypes.
Two weeks after the workshop we invited each team to select a representative to take part in a one-to-one semi-structured interview to discuss their experience and the effect it had on their project. In order to minimise educational disruption at a busy time and limit the possibility of coercion, we did not attempt to get full coverage, but allowed self-selection by the students, in return for a €20 contribution to the team's prototyping budget. This led to 12 out of 28 teams participating in the interview. We interviewed all coaches the week before the end of the course (\(n=7\)) to see what effect they had perceived on the students' work. Interview questions and structure can be found in the Supplementary Material.
### Analysis and evaluation
The interviews were audio recorded and both the interviews as well as the output materials from the design exercises were transcribed and analysed by a team of 7 researchers. We inductively coded both the written materials and interviews with students and interviews with coaches. We conducted collaborative thematic analysis: the coding team collectively familiarized themselves with the data and defined a shared coding scheme. At least two members of the team coded each of the transcribed materials using this scheme. Finally, coded materials were collectively discussed to synthesize insights into key themes, framed by the three levels of engagement with AI discussed earlier.
Figure 3: Example of the exercises, showing 1) title, 2) expected time, 3) suitable project types, 4) process, 5) custom illustration, 6) background, 7) references and example projects (full set of exercises in Supplementary Material)
## 4 Findings
Our findings are structured in two parts which build on both the A3 worksheets (\(n=28\)) and the student and coach interviews (\(n=12\), \(n=7\) respectively). While the analysis of the A3 sheets revealed recurring topics and common themes, the interviews revealed in-depth insights about what the students took from the exercises. The first part (Section 4.1) covers the execution of the methods: which ones were chosen and how they were perceived and valued by the students. The second part (Section 4.2) describes the links made to AI and machine learning at the interactional and relational levels as well as wider implications. In all cases, comments from student interviews are marked as [p(id)] and those from coaches as [C(id)]; extra context about the project that the quote relates to is given in square brackets. The students who participated in interviews were working on projects around: comfort and behavioural encouragement while driving, as well as behaviour modelling and matchmaking (for Ford); collection of data while surfing and intelligent ski clothes (for O'Neill); smart objects and energy manifestation
* **Uncertain Interactions**: Look through the state diagram of your interaction; for each change of state, imagine replacing it with probabilistic, uncertain or in-between outcomes (Bowler et al, 2022; Benjamin et al, 2021)
* **Be the ML**: From a live view of the data inputs to your system, try to perform the activity yourself, then explain what you're doing to someone else (Devendorf and Ryokai, 2015; Scurto et al, 2021)
* **Poor Datasets**: Iteratively remove examples from your dataset, decreasing diversity of the input and retraining the model until something problematic happens (Elwes, 2019; Buolamwini and Gebru, 2018)
* **Thing Ethno-Systems**: Collect data from the perspective of an object in your system; use it to make sense of the situation around the device; what does it experience that you didn't know? Who does it interact with? (Giaccardi et al, 2016; Murray-Rust et al, 2019)
* **Conversations with AI**: Choose one of the AI powered objects in your interaction. One team member plays the character of the object, and others carry out an interview with that object (Nicenboim et al, 2020; Reddy et al, 2021)
* **Metaphor Shifts**: Think about the metaphors used to describe your system, then think about designing purely for the metaphor; change metaphors and try again (Murray-Rust et al, 2022b; Lockton et al, 2019; Alves-Oliveira et al, 2021)
* **Roleplaying AI Networks**: Play an object, system or human role and collaboratively act out the interaction. Discover new actors, negotiations, relationships and interaction details (Pschetz et al, 2019; Reddy et al, 2020)
* **Resisting/Subverting AI**: Recognise and act out moments in the interaction where someone might subvert the interaction; look for design opportunities (Lupetti et al, 2020; DiSalvo, 2015)
* **Meaningful Human Control**: Brainstorm places where the interaction might go wrong; rather than trying to fix it, figure out who or what is responsible and how they might be supported in setting their moral boundaries (Siebert et al, 2022)

Table 1: Name, key references and description of each method. The references given here are for the theoretical context and inspiration for the methods, and do not match exactly those given on the exercise cards which prioritise developing student understanding.
in hotel rooms (for Citizen M Hotels); photography for reconstructive surgery (for Erasmus Medical Centre); and speculating on spirituality and life coaching with AI (for the DCODE project).
### Method Execution
To build context about the way the exercises were carried out, we give a quantitative summary of students' opinions of their project before the workshop, and their evaluation of the clarity of the exercises. We then look more qualitatively at two themes: students' sense of the relevance and overall evaluation of the methods, and an analysis of the ways in which they found the methods useful. Table 2 summarises the number of times each method was used and provides key quotes for their use in these four areas: concept development, detailing interactions, understanding AI and supporting reflection.
#### 4.1.1 Quantitative self-assessment and Clarity of the Methods
Analysing the worksheets, we looked into how many times each method was used as well as the perceived clarity of the instructions (Table 2). 20 of 28 groups carried out two exercises, with the remaining 8 carrying out only one. Counting responses to questions about their projects where a value greater than 0 was given (Figure 4), 16 groups (0.57) felt their project critically investigated technology; 12 (0.43) were solving real-world problems; 15 (0.54) made use of AI qualities; 22 (0.79) engaged with complex relationships and 16 (0.57) intended to consider the wider implications of their work.
All of the methods were rated as clear (Likert scale \(-3=\)"very unclear", \(3=\)"very clear", \(m=1.5,sd=1.0,m_{min}=1.0\)), with only a single instance being rated negatively. This indicates that students felt they understood the purpose and structure of each method.
#### 4.1.2 Relevance and situation in the course
Most groups chose exercises to address what they considered unexplored in their projects, or in some cases, even limitations of their concepts. For instance, Metaphor Shifts was picked to _"look for something else that better describes [their project]"
Figure 4: Student response (per group, n=28) to questions about their orientation and their project scope. Answers are on likert scales from ‘Not at all’ (-3) to ‘It’s the core of our project’ (+3).
Table 2: Clarity rating (scale -3 to 3), usage count, and representative quotes for each method across four areas: concept development, detailing interactions, understanding AI, and supporting reflection.
[P1] or _"finding a nice metaphor [to make] interaction with the AI more empathetic to the user"_[P15]. Roleplaying AI Networks offered the hope to be _"really precise and defined in the personality that we were gonna give the AI [that was deciding people's futures]"_[P27], and Uncertain Interactions was chosen to help _"map out all the responses and interactions that we were not considering before"_[P19] in a multi-object interaction. Other groups saw the exercises as a more general way to _"check if we were in the right direction"_[P1], _"plan ahead like the possible problems"_[P6], _"have a discussion point [...] instead of just everybody thinking in different directions"_[P15] or more radically _"just start again and we go somewhere else"_[P6]. Some methods were explicitly avoided as the groups felt they had experienced the methods before, e.g. _"role playing"_[P19], or because they didn't fully understand what a method entailed [P16]. Although the students reported that the methods were relatively clear, they suggested that the individual differences might have impacted to some level how students interpreted the exercises: _"we're all from different cultures. So we all interpreted some questions differently"_[P1].
There was a common response that this activity would have been more useful earlier in the course [P2,P5,P16,P19,P27,...] and it could have helped generating more prototype ideas [P23]. Part of this was due to the sense that the activities felt like _"an ideation - like an inspiration activity"_[P2]. As students saw the exercises as tools for divergent thinking, they would have liked to use them for ideation in close connection to the prototyping experimentations in the first period of the course [P5]. Others were concerned that the moment when they did the exercises was their time to _"optimize the prototype for the exhibition"_[P16] and wanted to spend all of their time in making. In contrast, feedback from coaches on the timing was more positive: _"To have sort of a zoomed out exercise at that point is I think a very powerful thing to do.[...] if you don't know where you're heading, then all these things, I don't think they will help you. [...] So I wouldn't move it. "_[CIa]. Several students echoed this perspective emphasising how it helped their process, e.g., _"because we were kind of stuck with our idea in general"_[P7], that it helped _"think of more details"_[P20] around a developed idea.
Overall, there was a positive attitude towards the activities, even from groups that were initially suspicious: _"we were quite surprised because we were thinking 'Ohh workshop again. [...] What's going [to come] out of it?' [...] And then in the end there was actually some things that really helped us."_[P1]. Some negative responses (4) revealed students' concerns about carrying out the exercise properly [P5], or spending too long on one interaction [P19]. Some (2) had a hard time finding usefulness in the experience [P15,P16], as they were already familiar with the methods, as _"wouldn't say it brought me a new understanding because the metaphor is something we [already] had"_[P15].
#### 4.1.3 Perceived Utility of the Exercises
Many students saw the exercises as a form of _"ideation, like an inspiration activity"_[P2], _" kind of a brainstorm"_[P7] that can help _"to get a better idea"_[P7]. Several groups noted that they came up with different and more interesting ideas [P5] and that they _"could use this new inspirations"_[P23]. The methods were seen as useful for sharpening projects and defining practical next steps, such as planning for when things went wrong, or checklists of common concerns. Metaphor Shifts
was particularly generative of new ideas [P5,P16]. Beyond this, the methods were seen to help in the following four areas:
Conceptual development: The methods supported articulation and _"helped us get our story right, like the overall purpose of the concepts"_[P2], to develop _"a better detailed new metaphor"_[P5] and to _"make [a] choice in what we wanted"_[P7]. The benefit of gaining more conceptual clarity was also mentioned by some coaches [CLu]. Uncertain Interactions was seen as useful for mapping out the edges of a concept, so that students could easily get into details and next steps [P6]. The methods also helped grounding ideas, asking whether the concept _"can also work on the AI or do we need some future technologies that are not there yet to make it real"_[P1]. In some specific instances, the exercise helped to _"start thinking about time"_[P15], or to _"find new ways of taking the same idea and spreading it"_[P5]. The exercises helped to get an overview of things that students should think about [P1], making implications concrete and graspable, in a way that is _"so in your face that you don't even think about the fact that it will be in the future"_[P7].
Refining Interactions: Many groups came out with a more refined idea about how their conceptual interaction should play out, as the exercise _"asks you to go into parts that maybe you don't want to explore"_[P6] and make projects more well rounded. The activities also helped students define interaction contexts better. Groups felt invited to _"draw [AI] already in the context"_[P15], and to _"think about the interaction with some of the objects in the [interactive hotel room] scenario"_[P19]. Refinements also pushed them to account for the potential meaningfulness of the projects, to _"clarify intentions"_[P20], and anticipate outcomes, e.g., _"what happens if the user doesn't understand what [the smart objects are] talking about"_[P19]. The experiential nature of the exercises helped to _"translate something abstract as "being challenged" or "supporting" [good behaviour while driving] to something actually tangible"_[P2], to think into the _"aesthetic experience"_[P15] of AI where the _"metaphor [of ritual cleansing for data collection] helped to think about materials as well"_[P15].
Reflection: The workshops were seen as a moment of reflection, a break from the _"many layers in such a project"_[P25] to focus on particular aspects. This could be on a technical level for the groups who _"never really took the time to think about AI"_[P2] or more interactional when they _"stopped to think about this character sort of thing"_[P27]. Students spoke of developing a _"critical lens, in terms of moral responsibility"_[P25] and of _"seeing how important this is, to acknowledge the mistakes, to be trustworthy"_[P6]. Beyond the initial designerly sense of responsibility, they engaged with broader factors contributing to _"moral responsibility for an AI system [that encouraged spirituality]"_[P25]. Overall, the moment for reflection was seen positively, developing aspects of their work that were not thought through, and a sense that _"confidence comes once you [...] manage the critical points"_[P6] of the interactions.
Understanding AI: The workshop improved the confidence of students about working with AI, as _"before the course it was just like 'I don't know how to use an algorithm to do something cool' [...] and this makes it kind of [makes] everything
just specific in one workshop"_[P2]. This was often not based on a deeper technical understanding of algorithmic operation, but on thinking about how the AI would relate to things around it. Some groups ended up _"actually using more AI because of this [workshop]"_[P7], with confidence coming from _"now that we know what's going wrong, and we know how to respond to that"_[P6].
### Key themes for engaging with AI
Now we discuss findings in relation to broader theoretical developments in HCI, according to the three levels we have identified earlier: interactional affordances, relational questions and wider implications. An overview of these findings can be seen in Figure 5.
Figure 5: Conceptual Map of students' reflections on the benefits of the methods grouped across the three levels of AI engagement: interactional affordances, relationality and wider implications.
#### 4.2.1 AI Interactional Affordances
Students found that the workshops illustrated that _"there are actually a lot of possibilities with AI"_[P7], beyond the tutorials at the start of the course, and that working through the experiences left them with a _"whole list of things that [AI] could say or do"_[P2]. They already had some experience with particular topics, but this opened up a greater sense of how these possibilities could be deployed in relation to their work. This did not always change the concept of the interaction, but did provide a confidence that many interactional designs could potentially be realised.
Data and meaning: Role playing helped with sensitising the students to the role of data in AI driven systems, questioning _"where is the AI getting the information?"_[P27] both generally and through very detailed questions of _"where we're gonna put [the camera that understands human-vehicle interactions]"_[P5]. The coaches noticed the attention to physical detail as well, seeing development of a _"way of bringing the data and looking at it and experiencing it"_[CIA]. There were moves to think about how to work with people in wheelchairs, and what it would mean to _"recognise these things and build the dataset"_[CLU], as well as the broader question of _"how [the collected data] can be meaningful for you as a person"_[CIA].
Character and expression: The experiential nature of the exercises was, unsurprisingly, suited to engaging with designerly questions of the character and expression of the autonomous parts of the system. Students noticed the possibility that they could be _"really precise and defined in the personality that we were gonna give the AI"_[P27], questioning default assumptions about how the system might respond. With conversational agents, it was noted that _"there is a lot of space between the yes or no"_[P19], but also that working probabilistically could smooth out interactional challenges, so that humans _"don't have to become machines ourselves"_[P6]. The possibility arose to create pluralistic engagements that gave _"different answers based on different characters and based on different situations [for patients undergoing reconstructive surgery]"_[P23]. This opened the possibility of making stronger bonds with users, and working on an emotional level, which we will return to in the next section.
Interactional Limitations: In general, the coaches were more sensitive than the students to the potential limitations of technology, for example noticing when _"the way they acted out looks good on screen but it doesn't reflect the deeper issues with understanding [...] Whereas if you use a conversational AI model I think you will run into a lot of problems that are hard to act out"_[CGi]. It was clear to them that some of the enactments would require sophisticated behaviour that could easily be glossed over with WoZ techniques, and they questioned whether the exercises could also point to these moments of glossing, or help notice points of complexity. For the more technically realised groups, the coaches noticed students working around limitations of the technology, where _"it was not very good at detecting facial expressions, but you made a hand gesture"_ [CGi] that conveys emotion purposefully, leading to a rethinking of the interaction schema.
#### 4.2.2 AI relationality
Students felt that _"[t]here are so many layers in such a project, where you are constantly building"_ [P25] and noted that the workshops took them into some of the complex, multilayered aspects of working with interactive AI systems.
Deeper relationships: Following the theme of character above, the workshops prompted students to think about the ways that humans related to the things being designed, giving an impetus to _"think more in an empathetic way"_ [P15] about the end users and what AI mediation would _"mean for a human to human relationships"_ [P25]. Roleplaying the situation with the device helped to look across some of the other people around the interaction, for example working with a system that was helping to take medical photos for reconstructive surgery and seeing _"the relationships between the AI [and] doctor, assistant to friends or to your family members"_ [P23]. This was partly driven by a sense that the AI systems could interact in increasingly human-like ways, with metaphors like _"a friend in your car"_ [P5], or a pet. There was a move to look at some of the longer-term relationships formed and the bonds that people made with AI systems. Students developed increasingly anthropic concerns from whether _"people feel at a loss after they need to give [their smart mirror] back"_ [P23] at the end of a process, to questions of developing care and love relations to the objects.
Creepiness and agency: Interestingly, some of the more-than-human metaphors helped students to think about when agency was troubling, and were _"open to more scenarios that we didn't see"_ [P20]. Manifesting home energy use using a metaphor of 'fireflies' caused a concern that it _"will follow you through your room as a dog follows you. This might be kinda creepy. So what if they [users] don't want to be followed?"_ [P20]. The potential intimacy of relations with a vehicle raised concerns about _"how intimate your interaction with your decentralized car be"_ [P5], and how _"if you're driving and you're stressed and you somehow just get like this random unexpected hug from your car"_ [P5] it would cause emotional discomfort. Even when autonomous behaviour was not emotionally invasive, there were concerns that _"sometimes the [smart hotel room] objects want to speak for themselves but at the same time you don't want to scare the human that is the guest in this room"_ [P19].
More-than-human relations: Going beyond metaphors of caring for cars as one might care for a dog, coaches noticed that students would _"use design as a medium to amplify the voice of nature"_ [CIo] or _"activate [...] energy consumption in a different way than just a tool"_ [CLu] in their AI-mediated interactions, making a shift to both non-human perspectives and the idea of technological mediation rather than tools for particular outcomes. The students looked into new relationships that might emerge, e.g. designing _"clothes to learn from every person that wears them to [and] grow its own personality"_ [P16]. The coaches noticed the role-playing aspect of the exercises prompted critical reflection into the scenarios and
relationships at hand, including noticing _"that the setting that they were imagining and the role of the AI within that setting was not a very good fit"_ [CGi]. Students found the practice useful for articulating what their vision for the future of human-AI relationships at individual and societal levels ought to be, including questions of governance and democracy.
#### 4.2.3 AI and wider implications
Responsibility: While methods targeted at interrogating control (Meaningful Human Control) explored agency and control, other methods (Metaphor Shifts) still gave space for these questions to arise. Students reflected on _"considering moral responsibility for an AI system"_ [P25] within the creation process; and the coaches noticed that the workshops provided _"a way to create distance and look at the project from a different perspective"_ [CIa], to re-evaluate the project beyond the immediate concerns of development, with a sense that it was the designer's responsibility to make sure that purposes and potential issues were clear upfront. Some students found that the workshops made the idea that people might misuse their system concrete, so for a friendly car system they _"gave ourselves some guidance for the next steps, [not] for concepts [...], but more like OK, this is now a checklist that we need to put next through concept every time to make sure we think about this"_ [P1]. Responsibility often came through thinking through what might go wrong, with evidence of 'zooming out' through the exercises, to think about what would happen if these systems were widespread, and their failure modes constantly present for users.
Consent and Privacy: Several students mentioned issues around consent; while some felt this was a core part of their existing work, others found that discussion around the workshops was what they needed to really understand the implications, and _"a solution for something that [is] difficult to think about"_ [P1]. Groups managed to _"dig deeper in that space"_ [CLu] and better manifest the issues that they were already dealing with, and in some cases this meant that _"[consent] was actually a very explicit part of their final concept and that was not at their departure, I think, was driven in part by going a bit more speculative than they were imagining at first"_ [CGi].
Vision and Criticality: A common point from the students was that these workshops helped to think beyond the initial concerns of prototyping and into the multi-layered nature of the projects, not just around AI responsibility but that _"it asks you to go into parts that maybe you don't want to explore"_ [P6] and rethink the purpose and shape of the project itself. Coaches were mixed about whether they saw changes in the level of critical thinking around the workshop, with some noticing no change, some a progression, and some seeing a strong difference where critical thought was brought in. Some of these were tradeoffs: _"They became more critical. They were focussing more on the experience, but I'm not sure they were more engaged with the AI"_ [CIo]. However, others noticed engagement with the human-AI relationships, questions of datasets and the role of the project as critique, and _"really thought about it, how you negotiate with the machine and how much freedom you should have and how much agency you just have"_ [CMa].
## 5 Discussion
In this discussion, we address some potentials for developing the exercises, and reflect on our initial research questions. We discuss how the AI exercises address the current methodological gaps and, more broadly, how this work contributes to a larger program around design, HCI and AI, nurturing a distinctively designerly AI culture.
### Effectiveness and Future Work
The exercises were seen as effective overall, although they could further be improved through use, observation and iteration. They produced thoughtful, socially engaged responses, but to a large extent remained far from the rapid and technically grounded results generated at the beginning of the course, when students were provided with tutorials focused on learning particular AI technologies. As an example, despite the deep technical grounding of Uncertain Interactions, most student responses did not get deep into the specifics of model output and how to make use of it. Future versions of the exercises could look to bridge this gap, as could their use in more technical contexts, where models were really being trained and deployed. There could also be support to help students to decide which concerns to prioritise - for example, worries about people falling in love with their AI devices might not be the key problematics of the technology as created. While this prioritisation is arguably a part of general design practice, having concrete examples to contextualise the discoveries would be helpful. Practically, most students were relying on pre-built models and 'Wizard of Oz' setups (Browne, 2019; Dahlback et al, 1993) that used human action to simulate complex behaviour. This limited the utility of data-driven exercises (e.g. Poor Datasets) and fed into a focus on the anthropomorphic possibilities of AI. This also led to less engagement with the possibilities of new forms of human-machine interaction than we might have hoped for.
#### 5.1.1 Timing and Situation
The time that the students had to execute the exercises was short, which may have limited the potential for deep reflection and thoughtful practice. While the students still had access to the methods, few groups chose to make use of them, so there is space to explore more prolonged engagement. The positioning in the course was somewhat contentious, with many students feeling the methods had been introduced too late (Section 4.1.2) - this is coupled to their assumption that the methods were there for concept development and ideation. However, the overall feeling from the coaches was that the timing was sensible - it provided a way to zoom out around existing concepts and add richness. Part of this divergence of opinion is part and parcel of process based education - there are often different views from within the process than outside it. However, it does point to the need for a stronger sense of what one can expect from the methods, and an indication of when and how they could be productively deployed.
#### 5.1.2 Choice and Range of Methods
This initial set of exercises was based on a particular set of theoretical ideas; it is clear that other theories and concepts could prompt additional methods, and other methods could be derived from the theories used. There is certainly no shortage of candidates: agential cuts (Shotter, 2014) provide techniques to divide up complex systems and consider multiple boundaries through more or less embodied encounters (Vagg, 2022), ideas of cyborg intentionality (Verbeek, 2008) lead us to enact pairings with composite possibilities (Rapp, 2021), and introspection provides a lens to think about relations between AI and lived experience (Brand et al, 2021). Methods with a clear technical genesis would offer immediate experiences that are deeply embedded in and shaped by the technology, for example deliberately misusing vision algorithms (van der Burg et al, 2022) or using computer vision as a site of enquiry (Malsattar et al, 2019). We see this as the start of a collection of ways to engage in this area, which will grow over time. Additional exercises might emphasize different parts of the design process and different modalities of experience as well as introducing new theories or grappling with particular qualities of AI.
#### 5.1.3 Applicability
In terms of subject matter, the exercises were applicable to a range of projects across autonomous cars, robots, Internet of Things, hospitality and so on. They also helped with a range of issues, from shaping overall concepts to detailing important parts of the interaction. The application here was somewhat particular: the middle stages of an exploratory, creative prototyping brief. We would expect that the methods can be used in other processes and different levels of technical fidelity. In fact several of the methods, such as roleplaying AI networks and Thing Ethnography of AI systems are likely to give better results as the project is more developed and the context is stronger. Others, such as Poor Datasets are likely to be more useful with a developed technical implementation, while Uncertain Interactions could help with ways to create interfaces around probabilistic models in deployment.
### RQ1: Conceptualising and prototyping practices with AI
The exercises illustrated some of the issues that students have when carrying out prototyping and conceptualisation with AI: the need to deal with uncertainty, the possibilities of more human-like interactions but less clearly defined capabilities, the need to hold multiple levels together. This clearly asks a lot from designers, especially in this case, where many of them did not have strong electronics and coding skills before the course. The experiential (Hemment et al, 2022) and enacted (Elsden et al, 2017) aspects of the workshops were helpful to navigate this terrain, as the subjects of discussion could be played out in the group, adding to the sense of tangibility and refining how interactions should unfold. The interactional focus of this work makes it distinct from ideation tools such as AIxDesign's ideation cards (AIxDesign, 2022) which focus on conceptual innovation, or work on
developing user experiences (Subramonyam et al, 2021b, a) which makes the interface the primary subject of design. In line with open ended, critical and speculative prototyping methods (Malsattar et al, 2019; van der Burg et al, 2022; Nicenboim et al, 2020) the exercises took the students into the relational and interactional possibilities of AI.
From the feedback, it was important to give students exercises that were concrete enough that they could follow the steps. Several of the exercises were close to relatively standard design practices - Uncertain Interactions drew on the creation of state diagrams as a design articulation tool, and the idea of acting out interactions as a form of prototyping is well established (Van Der Helm and Stappers, 2020). However, they were adapted to bring AI qualities into the familiar interaction design practices, emphasizing aspects like uncertainty, interface capabilities, distributed responsibilities and so on. It is clear that for some of the students, the simple forms of the exercises would have been enough - simply asking 'what might go wrong' and drawing a state machine to deal with it produced useful results, without getting into the idea that machine learning systems produce probabilistic outputs. None of the students chose to work directly with datasets; this may be a feature of their projects, as there was not much training and learning happening, or simply a lack of attraction to the particular exercise.
There was a tendency with many of the groups to drift into anthropomorphism, to imagine relations as overly human (Marenko and van Allen, 2016), and be diffuse about the capabilities of the technology. This relates to thinking into some of the particular AI characteristics that we will discuss in the next section; it is clear that prototyping will start to take different forms. The evolution of prompt engineering as a discipline (Liu and Chilton, 2022) and the potential to generate working systems from prompts (e.g. aptly, 2022) indicates that new forms of prototyping are emerging. Here, the constraints are less well defined than working with code on Arduino, but no less present - training a TeachableMachine (Carney et al, 2020) to detect a gesture has just as many concerns as using the electronic gesture sensor built into the Arduino BLE Sense the students were using, but the failure modes play out differently, and a different set of prototyping practices are brought to bear. The multiple viewpoints contained in the exercises here - people, things, datastreams, algorithms, networks - help to tease out the parts of interactions to prototype. Enacting these possibilities makes it easy to fall into broad, fuzzy, anthropomorphic thinking about what systems might do; the challenge for developing new forms of prototyping is to temper this with a grounding in the capacities of the systems being designed, and to engage with the human-like affordances of technology, without missing the new machine possibilities. The exercises here helped students to clarify their concepts, move forward with their prototyping, and develop ideas about the responsibilities of creating AI systems, while maintaining designerly concerns of materials, aesthetics, function, fit to context and engagements with multiple actors.
### RQ2: Grasping interactional, relational and contextual qualities of AI
Our analysis of the student responses in relation to the current paradigm shifts in HCI shows that interactional, relational and contextual qualities of AI could be important elements of design and AI educational programs. To unpack this, we look at our findings in relation to wider notions of agency, human-machine relations, and understandings of AI.
#### 5.3.1 Agency
As noted above, AI is a tricky term, but a lot is contained in the ideas of agency which it can develop, in particular around 'non-humanesque agencies' (Hildebrandt, 2020). Some of these agencies were clear prompts for our work: a failure to recognise certain kinds of people as being human (Buolamwini and Gebru, 2018) shapes the inter-agencies between vision systems and people; the ability to make decisions rapidly and constantly gives a sense of autonomy, but one which differs both in character and meaningfulness from that of humans. Several of the exercises were aimed at interrogating these questions: 'Be the AI' prompted a reflection on exactly what the machines were doing, the interviewing and roleplaying exercises asked the participants to feel into what the agential possibilities were, and the more conceptual exercises questioned what agencies and responsibilities humans had around the systems. This helped participants to think about _"what are the actual choices that we are gonna make or what part of the interaction are we gonna do ourselves and what part is the machine gonna do?"_ [P16] - a key part of getting past the myths about AI capabilities (Natale and Ballatore, 2020).
Much of the thinking reported around issues of agency had to do with _"how it's gonna be alive for people"_ [P20] - the clear, animate, characterful side of agency. This surely has roots in some of the roleplaying methods. The shape of the collective roleplaying exercises was informed by an increased emphasis on co-performance, as humans were brought in to act out what the smart technologies would do, and notice possibilities for more shared agency, whether co-learning with the AI or finding ways for objects to speak for themselves.
#### 5.3.2 Human Machine Relations
Some of the exercises prompted students to position AI in relation to humans and non-humans. Thing Ethnography of AI systems instructed the students to map the ecosystem of the thing and its touchpoints, and reflect on who and what interacts with the concept. In Conversations with AI, the students were asked to enact an AI agent. Metaphor Shifts asked students to design systems based on a particular metaphor and then compare it to others. These exercises highlighted the relations of humans and non-humans within AI systems (Coskun et al, 2022; Nicenboim et al, 2020). They did that by decentering the designer's perspective, to consider more actors and interactions that go beyond one user and one device (Verbeek, 2020). They also invited students to relate to AI not only as a tool, but as a social agent that shapes people's lives. In students' prototypes, intelligence, as well as responsibility, were not seen as properties of machines alone, but shared between humans and artificial partners. Similarly, uncertainty and unpredictability were 'collaboratively curated' to 'imagine forms of digital interaction' (Marenko and van Allen, 2016).
The findings show that the exercises helped students to expand their concepts to account for the ecologies around them, especially when their projects were centred on particular embodiments, for example extending from a plant pot to a
community of plants, and looking at the plant-plant relations as well as the plant-human ones. The exercises also helped them acknowledge other people beyond the immediate users, thinking into how they would relate to the system, and what responsibilities the user, system and designer have towards them. Furthermore, thinking of their concepts in relation to humans and non-humans created awareness within the students of the human labour that is implicated in sustaining AI systems (Sinders and Ahmad, 2021). The kind of metaphorical social relationships the AI had with others ultimately influenced the designs: when the system was cast as a friend, it was seen, designed and conceptualised differently from when it was cast as a pet.
#### 5.3.3 From Explanations to Understandings of AI
One of the current challenges in the design of AI systems is how to support people in understanding them, especially when used to make autonomous decisions or create knowledge. AI explainability is especially challenging when based on deep learning models, given that some of the paths that AI systems use to give recommendations are not interpretable (Ehsan and Riedl, 2020), and the source of many generative outputs is complex (e.g. Kovaleva et al, 2019). While understanding ML in its technical sense is important, recent approaches in the explainability of AI have pointed at other ways of understanding which are not based on technical explanations and instead promote experimentation, challenging boundaries, or promoting respect (Nicenboim et al, 2022; Hemment et al, 2022; Seymour et al, 2022). The findings expand the agenda of Explainability of AI by illustrating and unpacking particular design engagements with AI that go beyond mastering ML technical capabilities. This points at particular aspects that are important in the kind of understandings that designers might need to gain of AI. In particular, designers were helped to understand AI by exercises that prompted reflection into the affordances, relations and wider implications that those systems might have. Those engagements were not based on learning how to code ML models, but on experimenting with changing perspectives, provoking failures, enacting behaviours, and drawing schemas. These tactics could become part of a new agenda for supporting designers in understanding AI, especially one that is aligned with theoretical developments in HCI such as the posthuman turn (Lindgren and Holmstrom, 2020) as well as practical developments in design (such as methods used in critical, speculative and adversarial design (DiSalvo, 2015; Irani and Silberman, 2014; Bozic Yams and Aranda Munoz, 2021)).
### RQ3: Critical design perspectives while engaging with AI as a socio-technical system
While the exercises helped the students to develop their projects (from ideation to conceptualisation and detailing), they especially illuminated and modulated changes in the students' design processes in relation to the sociotechnical aspects of AI systems (Crawford, 2021). The exercises supported students in reflecting on the role of AI within their concepts, in being more specific in what kind of aspects of AI are present, and developing a critical design perspective on AI around values of responsibility and agency.
Designing with AI as a socio-technical system means acknowledging that it is not only a technical domain, but also entangled with social practices, institutions and infrastructures, politics and culture. AI "is both embodied and material, made from natural resources, fuel, human labor, infrastructures, logistics, histories, and classifications" (Crawford, 2021). This is not an entirely new perspective - AI has long been considered a material practice (Agre, 1997), but there is a need to consider the interaction between humans and machines as part of broader societal contexts, and the broader discursive settings in which AI is socially constructed as a phenomenon with related hopes and fears (Lindgren and Holmstrom, 2020).
From the findings, it seems the exercises provided a space for students to go beyond the immediate concerns of a rapid prototyping session and engage in reflective practices that can position AI within its broader societal contexts. It is clear from many of the responses that the workshop carried out here provided a moment to reflect. Some of this can be simply ascribed to the sole fact of having the intervention - a space that prompted further thought. However, some of the exercises were more specific triggers for reflective engagement, with Meaningful Human Control and Resisting/Subverting AI asking students to explicitly consider critical perspectives. There was evidence of 'reflection-in-action' (Yanow and Tsoukas, 2009), moments where in the midst of carrying out experiential exercises around the prototypes they 'ma[de] previously implicit assumptions about the work explicit' (Wegener et al, 2019). Where it had previously seemed that an insect swarm would give a warm sense of companionship, looking at the technological sense of surveillance and following revealed a darker possibility for the user; the idea that a supportive hug came from technical rather than human agency was found on reflection to be disturbing.
Some of the exercises prompted a sense of 'zooming out' (Nicolini, 2009), to consider wider networks of things and people, and this zooming out was part of the students' move towards more empathic design. This touched on the temporal (Pschetz and Bastian, 2018) aspects of their design, as they thought not just about interactional moments, but the slower unfolding of relationships over time. Overall, there were many moves to develop a sense of criticality within their design: the AI-oriented methods created viewpoints for considering the role of technology and its possible overreaches from pragmatic and whimsical perspectives, in line with the 'ongoing practical, critical, and generative acts of engagement' (Suchman, 2020) that build a responsibility for the things being designed.
### Beyond Education: Nurturing designerly AI cultures
We started with the aim of helping students to grasp enough qualities of AI to adapt their processes and conceptualisations accordingly. The study highlighted that there are distinct ways to engage with AI that are appropriate to our setting, where the culture and practices of designers centre particular ways of working. In this section we discuss the aspects of our work that nurture this designerly AI culture.
The use of AI technologies varies by field. If we look at AI in terms of the enabling technology and the culture that surrounds it (Caramiaux and Alaoui, 2022), some of the differences and parallels with the use of AI in design become clear. There are common moves to cast AI as a creative partner (Llano et al,
2022; McCormack et al, 2020) within music, as a solution for optimisation (Noor, 2017) within engineering, as a formalisation and purification of human thought (Chiusi, 2020; Singh et al, 2019) in decision making organisations and so on. Within design, the places that AI might sit are being negotiated. Do we bring it into the process as a sparring partner for ideation (Simeone et al, 2022) or a source of creative inspiration (Yun et al, 2022)? Do we use it to re-understand the world through divergent practices (Malsattar et al, 2019)? Is it a new computational capacity for which we have to develop new UX practices (Subramonyam et al, 2021b)? Or a boundary object whose politics need critique (Crawford and Paglen, 2019; Lyons, 2020)? All of these are within the remit of design. What we are interested in accenting here is the possibility for a designerly culture around the use of AI technologies, whether in processes, outcomes or critique. Just as a shift from explanation to shared understanding (Nicenboim et al, 2022) speaks to a relational, experiential mode of engagement, the exercises here create those experiences, and give ways to pick up, tangle and hold those relations. We suggest there are three key features of the methods that support this: experientiality, pragmatism and reflection.
The experiential nature of the methods appeared to be key in bringing in different perspectives on existing work, from noticing potential implications to uncovering new actors and interrogating positive ideas of agency. In this prototyping oriented style of working, enacting and dramatising possibilities helped to grasp concepts. This was particularly relevant to working with AI systems, where the level of agency expected of the technology is high, so vitalising it makes intuitive sense.
Secondly, the pragmatic nature of the exercises, distilling complex ideas down to a set of steps to explore, supported critical discussion. Rather than starting from the theory, students were able to develop grounded experiences and respond to them. This led to practices such as developing their own checklists for responsibility as well as rethinking interactions based on new metaphors for the relations between technologies and humans.
Finally, the exercises all point to building the skills that a reflective designer in AI might need - _"Perhaps the thing that they have in common is that they make you reconsider what your intention was and how that intention has manifested itself into the concept"_(P16). As such, they are distinct from technical support, even technical support tailored to creative practitioners (AIxDesignComm, 2020), but look to build bridges from more than human thinking (Coskun et al, 2022; Coulton and Lindley, 2019; Giaccardi and Redstrom, 2020; Nicenboim et al, 2020) towards technical practice.
By providing this kind of multiple toolbox, we contribute to shaping the emerging AI-Design culture as something distinct from the technical, scientific, artistic and socio-legal cultures that are relatively well established. Further, we believe that this practice of grasping AI can be useful beyond the classroom, a powerful and versatile support for design professionals to meaningfully engage with the development of intelligent systems.
## 6 Conclusions
There is a growing need for designers to engage with artificial intelligence and machine learning in their practice as it becomes integrated into the functioning of the physical and digital systems that they design. A particular challenge here is how to carry out ideation and early stage prototyping around AI/ML, when the exploratory nature of the work makes it impossible to invest much time in detailed technical understanding of particular algorithms or systems. At the same time, the technical possibilities of emerging algorithms can exert an overly large pull on designs, artificially narrowing the solution space and drawing away from the needs and qualities of the interaction.
To develop the potential for designers to engage meaningfully in this space, working from an educational perspective, this paper introduced a series of 'AI exercises' informed by recent theoretical developments in third wave HCI to help students grasp AI as a socio-technical system. We developed three levels of consideration for designing AI systems: interactional affordances, relational possibilities, and the wider social implications of AI systems; and provided methods for working at each level. Through qualitative analysis of these exercises with a group of students, we build up a picture of what kind of impact the interventions had on their understanding of AI and their project development. Through the exercises, the students refined their designs and clarified their concepts, and were able to move forwards with their prototyping with a greater sense of confidence in their designs and responsibility around the process. The experiential, pragmatic aspects of the exercises helped to make theoretical ideas concrete and generative of new possibilities, while keeping a sense of materiality and interaction with humans. The space for reflection provided by the exercises helped the students to develop a wider perspective on their work within the bounds of a rapid prototyping project.
The study findings highlight ways in which experimental design exercises could support students in understanding AI, especially considering that such understanding needs to go beyond mastering ML technical qualities. The exercises here helped illuminate and modulate changes to the students design processes in relation to the interactional, relational and contextual qualities of AI, helping students develop a reflective and critical design perspective while responding to the key theoretical developments that are discussed in the AI community within HCI. Through the discussion, we raise questions of how a socio-technical view of AI, through ideas of agency and relationality can support a designerly culture around the development of AI.
## 7 Acknowledgements
We would like to thank our course collaborators for assistance in both running and analysing the course, in particular Ianus Keller, Aadjan van der Helm, Tomasz Jaskeiwicz, Dieter Vandoren, Gijs Huisman, Nazli Cila and Martin Havranek, as well as Seowoo Nam for graphic design and data collection around the methods. Thanks to the others in the Human Centred Design department who helped us to think about and frame this work, in particular Elisa Giaccardi. This work was partly supported by the Microsoft Research PhD fellowship awarded to Iohanna
Nicenboim. Finally, as a piece of educational research, we would like to thank the students on the course for their hard work, thoughtfulness, creativity and boldness.
## 8 Additional Information
On behalf of all authors, the corresponding author states that there is no conflict of interest. The datasets generated during and/or analysed during the current study are available from the corresponding author on reasonable request.
|
2305.05252
|
Distilling Script Knowledge from Large Language Models for Constrained
Language Planning
|
In everyday life, humans often plan their actions by following step-by-step
instructions in the form of goal-oriented scripts. Previous work has exploited
language models (LMs) to plan for abstract goals of stereotypical activities
(e.g., "make a cake"), but leaves more specific goals with multi-facet
constraints understudied (e.g., "make a cake for diabetics"). In this paper, we
define the task of constrained language planning for the first time. We propose
an overgenerate-then-filter approach to improve large language models (LLMs) on
this task, and use it to distill a novel constrained language planning dataset,
CoScript, which consists of 55,000 scripts. Empirical results demonstrate that
our method significantly improves the constrained language planning ability of
LLMs, especially on constraint faithfulness. Furthermore, CoScript is
demonstrated to be quite effective in endowing smaller LMs with constrained
language planning ability.
|
Siyu Yuan, Jiangjie Chen, Ziquan Fu, Xuyang Ge, Soham Shah, Charles Robert Jankowski, Yanghua Xiao, Deqing Yang
|
2023-05-09T08:19:32Z
|
http://arxiv.org/abs/2305.05252v5
|
# Distilling Script Knowledge from Large Language Models for Constrained Language Planning
###### Abstract
In everyday life, humans often plan their actions by following step-by-step instructions in the form of goal-oriented scripts. Previous work has exploited language models (LMs) to plan for abstract goals of stereotypical activities (_e.g._, "_make a cake_"), but leaves more specific goals with multi-facet constraints understudied (_e.g._, "_make a cake for diabetics_"). In this paper, we define the task of constrained language planning for the first time. We propose an over-generate-then-filter approach to improve large language models (LLMs) on this task, and use it to distill a novel constrained language planning dataset, CoScript, which consists of 55,000 scripts. Empirical results demonstrate that our method significantly improves the constrained language planning ability of LLMs, especially on constraint faithfulness. Furthermore, CoScript is demonstrated to be quite effective in endowing smaller LMs with constrained language planning ability. 1
Footnote 1: Resources of this paper can be found at [https://github.com/siyuyuan/coscript](https://github.com/siyuyuan/coscript).
## 1 Introduction
To accomplish everyday goals, humans usually plan their actions in accordance with step-by-step instructions. Such instructions are discovered as _goal-oriented scripts_ (Schank and Abelson, 1975), involving a set of prototypical event sequences to achieve goals. For the example in Figure 1, to achieve the goal (_make a cake_), one usually has to follow certain steps of instructions, _e.g._, _gather ingredients_, _preheat the oven_, etc. The planning for such step-by-step scripts chains up reasoning toward complex goals (Abelson, 1976; Wei et al., 2022). Therefore, the automation of planning envisions more intelligent and reasonable AI systems in various domains, such as executable robotic systems (Kovalchuk et al., 2021; Huang et al., 2022) and reasoning systems for problem-solving (Wei et al., 2022; Wang et al., 2022).
Recent studies have identified that language models (LMs) can be used to plan scripts (Sanchez and Rudinger, 2022). Previous work (Huang et al., 2022) has shown that large language models (LLMs), such as GPT-3 (Brown et al., 2020), InstructGPT (Ouyang et al., 2022) and PaLM (Chowdhery et al., 2022), can effectively decompose goals into procedural steps in a zero-/few-shot manner. To train specialized models, researchers have proposed datasets for the automatic understanding and generation of script knowledge (Schank and Abelson, 1975; Regneri et al., 2010; Wanzare et al., 2016; Lyu et al., 2021; Sakaguchi et al., 2021).
Figure 1: A list of steps InstructGPT generates to plan for the goal “_make a cake for diabetics_”. InstructGPT mistakenly adds sugar to the cake, which is unfit for diabetic patients. This example shows that InstructGPT sometimes cannot effectively and faithfully script for a _specific_ goal with fine-grained constraints.
However, previous work mainly focuses on planning for the abstract goals of stereotypical activities (Abelson, 1976). Planning for goals with specific constraints (_e.g._, _for diabetics_) still remains under-studied.
In this paper, we define the problem of _constrained language planning_, which imposes different constraints on the goals of planning. An _abstract goal_, for example, _make a cake_, can be inherited by different real-life _specific goals_ with multi-faceted _constraints_. A cake can be made for _1)_ different ingredients (_e.g._, _chocolate_ or _vanilla_); _2)_ various tools (_e.g._, with a _microwave_ or an _oven_); or _3)_ different purposes (_e.g._, for a _wedding_ or a _birthday party_). A good planner should write scripts that are reasonable and faithful to constraints. However, LLMs sometimes do not plan faithfully toward the constraints. As showcased in Figure 1, InstructGPT suggests adding sugar to the cake for diabetic patients. Also, due to a shortage of datasets for constrained language planning, the ability of smaller but specialized models to plan with specific constraints has been underexplored.
In this paper, we aim to evaluate and improve the constrained language planning ability of LLMs, while distilling a dataset from LLMs to train specialized models. Our empirical study finds that LLMs tend to plan fluently but unfaithfully to the constraints. Thus, we employ an over-generate-then-filter approach (Wiegreffe et al., 2022) to satisfy the quality of the generated scripts to constraints. The main idea is to select high-quality ones from multiple generated scripts. Then, we use LLMs (_e.g._, InstructGPT) with this approach to generate a dataset for constrained language planning, which inherits the idea of symbolic knowledge distillation from models (West et al., 2022). We thus arrive at a **C**onstrained **Script** dataset, _i.e._, CoScript, which consists of 55,000 high-quality scripts with specific goals and steps. Experiments show that, when trained on CoScript, smaller models such as T5 (Raffel et al., 2020) can achieve good performance, even surpassing that of LLMs.
Our contributions are summarized as follows: _1)_ To our knowledge, we are the first to establish the constrained language planning problem, which advances language planning toward more specific goals. _2)_ We evaluate the few-shot constrained language planning ability of LLMs and develop an over-generate-then-filter method for LLMs, resulting in a 26% increase in accuracy. _3)_ Based on our method, we use LLMs to generate a high-quality script dataset (CoScript) for constrained language planning. By leveraging the CoScript, we endow specialized and smaller models with constrained language planning ability, which achieves comparable performance to that of LLMs.
## 2 Related Work
**Language Planning.** Language planning aims to decompose a goal into sequences of steps (Kaplan and Baldauf, 1997), which is widely used in robotics (Kaiser et al., 2014; Paxton et al., 2019; Berg et al., 2022) and procedural text generation (Goldfarb-Tarrant et al., 2020; Hu et al., 2022). Early studies approach language planning with syntactic parsing for the context (Koller and Stone, 2007; Garoufi and Koller, 2010). Recently, researchers have investigated the planning capability of language models in various domains (Olmo et al., 2021; Valmeekam et al., 2022). However, they mainly focus on generating scripts for stereotypical activities toward abstract goals. For example, Huang et al. (2022) proposes to plan for the general-typed tasks for embodied agents, while Yang et al. (2021) edits actions for abstract goals to video retrieval. In contrast, we explore planning for specific goals (_e.g._, "_make a cake for diabetics_"). Collins et al. (2022) has benchmarked LLMs for planning with included/excluded objects, but they merely study this problem in a limited scope (only dozens of cases) without further in-depth analysis.
**Scripts.** A structure describing a sequence of events in a particular scenario is a _script_ (Schank and Abelson, 1975), consisting of two types: _1) Narrative script_: a narrative chain of events describing a particular scenario derived from narrative texts such as recipes (Fang et al., 2022) or stories (Tandon et al., 2020); _2) Goal-oriented script_ (Regneri et al., 2010; Wanzare et al., 2016): an appropriate sequence of steps as instructions to achieve a goal. In this work, the steps for achieving a given goal in language planning can be categorized into the second class. Many datasets for goal-oriented scripts have been proposed to improve the language planning ability of LMs (Sakaguchi et al., 2021; Lyu et al., 2021). However, they mainly consist of abstract goals with prototypical instructions and thus are not built to train LMs for planning with more specific goals.
**In-Context Learning.** With the great success of LLMs (Brown et al., 2020; Ouyang et al., 2022; Chowdhery et al., 2022), _in-context learning_ (Brown et al., 2020; Min et al., 2022) has established its great task-solving potential with a textual task instruction and a few examples. Moreover, when being used for dataset construction, the data samples that LLMs generate can sometimes outperform crowd-sourced human-authored data in factuality and fluency (Lu et al., 2022; Min et al., 2022). This shows a promising alternative to costly large-scale crowd-sourcing to construct datasets using LLMs (Wiegreffe et al., 2022; Liu et al., 2022; West et al., 2022). Inspired by these studies, in our work, we adopt in-context learning for LLMs not only for better language planning, but also as a reliable _crowd-worker_ to scale up the planning data into a reusable dataset to train smaller models.
## 3 Definitions
Before diving into technical details, we first clarify some important terms used in the paper.
**Scripts.** A goal-oriented _script_ is _a list of steps_ (\(\mathbf{S}=\{s_{1},s_{2},\cdots,s_{|\mathbf{S}|}\}\)) that fulfill a certain _goal_ (\(\mathcal{G}\)) (_e.g._, "_make a cake_") (Suddendorf and Corballis, 2007; Schank and Abelson, 2013). The language planning task is defined as \(\mathcal{M}:\mathcal{G}\rightarrow\mathbf{S}\), where \(\mathcal{M}\) is the planning model.
**Goals.** Different from previous studies that focus mostly on abstract goals with prototypical scripts, we define a taxonomic structure of goals by extending the derivatives of abstract goals. We define a _specific goal_ that inherits from an _abstract one_ but with new information as a constraint to limit the scope. An _abstract goal_, denoted as \(\mathcal{G}_{a}\), refers to stereotypical activities, _e.g._, "_make a cake_". A _specific goal_, denoted as \(\mathcal{G}_{c}\), is derived from the corresponding \(\mathcal{G}_{a}\) with various constraints, _e.g._, "_make a chocolate cake_".
**Constraints and Constrained Language Planning.** To enrich the semantics of specific goals, we define three types of _constraints_, _i.e._, _modifier_, _method_ and _intent_, as shown in Table 1. They express different angles of extending an abstract goal and can be further instantiated and concretized. _Constrained language planning_ denotes generating a constraint-faithful script \(\mathbf{S}\): \(\mathbf{S}=\mathcal{M}(\mathcal{G}_{c})\) toward specific goals (\(\mathcal{G}_{c}\)) with various constraints (\(\mathcal{C}\)).
## 4 Constrained Language Planning with LLMs
In this section, we evaluate and enhance the constrained language planning ability of LLMs. The overall workflow is illustrated in Figure 2. We first extend the specific goals \(\mathcal{G}_{c}\) from the abstract ones \(\mathcal{G}_{a}\) using a human-in-the-loop acquisition approach with LLMs (§4.2, Step 1), and propose an over-generate-then-filter framework to
| Constraint type | Definition | Examples |
| --- | --- | --- |
| **Type 1: _Modifier_** | A word, an adjective or a phrase that modifies or constrains an abstract goal. | **Ex.1**: Make a _chocolate_ cake. **Ex.2**: Make a _pink_ cake. |
| **Type 2: _Method_** | A tool or specified mode that controls the process for achieving the goal. | **Ex.1**: Make a cake _with an oven_. **Ex.2**: Make a cake _by using cake mix_. |
| **Type 3: _Intent_** | An additional purpose or demand when completing the goal. | **Ex.1**: Make a cake _for wedding_. **Ex.2**: Make a cake _for diabetics_. |
Table 1: Three types of constraints and their definitions that are used to prompt for new instances of specific goals. In the examples (**Ex.**), upon the abstract goal, we give two instances for each type of constraint by combining the goal with constraints into specific goals. The constraint within each example is highlighted.
Figure 2: The workflow of using InstructGPT to generate specific goals (Step 1) and planning for the goals with the over-generate-then-filter framework (Step 2-3).
obtain scripts (§4.3, Step 2-3). Then, we reveal that LLMs (_e.g._, GPT-3 (Brown et al., 2020), InstructGPT (Ouyang et al., 2022)) are prone to be unfaithful to the constraints in \(\mathcal{G}_{c}\), and our approach can alleviate this problem (§4.4). We use text-davinci-002 as the default InstructGPT variant, which has \(\geq\)175B parameters.2
Footnote 2: Code names and approximated parameters of GPT-3 models are based on [https://blog.eleuther.ai/gpt3-model-sizes/](https://blog.eleuther.ai/gpt3-model-sizes/) and [https://beta.openai.com/docs/models](https://beta.openai.com/docs/models). Note that OpenAI does not release detailed information about later versions of GPT-3, and thus for brevity, we default its size to 175B.
### In-Context Learning for LLMs
We deploy LLMs for constrained language planning via in-context learning (Brown et al., 2020; Ouyang et al., 2022). Given a task input (\(X\)), we first write a task prompt (\(T\)) describing the task, and then provide several examples (\(E=\{E_{i}\}_{i=1}^{|E|}\), where \(E_{i}=(X_{i},Y_{i})\) are used for few-shot learning). An LLM generates output (\(Y\)) by completing the prompt (\(Y=\mathcal{M}(T,E,X)\)). The whole process does not require any gradient update, allowing LLMs to generate new specific goals and scripts without massive training data.
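A minimal sketch of this prompt-completion setup is given below, assuming the legacy OpenAI completions endpoint; the helper name, prompt layout and sampling parameters are illustrative assumptions on our part rather than the exact configuration used in the paper.

```python
import openai

def in_context_generate(task_prompt, examples, task_input, model="text-davinci-002"):
    """Compose the task prompt T, examples E and input X into one prompt and let the LLM complete Y."""
    prompt = task_prompt.strip() + "\n\n"
    for x_i, y_i in examples:  # E = {(X_i, Y_i)}; no gradient update is needed
        prompt += f"Input: {x_i}\nOutput: {y_i}\n\n"
    prompt += f"Input: {task_input}\nOutput:"
    response = openai.Completion.create(
        model=model,
        prompt=prompt,
        max_tokens=256,
        temperature=0.7,  # sampling also allows over-generating several candidates later
    )
    return response["choices"][0]["text"].strip()
```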
**Data Source for Examples.** We adopt wikiHow (Koupaee and Wang, 2018), a data source of instructional articles on various topics, as the initial dataset for providing examples. The articles are titled as "_how to...?_", describing abstract goals, and consist of steps to achieve them. We use the titles (\(\mathcal{G}_{a}\)) and steps (\(\mathbf{S}\)) as examples.
### Acquisition of Specific Goals
Since no dataset of specific goals exists to support our study, we have to acquire these goals first. As elaborated in Table 1, we extend the abstract goals with multi-faceted constraints for human-in-the-loop data acquisition using InstructGPT.
First, we manually prepare a pool of examples that derive specific goals from an abstract one with constraints.3 Each example is attached to a constraint type (_i.e._, modifier, method or intent), and contains more than one constraint and specific goal so that InstructGPT is prompted to generate multiple \(\mathcal{G}_{c}\) for one \(\mathcal{G}_{a}\). Next, given an abstract goal from wikiHow, we enumerate each constraint type to ensure data diversity. Then, we sample several examples of the constraint type from the pool. Finally, we input the task prompt, examples and the \(\mathcal{G}_{a}\) into InstructGPT for the completion of \(\mathcal{G}_{c}\).
Footnote 3: Complete examples can be found in Appendix B.1.
An example in Table 2 (I) shows InstructGPT generates constraints "_chocolate_" and "_vanilla_" for \(\mathcal{G}_{a}\) ("_make a cake_") given the constraint type _modifier_ and some examples, and completes the specific goals ("_make a chocolate cake_" and "_make a vanilla cake_").
### Acquisition of Scripts
After getting specific goals with constraints, we can test the ability of LLMs to fulfill them.
**Planning with InstructGPT.** We first write a task prompt \(T\). Given the \(\mathcal{G}_{c}\), we back-trace its \(\mathcal{G}_{a}\) and extract the verbs ("_make_") and nouns ("_cake_") from \(\mathcal{G}_{a}\). Then we use the verbs and nouns as keywords to retrieve two similar goals as examples \(E\) from the wikiHow dataset. Finally, the task prompt \(T\), examples \(E\) and \(\mathcal{G}_{c}\) with constraint \(\mathcal{C}\) are fed into InstructGPT. As shown in Table 2 (II), we adopt the scripts, _i.e._, "_make a cake_" and "_make a cupcake_" to prompt InstructGPT to generate a script for "_make a chocolate cake_".
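One possible implementation of this keyword-based retrieval of in-context examples is sketched below; the use of spaCy for verb/noun extraction and the simple overlap score are our assumptions, since the paper does not specify the retrieval code.

```python
import spacy

nlp = spacy.load("en_core_web_sm")

def extract_keywords(goal):
    """Lemmas of the verbs and nouns in a goal, e.g. 'make a cake' -> {'make', 'cake'}."""
    return {tok.lemma_.lower() for tok in nlp(goal) if tok.pos_ in {"VERB", "NOUN"}}

def retrieve_examples(abstract_goal, wikihow_entries, k=2):
    """Return the k wikiHow entries whose titles share the most keywords with the abstract goal.

    `wikihow_entries` is assumed to be a list of dicts like {"title": ..., "steps": [...]}.
    """
    keywords = extract_keywords(abstract_goal)
    ranked = sorted(
        wikihow_entries,
        key=lambda entry: len(keywords & extract_keywords(entry["title"])),
        reverse=True,
    )
    return ranked[:k]
```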
**Over-Generation and Filtering.** Using the above-mentioned approach, generated scripts by
**I: Specific Goal Generation**

_Task prompt:_ Create possible Specific Goals according to the Abstract Goal when the Constraint type is _Modifier_.

_Examples:_
- **Abstract Goal**: Say Goodbye in Different Language
- **Constraint**: French; **Specific Goal**: Say Goodbye in French
- **Constraint**: English; **Specific Goal**: Say Goodbye in English

_Auto completion of constraints and specific goals:_
- **Abstract Goal**: Make a cake
- **Constraint**: _Chocolate_; **Specific Goal**: _Make a chocolate cake_
- **Constraint**: _Vanilla_; **Specific Goal**: _Make a vanilla cake_
Table 2: Examples of prompt for InstructGPT for specific goal generation and script generation via in-context learning. Generated texts are _highlighted_.
InstructGPT are reasonable and fluent. However, they sometimes are not faithful to the constraints under closer examination (§4.4). Previous studies have shown that the output quality of LLMs falls in high variance (Wiegreffe et al., 2022), leading to bad performance. Thus, we adopt the idea of over-generate-then-filter to improve generation quality, which is shown to be effective in previous work (Wiegreffe et al., 2022; Liu et al., 2022). We over-generate \(K\) scripts sampled from InstructGPT.4
Footnote 4: In practice, \(K=2\) is sufficient, as shown in Appendix B.3. Intuitively, the reason this approach works is that the generation accuracy can be improved from \(1-p\) to \(1-p^{K}\) (at least one is correct), where \(p\) is the probability that InstructGPT generates a wrong script.
Next, a filter model is developed to select the faithful scripts. Due to the diverse expressions of language, we rely not on rules and patterns (_i.e._, constraint words must appear in the script), but on the semantic similarity between goals and scripts for filtering. For example, "_decorating the cake with candles_" could be a faithful step to make a cake "_for a birthday party_". Motivated by this, we first collect a set of goals, consisting of the target goal (\(\mathcal{G}_{c}^{+}\)) as a positive sample and others (\(\{\mathcal{G}_{c}^{-}\}\)) generated from the same abstract goal (\(\mathcal{G}_{a}\)) as negative samples. In the previous case, the negatives include "_make a cake in the microwave_" and "_make a cake for a wedding_". We convert scripts and goals into InstructGPT embeddings (text-embedding-ada-002) and calculate cosine similarity as similarity scores to measure semantic similarity. Additionally, we reward the script that explicitly contains the keywords of the target constraint. We only keep the script if \(\mathcal{G}_{c}^{+}\) scores the highest in the goal set.
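A simplified sketch of this filter is shown below, again assuming the legacy OpenAI embeddings endpoint; the keyword-bonus value and the exact assembly of the goal set are illustrative assumptions rather than the paper's reference implementation.

```python
import numpy as np
import openai

def embed(text, model="text-embedding-ada-002"):
    resp = openai.Embedding.create(model=model, input=[text])
    return np.array(resp["data"][0]["embedding"])

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def keep_script(script_steps, target_goal, negative_goals, constraint, bonus=0.05):
    """Keep a candidate script only if it is most similar to the target specific goal.

    `negative_goals` are the other specific goals derived from the same abstract goal.
    """
    script_text = " ".join(script_steps)
    script_emb = embed(script_text)
    scores = {goal: cosine(script_emb, embed(goal)) for goal in [target_goal, *negative_goals]}
    if constraint.lower() in script_text.lower():
        scores[target_goal] += bonus  # reward explicit mention of the constraint keywords
    return max(scores, key=scores.get) == target_goal
```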
### Evaluation
We randomly collect 100 abstract goals (_e.g._, "_make a cake_") from wikiHow and conduct manual evaluations on the generated specific goals and their scripts. We compare our methods with instruction tuning methods, T0 (Sanh et al., 2022) and Flan-T5 (Chung et al., 2022), vanilla GPT-3 (Ouyang et al., 2022) with different sizes, Codex (Chen et al., 2021) and InstructGPT (Ouyang et al., 2022) with different sizes. We also add "_Let's think step by step_" before each answer for script generation, which is a simple but effective trick to improve zero-shot reasoning for LLMs (Kojima et al., 2022). For a retrieval baseline, we directly use the goals to search and retrieve the most relevant scripts from the wikiHow website5 as results.
Footnote 5: [https://www.wikihow.com/Main-Page](https://www.wikihow.com/Main-Page)
**Are specific goals generated by LLMs of high quality?** We ask InstructGPT to generate 300 (\(3\times\)) specific goals for 3 constraint types based on the 100 abstract goals from wikiHow. For evaluation, we recruit annotators on Amazon Mechanical Turk to check whether these goals are correct. Each case is examined by three annotators, who reach an inter-rater agreement at Fleiss's \(\kappa=0.86\) (Fleiss et al., 1981). InstructGPT achieves 98.00% accuracy, indicating that LLMs can derive specific goals of rather high quality.
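For reference, inter-rater agreement of this kind can be computed as in the short sketch below; the ratings array is purely illustrative and not the study data.

```python
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# One row per generated specific goal, one column per annotator (1 = correct, 0 = incorrect).
ratings = np.array([
    [1, 1, 1],
    [1, 1, 0],
    [1, 1, 1],
    [0, 0, 0],
])
table, _ = aggregate_raters(ratings)         # per-item counts for each rating category
print(fleiss_kappa(table, method="fleiss"))  # Fleiss' kappa across the three annotators
```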
**Can LLMs write scripts for specific goals?** To answer this question, we first let InstructGPT generate scripts for the 100 abstract goals from wikiHow and ask three annotators to check the correctness of the scripts (with Fleiss's \(\kappa=0.79\)). The correctness is decided by both the fulfillment of the goal and the completeness of the semantics. InstructGPT achieves 97.00% accuracy, proving that LLMs can plan for abstract goals very well. However, this is not the case for specific goals. We sample 100 specific goals from the 300 generated ones (mentioned above) and evaluate the scripts generated by the baselines and our method.
Table 3 reports the overall accuracy of the results.
| **Model** | **Modifier** | **Method** | **Intent** | **All** |
| --- | --- | --- | --- | --- |
| Retrieval | 26.67 | 38.89 | 35.71 | 34.00 |
| T0 (11B) | 30.00 | 21.12 | 25.00 | 24.00 |
| Flan-T5 (11B) | 50.00 | 42.25 | 31.25 | 42.00 |
| GPT-3 (1.3B) | 13.33 | 12.96 | 18.75 | 14.00 |
| GPT-3 (6.7B) | 23.33 | 7.40 | 25.00 | 15.00 |
| GPT-3 (175B) | 30.00 | 22.22 | 25.00 | 25.00 |
| Codex (175B) | 46.67 | 55.56 | 18.75 | 47.00 |
| InstructGPT (1.3B) | 20.00 | 22.22 | 28.57 | 22.00 |
| InstructGPT (6.7B) | 60.00 | 42.25 | 43.75 | 47.00 |
| InstructGPT (175B) | 73.33 | 74.08 | 42.86 | 69.00 |
| + "_let's think step..._" | 70.00 | 75.92 | 50.00 | 68.00 |
| + Our Method | **96.67** | **98.15** | **92.86** | **95.00** |
| w/ \(f_{\text{sim}}=\) SBERT | 86.66 | 74.89 | 81.25 | 78.00 |
| w/ \(f_{\text{sim}}=\) SimCSE | 73.33 | 78.73 | 75.00 | 75.00 |
| w/ \(f_{\text{sim}}=\) None | 93.33 | 94.44 | 87.50 | 93.00 |

Table 3: Accuracy (%) of generated scripts for different constraint types by manual evaluation. \(f_{\text{sim}}\) denotes the choice of similarity function during filtering, _i.e._, replacing the InstructGPT embedding with that of SimCSE (Gao et al., 2021) or Sentence-BERT (Reimers and Gurevych, 2019). \(f_{\text{sim}}=\text{None}\) denotes that we only retain the scripts that contain constraint words.
We find that: _1)_ Overall, all baselines achieve unsatisfactory results on planning for specific goals, with InstructGPT outperforming others. Especially, the scripts with _intent_-type constraints have the worst accuracy, and adding "_let's think step-by-step_" does not help much; _2)_ The retrieval from wikiHow does not lead to the desired script; _3)_ With our method, InstructGPT can generate scripts of higher quality by a large margin; _4)_ Replacing the similarity function with embeddings from other pre-trained models results in performance drops.
**What types of errors do LLMs usually make in this task?** To respond to the motivations of our method, we conduct detailed analyses to investigate why LLMs fail. We evaluate the model planning performance in two aspects: _1) Semantic completeness_ (SE): whether the steps in the script are missing, repeated or in the wrong order; _2) Faithfulness to the constraints_ (FE): whether the script is faithful to the constraints and the steps are coherent (related) within the script. We define six types of errors upon the two, _i.e._, _i)_ SE: missing, repeated step(s) and wrong order and _ii)_ FE: no constraint, unrelated step(s) or incoherent step(s).6 Annotators are asked to review 100 scripts generated by InstructGPT and mark the error types.7 Results in Figure 3 show that: _1)_ The semantic completeness in generated scripts is acceptable, but the faithfulness to the constraints cannot be guaranteed; _2)_ Our method greatly improves the planning quality both in semantic completeness and faithfulness.
Footnote 6: The detailed definitions can be found in Appendix B.4.
Footnote 7: The case study of how InstructGPT fails at planning for specific goals is shown in Appendix B.5.
**On what kinds of goals does InstructGPT typically fail?** So far, we know that LLMs fail at specific goals, especially those with intent-type constraints. We dig into the more fine-grained topic categories of constraints defined in wikiHow. The heat map in Figure 4 shows that the planning performance of InstructGPT varies considerably for goals of different categories, and the planning accuracy for each category improves greatly with our method.
## 5 Script Distillation from LLMs
Since LLMs are costly to deploy, it is essential to enable language planning ability for smaller, specialized models. Creating datasets is an inevitable step to this end. However, previous datasets do not enable planning for specific goals Sakaguchi et al. (2021); Lyu et al. (2021), and manual dataset annotation is expensive and highly demanding. Thus, we follow the idea of _symbolic knowledge distillation_ West et al. (2022) to distill constrained language planning datasets from LLMs.
### CoScript: A Dataset for Constrained Language Planning
We now apply our method to build a first-of-its-kind **C**onstrained **Script** dataset for language planning, named CoScript. Experiments in § 4 show that LLMs can generate high-quality specific goals and scripts with our over-generate-then-filter framework. We now scale up the experiments
Figure 4: The heat-map depicts the human-evaluated script accuracy of different methods in different topic categories for specific goals.
Figure 3: Errors of the generated scripts by human evaluation. The axis of the radar chart is in _log-scale_. Notably, ours reduces to virtually one dot in the graphic because it does not have many errors (0-1%). SE and FE denote semantic completeness and faithfulness error.
for a large-scale dataset. We collect 14,945 article titles as seed abstract goals and retrieve 34,260 similar goals with scripts from wikiHow as examples to prompt InstructGPT (175B) for data generation. Following § 4, the dataset construction process consists of three steps, as shown in Figure 2: 1) we first enumerate constraint types with examples for InstructGPT and obtain specific goals (after de-duplication) based on the seed abstract goals; 2) InstructGPT then over-generates \(K\) scripts for the specific goals; and 3) our filter framework selects the faithful scripts as the final data.8
Footnote 8: Details about hyper-parameters and costs can be found in Appendix C.1.
In total, we generate 55,000 specific goals with corresponding scripts. We randomly choose 2,000 samples as the validation set and 3,000 samples as the test set. To ensure the quality of the validation and test sets, we ask crowd-sourced workers to find and revise the incorrect samples. From the annotations collected for error identification on these 5,000 samples, we estimate an accuracy of 97.80% for specific goals and 94.98% for constrained script generation, consistent with the results in Table 3.
### Dataset Analysis
**Script Diversity Analysis** As shown in Table 4, despite the larger scale of wikiHow, CoScript has more specific goals than wikiHow and is thus valuable for the constrained language planning task. Besides, previous studies Fu et al. (2021); Narayan et al. (2022) find that the texts generated by LMs may be too repetitive and less diverse. To address this concern, we compare CoScript with proScript Sakaguchi et al. (2021), a recent goal-oriented script dataset created by crowd-sourcing. As reported in Table 4, _1)_ CoScript is much larger than proScript, with more scripts and a higher number of steps per script; _2)_ CoScript exhibits high lexical diversity, with more unique words than the human-written proScript.
**Constraint Analysis** Figure 5 shows the constraint distribution of CoScript. We compute the proportions of constraint types with their representative categories obtained from Probase Wu et al. (2012), and the initial words of constraint instances. We find that CoScript shows high heterogeneity and pluralism in the generated specific goals. Interestingly, InstructGPT tends to start with the word "_if_" or "_when_" for hypothetical constraints (_e.g._, "_if someone is lactose intolerant_" for "_make a cake_"), suggesting the potential for future research on counterfactual reasoning in language planning. We also analyze the domain distribution of CoScript in Appendix C.2.
## 6 Constrained Language Planning with Specialized Models
With CoScript, we can train smaller but specialized models for constrained language planning.
### Experimental setup
**Baselines** We use GPT-2 (causal LM) Radford et al. (2019) and T5 (encoder-decoder LM) Raffel et al. (2020) as baselines. Given goals, the models are trained to generate a list of steps \(\mathbf{S}\) for planning. Moreover, we adopt the idea of retrieval-augmented text generation Lewis et al. (2020) and add retrieved examples in the input to improve generation quality.
**Metrics** We use BLEU Papineni et al. (2002), ROUGE-L Lin (2004) and BERTScore Zhang et al. (2020) as automatic metrics to measure semantic completeness. We also train a binary classification model to decide whether the generated texts are faithful to the constraints. Specifically, we
\begin{table}
\begin{tabular}{l r r r r} \hline \hline
**Dataset** & **\# Size** & **\# UT** & \(\mathbf{Avg}_{\mathcal{G}_{c}}\) **\#** & \(\mathbf{Avg}_{\mathcal{S}}\) **\#** \\ \hline
proScript & 6,414 & 8,826 & 0 & 5.45 \\
wikiHow & **112,111** & **158,117** & 0.42 & 5.93 \\
CoScript & 55,000 & 76,317 & **4.13** & **5.96** \\ \hline \hline
\end{tabular}
\end{table}
Table 4: Statistics of CoScript and the previous script datasets proScript and wikiHow, w.r.t. data size, number of unique tokens (**# UT**), the average number of specific goals for each abstract one (\(\mathbf{Avg}_{\mathcal{G}_{c}}\) **#**), and the average number of steps in scripts (\(\mathbf{Avg}_{\mathcal{S}}\) **#**).
Figure 5: Statistics of constraint types in CoScript dataset, with representative topic categories or the first words for each constraint type.
collect 50,000 samples from CoScript as positive examples, and shuffle the goals and scripts to construct 50,000 negative ones. Then, we fine-tune a DeBERTa (v3 large) model Khashabi et al. (2020) for classification, achieving 91.53% accuracy on the test set.
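A minimal sketch of how such positive/negative pairs could be assembled is given below; it is illustrative only, and the sampling details of the actual dataset construction and the DeBERTa fine-tuning itself are not shown.

```python
import random
from typing import List, Tuple

def build_classifier_data(pairs: List[Tuple[str, str]],
                          seed: int = 0) -> List[Tuple[str, str, int]]:
    """Positives keep the original (goal, script) pairing; negatives pair each
    goal with a script written for a different goal (label 1 = faithful)."""
    rng = random.Random(seed)
    scripts = [s for _, s in pairs]
    data = [(g, s, 1) for g, s in pairs]
    for goal, script in pairs:
        alternatives = [s for s in scripts if s != script]
        wrong = rng.choice(alternatives) if alternatives else script
        data.append((goal, wrong, 0))
    rng.shuffle(data)
    return data
```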
**Training Data** To gain a fine-grained perspective on planning toward specific goals, we train LMs on both wikiHow (\(\mathbb{D}_{\tt tr}^{\tt wi}\)) and CoScript (\(\mathbb{D}_{\tt tr}^{\tt Co}\)), and test them on the CoScript test set (\(\mathbb{D}_{\tt te}^{\tt Co}\)). Both datasets share _similar_ scripts, but the goals in wikiHow are mostly abstract ones. For wikiHow, we also randomly collect 50,000 goals with scripts as \(\mathbb{D}_{\tt tr}^{\tt wi}\).
### Results
The comparison of models trained on wikiHow and CoScript is shown in Table 5. In general, LMs trained on CoScript outperform those trained on wikiHow. T5 outperforms GPT-2 in faithfulness, possibly because its encoder-decoder framework is better at handling input information. However, GPT-2 outperforms T5 on the other text generation metrics for scripts. This could be because CoScript is distilled from InstructGPT, leading to a biased data distribution that favors decoder-only causal language models, _e.g._, the GPT family.
Based on Table 5, we find that augmenting models with retrieved examples can improve semantic completeness. However, the constraint faithfulness could be undermined as models tend to mimic the retrieved examples. To further understand the role of retrieval augmentation, we conduct a manual evaluation based on 100 random samples generated by T5 (3B) with and without retrieval augmentation. We discover that 57% of T5's results are correct, and the number goes up to 70% with retrieval augmentation. Thus, although we observe a slight drop in faithfulness score (\(93.00\to 92.53\) from Table 5), retrieval augmentation still brings much improvement over the base model.
**Faithfulness of Constraints of Different Types** Will LLMs' planning preferences for constraint types pass to the specialized models? We find that the results in Table 6 are consistent with those of LLMs (Table 3). Specialized models are also the worst at specific goals with intent-type constraints.
**CoScript vs. wikiHow** We mix the two datasets together with a hyper-parameter \(\alpha\) that controls the proportion of the two datasets, where the new training
\begin{table}
\begin{tabular}{l c c c c} \hline \hline
**Model** & **Modifier** & **Method** & **Intent** & **All** \\ \hline
T5 (large) & **91.54** & **92.57** & **90.21** & **91.81** \\
+_retrieval_ & 87.39 & 85.86 & 84.44 & 86.03 \\
GPT-2 (large) & **78.78** & **78.77** & 69.48 & **76.73** \\
+_retrieval_ & 77.33 & 78.28 & **70.97** & 76.30 \\ \hline \hline
\end{tabular}
\end{table}
Table 6: **Faithfulness** scores of specialized models for each constraint type on the test set of CoScript.
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline
**Model** & **Size** & **Modifier** & **Method** & **Intent** & **All** \\ \hline
GPT-3 & 175B & 30.00 & 22.22 & 25.00 & 25.00 \\
Codex & 175B & 46.67 & 55.56 & 18.75 & 47.00 \\
InstructGPT & 175B & 73.33 & **74.08** & 42.86 & 69.00 \\
T5 (wikiHow) & 3B & 20.00 & 12.96 & 6.25 & 14.00 \\
T5 (CoScript) & 3B & 63.33 & 55.55 & 43.75 & 56.00 \\
+_retrieval_ & 3B & **76.66** & 66.66 & **75.00** & **71.00** \\ \hline \hline
\end{tabular}
\end{table}
Table 7: **Accuracy (%) of scripts generated by different models. We fine-tune a T5 (3B) on wikiHow and CoScript while deploying LLMs via few-shot in-context learning.**
\begin{table}
\begin{tabular}{l c c c c} \hline \hline
**Model** & **Faithful** & **ROUGE** & **BLEU** & **BERTScore** \\ \hline
\multicolumn{5}{c}{_Trained on wikiHow_} \\
GPT-2 & 64.93 & 20.28 & 17.91 & 80.74 \\
GPT-2 (large) & 62.20 & 23.74 & 24.69 & 83.63 \\
T5 (base) & 86.13 & 20.30 & 15.48 & 79.02 \\
T5 (large) & 85.13 & 22.95 & 20.60 & 82.27 \\
T5 (3B) & 77.90 & 20.72 & 16.95 & 81.01 \\ \hline
\multicolumn{5}{c}{_Trained on CoScript_} \\
GPT-2 & 74.60 & 28.09 & 26.75 & 84.72 \\
GPT-2 (large) & **76.73** & 30.60 & 30.22 & 85.77 \\
+_retrieval_ & 76.30 & **32.78** & **32.92** & **86.41** \\
T5 (base) & 91.53 & 26.53 & 22.06 & 83.14 \\
T5 (large) & 91.87 & 29.40 & 29.14 & 83.48 \\
+_retrieval_ & 86.03 & 35.91 & 36.10 & 87.39 \\
T5 (3B) & **93.00** & 45.68 & 43.83 & 90.18 \\
+_retrieval_ & 92.53 & **46.54** & **47.62** & **90.84** \\ \hline \hline
\end{tabular}
\end{table}
Table 5: Overall script generation performance for models trained on different training sets. Note that the test set is the same for all models.
Figure 6: The faithfulness curves when altering the proportions of CoScript (\(\alpha\)) and wikiHow (\(1-\alpha\)) in a fixed-size training set.
set is \(\mathbb{D}_{\tt tr}=\alpha\mathbb{D}_{\tt tr}^{\tt Co}+(1-\alpha)\mathbb{D}_{\tt tr}^{\tt wi}\). By altering \(\alpha\) (with a constant data size), the faithfulness curves in Figure 6 show that adding more data from CoScript consistently improves model performance in constraint faithfulness. Thus, training on CoScript contributes to more faithful planners.
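As an illustration of the mixing scheme, the following minimal sketch (function and variable names are hypothetical) builds a fixed-size training set from the two sources according to \(\alpha\).

```python
import random
from typing import List

def mix_training_sets(coscript: List[dict], wikihow: List[dict],
                      alpha: float, size: int, seed: int = 0) -> List[dict]:
    """Fixed-size training set: a fraction alpha from CoScript, 1 - alpha from wikiHow."""
    rng = random.Random(seed)
    n_co = int(round(alpha * size))
    mixed = rng.sample(coscript, n_co) + rng.sample(wikihow, size - n_co)
    rng.shuffle(mixed)
    return mixed
```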
**Specialized Models vs. LLMs** We further fine-tune a T5 (3B) on CoScript and wikiHow to generate scripts for the specific goals in § 4.4, which are held out from the training set. Table 7 shows that T5 fine-tuned on CoScript with retrieval augmentation can generate scripts of higher quality than most LLMs in Table 3, indicating that smaller models can surpass larger models when properly trained on suitable datasets.
## 7 Conclusion
In this paper, we define planning toward specific goals with constraints. We propose a better prompting method for LLMs, and distill a novel dataset from LLMs (CoScript) to improve the constrained language planning ability of specialized models. Experiments show that our method improves the planning quality of LLMs for specific goals, and smaller models trained on CoScript even outperform LLMs. We hope the CoScript dataset will be a valuable resource to advance the research on language planning with more complex and diverse goals and constraints.
### Limitations
The proposed method for improving LLMs is a post-hoc re-ranking approach, and we do not improve LLMs themselves due to the difficulty of fine-tuning LLMs. Besides, we improve the ability of constrained language planning for smaller models from the perspective of building task-related datasets, but do not consider investigating the model itself, other than adopting retrieval augmentation. In addition, because automatic metrics for generated text are limited, the automatic evaluation of this paper may result in an overestimation or underestimation of the mentioned methods, though we attempt to mitigate this by incorporating a moderate amount of human evaluation. Despite the advanced planning capabilities of newer language models, our work remains significantly valuable to the knowledge distillation of these LLMs into smaller and more cost-effective models.
We also discover several limitations of the proposed CoScript datasets. First, the specific goal explored in this work only inherits from an abstract one with one extra constraint. However, in real-life situations, complex planning may involve multiple constraints, which we do not investigate in this work. Another limitation of CoScript is that our dataset is generated from InstructGPT, and thus the data distributions may be biased to favor causal language models. This is a common issue with machine-generated datasets, which we address by manually curating CoScript's validation and test sets. Furthermore, there are still some incorrect samples (about 5%) in the training data without manual correction due to the limits of budget and time. Last but not least, we only consider whether the script can be executed at the human level. The script execution for robots Huang et al. (2022); Lu et al. (2022) is unstudied in our work, and there still exist huge gaps in transferring complex human language to one that is understandable and executable by robots.
### Ethics Statement
**Use of Human Annotations** We protect the privacy rights of crowd-sourced workers and pay them above the local minimum wage. We use Amazon Mechanical Turk (AMT) and require 300 annotators to be located in the U.S. as a proxy for English competency. We pay at a rate of $6/hour for 20 samples. We acknowledge that constructing datasets from large language models may suffer from toxic language and pose severe risks to society Ousidhoum et al. (2021); Baldini et al. (2022). Therefore, we ask the annotators to discard offensive and harmful data when reviewing CoScript. However, there may still be prejudicial data in our final dataset that goes unnoticed.
**wikiHow Source** The content available on wikiHow is shared under a Creative Commons License (CC-BY-NC-SA)9, which permits others to share, copy, distribute, and adapt the content for non-commercial purposes. In our research, we use wikiHow as an initial dataset for providing examples to construct our dataset. Our dataset is released on GitHub and is only used to advance academic research on language planning with more complex and diverse goals and constraints. Therefore, we emphasize that our usage aligns with the requirements under the license.
Footnote 9: [https://creativecommons.org/licenses/by-nc-sa/3.0/](https://creativecommons.org/licenses/by-nc-sa/3.0/)
**Covered Domains in CoScript** CoScript is derived from wikiHow and encompasses 19 daily life goal categories (as illustrated in Figure 8). These categories cover a wide range of practical topics of everyday life. However, as shown in Figure 8, we emphasize that sensitive and high-risk domains, including medical, legal, and high-stakes financial advice, are excluded from the dataset to minimize potential risks related to inaccurate or misleading information. We encourage researchers and developers to leverage this dataset to build models that accurately understand and respond to user queries on various non-sensitive, non-critical topics.
**Factuality, Toxicity and Biases** We recognize that the factuality of generated content is crucial, especially in high-stakes scenarios. Therefore, annotators are asked to verify the consistency between generated scripts and goals with constraints for validation and test sets. They also assess and revise the content to minimize hallucinations, factual errors, and any inappropriate or misleading information.
Previous work found that LLMs may generate toxic contents Cao et al. (2022); Liu et al. (2022). We highlight that our dataset is not intended for safety-critical applications or as a substitute for expert advice in such domains. Annotators are specifically instructed to discard offensive and harmful data during the review of the validation and test sets in CoScript. However, despite these precautions, there may still be some prejudicial data that goes unnoticed in our final dataset.
## Acknowledgement
We thank the anonymous reviewers for their valuable comments, and Wei Shi and Shuang Li from Fudan University for their useful suggestions for the manuscript. This work is supported by the Chinese NSF Major Research Plan (No.92270121), Shanghai Science and Technology Innovation Action Plan (No.21511100401) and the Science and Technology Commission of Shanghai Municipality Grant (No. 22511105902).
|
2304.12407
|
Exploring Models of Running Vacuum Energy with Viscous Dark Matter from
a Dynamical System Perspective
|
Running vacuum models and viscous dark matter scenarios beyond perfect fluid
idealization are two appealing theoretical strategies that have been separately
studied as alternatives to solve some problems rooted in the $\Lambda$CDM
cosmological model. In this paper, we combine these two notions in a single
cosmological setting and investigate their cosmological implications, paying
particular attention in the interplay between these two constituents in
different cosmological periods. Specifically, we consider a well-studied
running vacuum model inspired by renormalization group, and a recently proposed
general parameterization for the bulk viscosity $\xi$. By employing dynamical
system analysis, we explore the physical aspects of the new phase space that
emerges from the combined models and derive stability conditions that ensure
complete cosmological dynamics. We identify four distinct classes of models and
find that the critical points of the phase space are non-trivially renewed
compared to the single scenarios. We then proceed, in a joint and complementary
way to the dynamical system analysis, with a detailed numerical exploration to
quantify the impact of both the running parameter and the bulk viscosity
coefficient on the cosmological evolution. Thus, for some values of the model
parameters, numerical solutions show qualitative differences from the
$\Lambda$CDM model, which is phenomenologically appealing in light of
cosmological observations.
|
Norman Cruz, Gabriel Gomez, Esteban Gonzalez, Guillermo Palma, Angel Rincon
|
2023-04-24T19:38:04Z
|
http://arxiv.org/abs/2304.12407v1
|
# Exploring Models of Running Vacuum Energy with Viscous Dark Matter
###### Abstract
Running vacuum models and viscous dark matter scenarios beyond perfect fluid idealization are two appealing theoretical strategies that have been separately studied as alternatives to solve some problems rooted in the \(\Lambda\)CDM cosmological model. In this paper, we combine these two notions in a single cosmological setting and investigate their cosmological implications, paying particular attention to the interplay between these two constituents in different cosmological periods. Specifically, we consider a well-studied running vacuum model inspired by the renormalization group, and a recently proposed general parameterization for the bulk viscosity \(\xi\). By employing dynamical system analysis, we explore the physical aspects of the new phase space that emerges from the combined models and derive stability conditions that ensure complete cosmological dynamics. We identify four distinct classes of models and find that the critical points of the phase space are non-trivially renewed compared to the single scenarios. We then proceed, in a joint and complementary way to the dynamical system analysis, with a detailed numerical exploration to quantify the impact of both the running parameter and the bulk viscosity coefficient on the cosmological evolution. Thus, for some values of the model parameters, numerical solutions show qualitative differences from the \(\Lambda\)CDM model, which is phenomenologically appealing in light of cosmological observations.
## I Introduction
The standard cosmological model, also known as the \(\Lambda\)CDM model, is currently the most successful theoretical framework for describing the evolution of the universe [1; 2; 3; 4]. However, as the precision of cosmological observations is continuously increasing [5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17], the model is facing new challenges in maintaining observational consistency [18; 19]. Despite significant progress in our current understanding of the universe, there are still some issues that require further investigation. These include: i) the Hubble and \(\sigma_{8}\) tensions, which refer to discrepancies between the values predicted by the model and observations of the Hubble constant [19; 20] and the amplitude of matter fluctuations on large scales [5; 21; 22; 23], respectively; and ii) a lack of comprehension of the physics involved in the dark sector. One of the most crucial conceptual problems is related to the nature of dark matter (DM) [24], which comprises approximately \(80\%\) of the total matter of the universe. Another well-known problem associated with the \(\Lambda\)CDM model is the cosmological constant problem [25; 26] (CC problem for short), which arises due to the discrepancy between the estimated value for the vacuum energy density (VED) provided by quantum field theory and the observed value inferred by type Ia supernovae (SNe Ia) [27].
Given the significant discrepancies previously mentioned, as well as the physical argument that an expanding universe is not expected to have a static vacuum energy density, scientists have suggested exploring a smooth time dependence of it. One option along this line of thinking is to use a decreasing function for the cosmological constant that could potentially address not only the Hubble constant tension but also bring the predicted value closer to the observed one [28]. A more general time dependence of the vacuum energy density has been shown to be implicitly given through the Hubble parameter and its time derivatives, \(\rho_{\rm vac}(H,\dot{H})\) [29], which is motivated by perturbative results of Quantum Field Theory in a curved classical background [30]. This is an interesting cosmological scenario in contrast to other physical proposals based on dynamical dark energy [31; 32], which assume that the cosmological constant is small or negligible compared to the total energy density [31].
A more generalized Ansatz for the vacuum energy density has been proposed by considering Renormalization Group ideas, which regard it as a running quantity depending on the typical energy scale of the processes involved [33]. This strategy has been used in ref. [34] to deduce a functional form for the vacuum energy density, which depends dynamically on the Hubble constant and on its time derivative. We will use this Ansatz for \(\rho_{\rm vac}\), suited to describe the late-universe evolution, which replaces a constant vacuum energy density with its "running" counterpart. For technical details see [29].
The other dark component of the universe, which is commonly described by a pressureless fluid, is known as cold dark matter (CDM). This component is responsible for the structure formation of the Universe. In a broader physical setting, DM can include viscosity and even warmness due to late decoupling from the primordial plasma. In fact, some of the present tensions of the standard model have been alleviated with the inclusion of viscosity in the dark sector. For example, the Hubble tension, which exhibits a discrepancy of \(4.4\sigma\) between the measurements obtained from Planck CMB and the local ones obtained in [35] for \(H_{0}\), has been tackled in [36; 37; 38]. The \(\sigma_{8}\) tension (where \(\sigma_{8}\) is the r.m.s. fluctuation of perturbations at the \(8h^{-1}\) Mpc scale), which emerges when confronting large-scale structure (LSS) observations and Planck CMB data [39; 40], can be attenuated by assuming a viscous DM component [39]. The EDGES experiment has observed an excess of radiation at \(z\approx 17\) [41], which is not predicted by the \(\Lambda\)CDM model during the reionization epoch. This excess can indeed be explained by the presence of a viscous DM component [42].
Despite the fact that dynamical viscous dark matter models and the presence of a running vacuum energy density have the potential to alleviate some of the tensions present in the \(\Lambda\)CDM model, it should be noted that, taken separately, they have limitations in successfully describing the entire cosmological evolution, beyond their underlying physical motivations. However, by combining both hypotheses, we seek a more complete physical scenario that addresses the limitations of each single approach, aiming to associate, for instance, the rate of structure formation with dissipative effects of CDM, and the tension related to the Hubble constant with the running vacuum energy density. The primary question we want to address in this paper is whether incorporating these two ideas into a more comprehensive cosmological framework could provide a completely consistent cosmological evolution.
Moreover, together with the above-mentioned argument, in ref. [43] different varying viscous DM models are used to address the Hubble and \(\sigma_{8}\) tensions of the standard \(\Lambda\)CDM model. It is shown that although the proposed dissipative models tend to reduce the \(\sigma_{8}\) tension, they aggravate the Hubble tension, which leads the authors to conclude that in addition to DM viscosity a dynamical presence of relativistic universe components or dark energy should be required to simultaneously alleviate both tensions.
In light of the aforementioned discussions concerning the two possible modifications of the dark sector of the standard model, the main goal of the present article is to investigate (using the dynamical system approach combined with numerical methods) the critical points and their stability properties, which account for the dynamical expansion of the universe. A fundamental requirement for a cosmological model is to accurately describe the complete evolution of the universe, which includes radiation, matter, and dark energy periods. Although the radiation-dominated period is often neglected for simplicity, it is not trivial to incorporate it into viscous dark matter models using certain parametrizations, as shown in [44]. Therefore, it is necessary to ensure that the three main eras of cosmic evolution are present as critical points with the appropriate stability properties within a consistent model parameter region. The proposed model includes a running vacuum density and dissipative dark matter.
Lastly, a previous study based on dynamical system analysis, which also considers a running vacuum and viscous DM with the particular parameterization \(\xi=\xi_{0}H\), where \(\xi\) is the usual bulk viscosity coefficient, was performed in [45]. Nevertheless, aside from the particular Ansatz used for the dissipation, that analysis is restricted to late times, as the radiation component was not included. In the present paper, we aim to address these limitations by considering a more general parametrization for the bulk viscosity given by [44] and a running vacuum energy density. This general parametrization has the advantage of simultaneously including kinematic effects, represented by the Hubble parameter, and dynamical ones, played by the dark matter energy density. In this way, the bulk viscosity associated with DM is consistently handled, vanishing as the matter density does. More relevant is the fact that Friedmann's equations can be written in the form of an autonomous dynamical system for any value of the exponent that characterizes dissipation within Eckart's framework of relativistic non-perfect fluids. In addition, we incorporate the radiation component of the universe for consistency, as discussed above.
The present paper is organized as follows: In Section II the main ingredients of the model, the evolution equations, and the physical motivations behind the Ansatze chosen for the running vacuum energy density and the DM viscosity coefficient are presented. In Section III a dynamical system analysis is performed, whose phase space is spanned by the variables \(\Omega_{r}\), \(\Omega_{m}\) and \(\Omega_{\rm vac}\) (see Eq. (12) for details). Furthermore, the fixed points and their stability properties are explicitly computed. In particular, based on the parametrization of the running vacuum density and the corresponding one for the dissipative DM, four prominent models are thoroughly studied in subsections III.1-III.4. In Section IV we describe the numerical procedure used for the integration of the classes of models considered in this paper. Finally, in Section V the main findings of the present article are summarized, and a physical discussion of the cosmological scenarios resulting from the models studied is provided.
## II The model
In this section we describe the cosmological model we propose to study the cosmological dynamics of the universe. It represents a two-fold extension of the \(\Lambda\)CDM model, which includes two essential aspects that have been extensively used to alleviate or solve some problems associated with the standard cosmological model. Firstly, we consider a vacuum energy density described by a running coupling depending on both the Hubble parameter and its cosmological time derivative, \(\rho_{\rm vac}(H,\dot{H})\). This Ansatz is not only motivated by the intuitive observation that an expanding universe would quite improbably preserve a static value throughout its complete evolution, but it is also motivated by fundamental physics; in fact, a smoothly evolving vacuum energy density is suggested by quantum field theory in curved spacetime (see [29] and references therein). Secondly, we propose a more realistic fluid description of the dark matter component including dissipation through a bulk viscosity coefficient, which has been recently proposed by the authors [44]. This proposal leads to remarkable advantages, such as the fact that the cosmological evolution equations can be written in the form of an autonomous dynamical system suited to be studied with stability theory, and that the bulk viscosity effects fade away when the dark matter density vanishes.
As already mentioned in the above paragraph, for the running vacuum energy density we will use a two-parameter model inspired by a phenomenological application of Renormalization Group analysis, whose cosmological consequences have been studied in [29], and which can be written as
\[\rho_{\rm vac}(H)=\frac{3}{8\pi G_{N}}\Big{(}c_{0}+\nu H^{2}+\tilde{\nu}\dot{H }\Big{)}+\mathcal{O}(H^{4}), \tag{1}\]
where \(\nu\) and \(\tilde{\nu}\) are both dimensionless parameters, and it is expected that \(|\nu|\) and \(|\tilde{\nu}|\) are lower than one. Indeed, according to QFT calculations, the most suitable values of the set \(\{\nu,\tilde{\nu}\}\) are around \(10^{-3}\), which has also been obtained from constraints using SNe Ia+BAO+H(z)+LSS+CMB cosmological observations [46]. The above phenomenological Ansatz is suited for a wide range of the universe expansion excluding early times, as the Hubble parameter grows very fast when, for instance, the inflationary epoch is approached.
We are particularly interested in two subclasses of running vacuum models, directly based on Eq. (1), whose cosmological viability we check from a dynamical system perspective. The constant parameters \(\nu\) and \(\tilde{\nu}\) account for the dynamical character of the vacuum energy density, and \(c_{0}\) is a constant determined by the boundary condition \(\rho_{\rm vac}(H_{0},\dot{H}^{(0)})=\rho_{\rm vac}^{(0)}\), where the superscript \((0)\) refers to the present value, i.e. \(a_{0}=1\). Thus, by exploiting the independence of both dynamical contributions, two classes of running vacuum models can arise. The first possibility we want to investigate corresponds to the case with \(\tilde{\nu}=0\). This is indeed one of the most treated cases in scenarios of variable vacuum energy density. As to the second class of running vacuum models, we will focus on the particular choice \(\tilde{\nu}=\nu/2\), since it has the potential advantage of alleviating some tensions that permeate the \(\Lambda\)CDM cosmological model [47]. The inclusion of such a term has appealing consequences in the conservation law for the involved components because, as will be seen, it allows the vacuum energy density to be written only in terms of the matter component, in contrast to the first class of models1. Hence, when the radiation component is considered as a part of the total energy density for the first class of models, the vacuum energy density will depend on it, whereby all components are coupled directly to each other apart from the gravitational interaction. This feature can produce appreciable differences in the background cosmological evolution. It is expected however that the radiation component naturally becomes negligible in the late-time dynamics, and that the effects of the running of the vacuum in the radiation era can be small enough not to impact the thermal history of the Universe. But are there consequences of the running of the vacuum energy density, from the dynamical system perspective, on the emerging critical points? If so, how much do the stability conditions change with respect to the reference \(\Lambda\)CDM model? These aspects are the ones to be assessed in this work.
Footnote 1: Notice that even though \(\dot{H}\) involves a term associated to the radiation energy density it cancels out with the coming one from \(H^{2}\).
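As a side remark on the boundary condition mentioned above, evaluating Eq. (1) at the present time and neglecting the \(\mathcal{O}(H^{4})\) terms fixes the constant as

\[c_{0}=\frac{8\pi G_{N}}{3}\rho_{\rm vac}^{(0)}-\nu H_{0}^{2}-\tilde{\nu}\dot{H}^{(0)},\]

which is the relation implicitly used whenever the present-day value \(\rho_{\rm vac}^{(0)}\) is imposed as an input.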
According to the previous discussion and considering the running vacuum model in Eq. (1), the Friedmann and the acceleration equations are respectively written as
\[3H^{2} = 8\pi G_{N}\left(\rho_{r}+\rho_{m}+\rho_{\rm vac}\right), \tag{2}\] \[3H^{2}+2\dot{H} = -8\pi G_{N}\left(P_{r}+P_{m}^{\rm eff}+P_{\rm vac}\right), \tag{3}\]
where the usual polytropic relation for radiation \(P_{r}=\rho_{r}/3\) is set, and the one for the vacuum energy density \(P_{\rm vac}=-\rho_{\rm vac}\) holds provided that Eq. (1) is identified as the true vacuum energy density. One would expect however some deviation from \(w_{\rm vac}=-1\) at, for instance, early times given the dependence of \(\rho_{\rm vac}\) on the Hubble parameter: Eq. (1) tells us that once \(\rho_{\rm vac}\) is promoted to a dynamical quantity, it can evolve so that an effective equation of state may take an appreciably different value from the standard one.
For the dark matter fluid, the bulk viscous pressure \(\Pi\) is introduced as an effective pressure to allow more phenomenological outcomes within a cosmological setting beyond the standard running vacuum models:
\[P_{m}^{\rm eff}=P_{m}+\Pi=-3H\xi, \tag{4}\]
where \(\xi\) is the bulk viscosity coefficient, which respects the second law of thermodynamics provided that \(\xi>0\). As previously said, this extra ingredient has been introduced to obtain a more realistic fluid description of DM, to enrich the phase space of the system, and also to allow a suitable comparison with typical bulk viscosity models with \(\nu=\tilde{\nu}=0\), which are minimal extensions of the \(\Lambda\)CDM cosmological model. In doing so, we will use a recently proposed general parameterization for the viscosity coefficient \(\xi\)
\[\xi=\frac{\xi_{0}}{8\pi G_{N}}H^{1-2s}H_{0}^{2s}\left(\frac{\rho_{m}}{\rho_{m} ^{0}}\right)^{s}=\frac{\hat{\xi}_{0}}{8\pi G_{N}}H\;\Omega_{m}^{s}, \tag{5}\]
(for technical aspects see ref. [44]). The above parametrization has several advantages; among them we mention that it encompasses the well-known models \(\xi=\xi(H)\) (corresponding to \(s=0\)) and \(\xi=\xi(\rho_{m})\), or more precisely2 \(\xi\sim\rho_{m}^{1/2}\) (for \(s=1/2\)), and that, in turn, it is very useful when writing the resulting evolution equations in the form of an autonomous system through the second equality (r.h.s. of Eq. (5)). Notice that \(\hat{\xi}_{0}\) and \(\xi_{0}\) are both dimensionless constants, related to each other by \(\hat{\xi}_{0}=\xi_{0}/(\Omega_{m}^{0})^{s}\). It is very instructive now to formulate the conservation law for each component for the two classes of models discussed above. The Bianchi identities thus establish the global conservation law
Footnote 2: Notice that taking \(s=1\) and assuming \(H\propto\rho_{m}^{1/2}\) also leads to \(\xi\sim\rho_{m}^{1/2}\). We stress however that this limit is achieved only when the universe is in the matter-domination epoch. So it is expected that \(s=1\) and \(s=1/2\) differ when dark matter is subdominant but not negligible.
\[\dot{\rho}_{r}+4H\rho_{r}+\dot{\rho}_{m}+3H(\rho_{m}+\Pi)=-\dot{\rho}_{\rm vac}. \tag{6}\]
Let us write the conservation law considering Eqs. (1), (2) and (3) keeping \(\tilde{\nu}\) free to trace its effect at the level of the conservation equations:
\[\dot{\rho}_{r}+4H\rho_{r}+\dot{\rho}_{m}+3H(\rho_{m}+\Pi)= \tag{7}\] \[\nu\Big{(}3(\Pi+\rho_{m})+4\rho_{r}\Big{)}H+\frac{3}{2}\tilde{\nu }\left(\dot{\Pi}+\dot{\rho}_{m}+\frac{4}{3}\dot{\rho}_{r}\right).\]
Consider, for the time being, the linear relation \(\tilde{\nu}=A\nu\), with \(A\) some arbitrary value. Once the terms associated with the same fluid have been grouped, the continuity equations for each fluid take the form
\[\dot{\rho}_{r}(1-2A\nu)+4H\rho_{r}(1-\nu)=0, \tag{8}\] \[\dot{\rho}_{m}\left(1-\frac{3}{2}A\nu\right)+3H(\rho_{m}+\Pi)(1- \nu)-\frac{3}{2}A\nu\dot{\Pi}=0, \tag{9}\]
where \(\dot{\Pi}\) is a function of \(\rho_{m}\) and \(\dot{\rho}_{m}\) determined by the viscous model of Eq. (5). It is interesting to see that for the particular value \(A=1/2\), or equivalently \(\tilde{\nu}=\nu/2\), the standard conservation equation for radiation holds whereas the one for the dark matter fluid is modified3. It means that necessarily one of those conservation equations must be modified at the cost of allowing the running of the vacuum energy in the form given by Eq. (1). As to the evolution equation for the vacuum energy density, it has two possible contributions according to the right-hand side of Eq. (7). For the first class of models (with \(\tilde{\nu}=0\)), we can see that the energy densities of the fluids, and not their derivatives, will contribute to the evolution of \(\rho_{\rm vac}\). This implies that the prefactor \((1-\nu)\) in the continuity equations (Eqs. (8) and (9)) cannot be canceled out unless one goes to the trivial case \(\nu=0\). On the contrary, turning on \(\tilde{\nu}\) implies that the evolution equations for radiation and dark matter, in addition to that of the viscous pressure, must be considered to account properly for the time-evolving vacuum energy density. This is, in fact, the main difference between the two classes of models we want to investigate, along with the possibility of taking different values of the power \(s\) in the bulk viscosity coefficient of Eq. (5). This is specified in Table 1. Having thus specified the main ingredients of the general model, we will proceed to perform the dynamical system analysis in the next section.
Footnote 3: Notice that there exists a value for \(A\) (\(A=2/3\)), along with turning off the bulk viscosity, for which the dark matter fluid follows the standard form. We are however interested in the model with \(A=1/2\) without prejudice against other values that may lead to interesting phenomenological features in the radiation era.
## III Dynamical system analysis
We start by defining the dimensionless variables that span the phase space of the system and allow us to rewrite the dynamics in the form of an autonomous system. For practicality, such variables are chosen essentially to be the density parameters associated with each fluid
\[\Omega_{r} \equiv\frac{8\pi G_{N}\rho_{r}}{3H^{2}}, \tag{10}\] \[\Omega_{m} \equiv\frac{8\pi G_{N}\rho_{m}}{3H^{2}},\] \[\Omega_{\rm vac} \equiv\frac{8\pi G_{N}\rho_{\rm vac}}{3H^{2}}.\]
Therefore, the Friedmann constraint takes the usual form
\[\Omega_{r}+\Omega_{m}+\Omega_{\rm vac}=1, \tag{11}\]
and the evolution equations for radiation, dark matter and the vacuum energy are, respectively, for the first class of models
\[\Omega^{\prime}_{r} = \Omega_{r}(-1+4\nu-3\hat{\xi}_{0}\Omega^{s}_{m}+\Omega_{r}-3\Omega_{ \rm vac}),\] \[\Omega^{\prime}_{m} = -3(-1+\nu)\hat{\xi}_{0}\Omega^{s}_{m}-3\hat{\xi}_{0}\Omega^{1+s}_{m }+\Omega_{m}(3\nu+\Omega_{r}-3\Omega_{\rm vac}), \tag{12}\] \[\Omega^{\prime}_{\rm vac} = -3\nu\Omega_{m}+3\hat{\xi}_{0}\Omega^{s}_{m}(\nu-\Omega_{\rm vac} )-3(-1+\Omega_{\rm vac})\Omega_{\rm vac}+\Omega_{r}(-4\nu+\Omega_{\rm vac}),\]
and for the second class as follows
\[\Omega^{\prime}_{r} = \Omega_{r}(-1-3\hat{\xi}_{0}\Omega^{s}_{m}+\Omega_{r}-3\Omega_{ \rm vac}),\] \[\Omega^{\prime}_{m} = -\frac{\Omega_{m}\left(-12(-1+\nu)\hat{\xi}_{0}\Omega^{s}_{m}-9 \nu\hat{\xi}_{0}^{2}\Omega^{2s}_{m}+6(-2+3\nu)\hat{\xi}_{0}\Omega^{1+s}_{m}+ \Omega_{m}((4-3\nu)\Omega_{r}+3(\nu+(-4+3\nu)\Omega_{\rm vac}))\right)}{(-4+ 3\nu)\Omega_{m}-3s\nu\hat{\xi}_{0}\Omega^{s}_{m}}, \tag{13}\] \[\Omega^{\prime}_{\rm vac} = -\nu(3\Omega_{m}-3\hat{\xi}_{0}\Omega^{s}_{m}+4\Omega_{r})+\frac {1}{4}(-3+3\hat{\xi}_{0}\Omega^{s}_{m}-\Omega_{r}+3\Omega_{\rm vac})(-3\nu \Omega_{m}+3\nu\hat{\xi}_{0}\Omega^{s}_{m}-4(\nu\Omega_{r}+\Omega_{\rm vac}))+\] \[\frac{\nu}{4}\left((-3+3s\hat{\xi}_{0}\Omega^{1+s}_{m})\Omega^{ \prime}_{m}-4\Omega^{\prime}_{r}\right).\]
Here the prime denotes the derivative with respect to \(\lambda\equiv\ln a\). It is worth emphasizing that the evolution equation for radiation has been included here for illustration purposes and to write the evolution equation for the vacuum energy density in a compact form, since the system can be reduced to a two-dimensional phase space with the help of Eq. (11). As discussed, for the second class of models the evolution equation for the radiation density parameter retains its standard form in the absence of bulk viscosity, as in the \(\Lambda\)CDM model, and the evolution equation for the vacuum energy density parameter involves derivatives of the other components, as can be seen in the last line of Eqs. (13).
The effective equation of state parameter is defined as usual
\[w_{\rm eff}=-\frac{2}{3}\frac{H^{\prime}}{H}-1, \tag{14}\]
with
\[\frac{H^{\prime}}{H}=\frac{1}{2}(-3+3\hat{\xi}_{0}\Omega^{s}_{m}-\Omega_{r}+3 \Omega_{\rm vac}), \tag{15}\]
enclosing however the main features discussed above, since the energy density parameters at the critical points depend on the model parameters \(\nu\) and \(\hat{\xi}_{0}\), as we shall see. Though so far we have been attempting to describe in a general way the effects of the viscosity associated with the dark matter fluid, it is necessary at this point to take some specific values of the power \(s\) in Eq. (5) for both classes of models in order to carry out the phase space analysis suitably. We will henceforth refer to model 1 for the first class of models (Eq. (12)) with \(s=1/2\), while, for the second class of models (Eq. (13)), model 2 has \(s=1\), model 3 has \(s=1/2\) and model 4 has \(s=0\). In this regard the free parameters of the models are \(\nu\) and \(\hat{\xi}_{0}\).
It is expected however that these parameters are very small, \(\nu,\hat{\xi}_{0}\ll 1\), by construction of the theory itself in addition to the thermodynamical argument (i.e. \(\hat{\xi}_{0}>0\)), in order to account properly for the late-time background dynamics. Different observational constraints also suggest that these parameters are very small. Though those estimations are not strictly applicable to the present models, they will serve as reference values in the dynamical analysis. Nevertheless, for the sake of generality of our analysis we will take \(\nu\) free to see what kind of restriction we can infer from the dynamical system analysis unless
\begin{table}
\begin{tabular}{c c c}
Label & Class of model & Bulk viscosity exponent \(s\) \\ \hline
Model 1 & First class \(\tilde{\nu}=0\) & 1/2 \\
Model 2 & Second class \(\tilde{\nu}=\nu/2\) & 1 \\
Model 3 & Second class \(\tilde{\nu}=\nu/2\) & 1/2 \\
Model 4 & Second class \(\tilde{\nu}=\nu/2\) & 0 \\
\end{tabular}
\end{table}
Table 1: Classification of the viscous running cosmological models to be studied according to the dependence of \(\rho_{\rm vac}\) either on \(H\) only (_first class_, \(\tilde{\nu}=0\) and \(s=1/2\)) or on both \(H\) and \(\dot{H}\) (_second class models_, \(\tilde{\nu}=\nu/2\) and \(s=1,1/2,0\) respectively).
stated otherwise. Moreover, we will discard _a priori_ any possible solution for which those parameters break the aforementioned conditions. General stability conditions will, however, be shown for a better comprehension of how the signs determine the dynamical character of the critical points. When one of those free parameters is kept fixed it means that the phase space is insensitive to it. So we will be left with just one parameter to investigate changes in the general stability conditions for the critical points.
### Model 1: \(\tilde{\nu}=0\) and \(s=1/2\)
Taking \(s=1/2\) in the system of Eq. (12), a set of 5 critical points is found and reported in Table 2. The point (Ia) describes non-standard radiation because of the presence of the parameter \(\nu\), while (Id) is a more general radiation-like critical point depending on both \(\nu\) and \(\hat{\xi}_{0}\). Notice also that there may apparently be a sort of degeneracy between \(\nu\) and \(\hat{\xi}_{0}\) in the critical point (Id); however, the equation of state is only sensitive to \(\nu\). These results are fully consistent with our preliminary expectations regarding the first class of models.
The critical point (Ib) accounts for dark matter domination and exhibits some trace of the running effects of the vacuum energy density as well as the effects of the bulk viscosity but, for the latter, in an effective way in the equation of state parameter. An interesting feature of this point is that if \(\hat{\xi}_{0}\sim\mathcal{O}(1)\), what we will refer to as the strong viscous regime, it can drive the current acceleration expansion of the universe through the bulk viscosity effect when the negative branch is considered. Accordingly, for \(\hat{\xi}_{0}\) positive and the range \(1-\hat{\xi}_{0}^{2}<\nu<1\), it can reveal a phantom-like behavior, otherwise it will be \(w_{\rm eff}>-1\) for the range \(\nu<1-\hat{\xi}_{0}^{2}\). Interestingly, the stability conditions are compatible only with the parameter space associated with a phantom-like solution (see the more detailed discussion about stability below).
The point (Ic) is a sort of scaling solution on account of the parameter \(\hat{\xi}_{0}\), describing de-Sitter-like accelerated expansion in the same fashion as the standard critical point (Ie) does. From here we can conclude that the effect of the running vacuum energy density is, surprisingly, not present in the critical points that can potentially drive the current acceleration4. This does not mean however that late-time measurements of the background cosmology are completely insensitive to the running effect, as was recently assessed in [48]. Although this model is not so different from the standard \(\Lambda\)CDM cosmological model under the scrutiny of parameter estimation, it can fit the background data (slightly) better. Nevertheless, high-redshift measurements can also provide significant evidence of the running vacuum energy density effect and reveal deviations from the \(\Lambda\)CDM model [47].
Footnote 4: Despite its incapability of addressing the accelerated expansion all on its own, the running vacuum energy density may determine the phantom-like character in the case of a unified fluid scenario (\(\hat{\xi}_{0}\sim\mathcal{O}(1)\)) as an alternative solution to the de-Sitter solution through the critical point (Ib). Here the bulk viscosity dark matter fluid and the running vacuum energy density can be seen as a unified fluid description of the dark sector.
Moreover, the running effects can play a crucial role at the perturbation level, possibly leading to different conclusions about structure formation compared to the \(\Lambda\)CDM model [48]. With the inclusion of bulk viscosity, a richer scenario is expected not only at the background level but also for the matter density perturbations. This is an open question that should be dealt with in the future.
Two interesting sub-manifolds of this model are achieved when the bulk viscosity is turned off (\(\hat{\xi}_{0}=0\)) and when the limit \(\nu=1\) is taken. The latter possibility leads clearly to the \(\Lambda\)CDM model. This is however a direct consequence of taking \(\tilde{\nu}=0\) and it is by no means the formal way of recovering the \(\Lambda\)CDM model in the more general setup. Finally, notice that the existence of the critical points and the requirement of positive energy densities are ensured for the range \(\nu<1\).
On the other hand, the model parameters can play a crucial role in determining the stability conditions of the model. This can be checked through the signs of the real parts of the eigenvalues associated with the Jacobian matrix of the linearized system. The dynamical character of the critical points is displayed in Table 3. If the conditions \(\nu,\hat{\xi}_{0}\ll 1\) are taken beforehand, the stability criteria do not change considerably compared to the \(\Lambda\)CDM model. Let us however be more flexible in order to infer the whole range of the model parameters from stability arguments. This is reported in Table 3, where we have introduced the shorthand quantities \(\beta=1-\hat{\xi}_{0}^{2}\) and \(\chi=1-9\hat{\xi}_{0}^{2}\) in the stability conditions. It is worth noting that the sign of \(\hat{\xi}_{0}\) is crucial for determining the dynamical character of the critical point (Ie), leaving no room for the parameter space that satisfies the requirement \(\lambda_{1},\lambda_{2}<0\) to be an attractor for \(\hat{\xi}_{0}>0\). This means that even though the late-time acceleration can be driven by this (unstable) point, the universe will depart from this stage due to bulk viscosity effects. At some point the trajectory will reach the true stable de-Sitter solution (Ic), where the bulk viscosity is also present, or the phantom-like solution (Ib) if the strong viscous regime is considered instead. So it is possible that the universe experiences two accelerated expansion stages, or a single one driven by the critical points (Ib) or (Ic). There is no doubt from here that the effects of bulk viscosity on the background cosmological dynamics are meaningful: on one side they spoil the stability of (Ie), but on the other side they shape suitable conditions for ensuring stable solutions. This is indeed the sharpest distinction between this model and the \(\Lambda\)CDM model at the background level.
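As a cross-check of these stability statements, the Jacobian of the reduced two-dimensional system can be evaluated numerically at a given critical point. The following sketch (Python, with illustrative parameter values rather than observationally fitted ones) does this for the de Sitter-like point (Ic); the resulting eigenvalues can be compared against the closed-form expressions quoted in Table 3.

```python
import numpy as np

NU, XI0, S = 1e-3, 1e-2, 0.5          # illustrative parameter values, not fits

def reduced_rhs(om_m: float, om_v: float) -> np.ndarray:
    """2D reduction of Eq. (12): Omega_r eliminated via the constraint Eq. (11)."""
    om_r = 1.0 - om_m - om_v
    visc = XI0 * om_m**S
    d_m = (-3*(-1 + NU)*visc - 3*XI0*om_m**(1 + S)
           + om_m * (3*NU + om_r - 3*om_v))
    d_v = (-3*NU*om_m + 3*visc*(NU - om_v)
           - 3*(-1 + om_v)*om_v + om_r*(-4*NU + om_v))
    return np.array([d_m, d_v])

def jacobian(point: np.ndarray, eps: float = 1e-7) -> np.ndarray:
    """Central-difference Jacobian of the reduced system at a given point."""
    J = np.zeros((2, 2))
    for j in range(2):
        step = np.zeros(2)
        step[j] = eps
        J[:, j] = (reduced_rhs(*(point + step)) - reduced_rhs(*(point - step))) / (2*eps)
    return J

# de Sitter-like point (Ic): (Omega_m, Omega_vac) = (xi0_hat^2, 1 - xi0_hat^2)
p_ic = np.array([XI0**2, 1.0 - XI0**2])
print("residual at (Ic):", reduced_rhs(*p_ic))          # should vanish
print("eigenvalues at (Ic):", np.linalg.eigvals(jacobian(p_ic)))
```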
Some numerical trajectories are also displayed in the two-dimensional phase space \((\Omega_{m},\Omega_{\rm vac})\) in Fig. 1 for different initial conditions, as explained in the caption, all of them leading to the attractor point \((0,1)\) (Ic), i.e. to a universe experiencing an accelerated expansion after passing close to a saddle point describing dark matter domination (see left panel). This corresponds to the case \(\hat{\xi}_{0}=10^{-4}\), while the right panel shows the strong viscous case \(\hat{\xi}_{0}\sim\mathcal{O}(1)\), which illustrates the fact that \(\hat{\xi}_{0}>1\), keeping \(\nu\ll 1\), changes the dynamics of the critical point (Ib) from a saddle to an attractor point. Whether such large values correspond to a realistic viscosity scenario of dark matter, without invoking the unified dark sector description, is a question that must be independently assessed through cosmological parameter estimation when calculating the best-fit parameters from observational data. This is also a subject that must be treated in the future.
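Trajectories of this kind can be reproduced with a few lines of code. Below is a minimal sketch (Python with SciPy) that integrates the model-1 system of Eq. (12) in \(\lambda=\ln a\); the parameter values and initial data are illustrative placeholders, not best-fit quantities, and the Friedmann constraint of Eq. (11) is monitored as a consistency check.

```python
import numpy as np
from scipy.integrate import solve_ivp

NU, XI0, S = 1e-3, 1e-4, 0.5        # illustrative values of (nu, xi0_hat, s)

def rhs(lam, y):
    """Right-hand side of the first-class autonomous system, Eq. (12), lam = ln a."""
    om_r, om_m, om_v = y
    visc = XI0 * om_m**S
    d_r = om_r * (-1 + 4*NU - 3*visc + om_r - 3*om_v)
    d_m = (-3*(-1 + NU)*visc - 3*XI0*om_m**(1 + S)
           + om_m * (3*NU + om_r - 3*om_v))
    d_v = (-3*NU*om_m + 3*visc*(NU - om_v)
           - 3*(-1 + om_v)*om_v + om_r*(-4*NU + om_v))
    return [d_r, d_m, d_v]

# Present-day-like initial data satisfying the Friedmann constraint, Eq. (11).
y0 = [8.5e-5, 0.3, 1.0 - 0.3 - 8.5e-5]
past = solve_ivp(rhs, [0.0, -14.0], y0, rtol=1e-10, atol=1e-13, dense_output=True)
future = solve_ivp(rhs, [0.0, 5.0], y0, rtol=1e-10, atol=1e-13, dense_output=True)

om_r, om_m, om_v = future.y
w_eff = -(1.0/3.0) * (-3 + 3*XI0*om_m**S - om_r + 3*om_v) - 1.0   # Eqs. (14)-(15)
print("early-time Omega_r:", past.y[0][-1])
print("max constraint violation:", np.max(np.abs(om_r + om_m + om_v - 1.0)))
print("late-time (Omega_m, Omega_vac, w_eff):", om_m[-1], om_v[-1], w_eff[-1])
```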
Other values of \(s\) for this first class of models, and their main features, are briefly discussed here. For instance, the case \(s=0\), which leads to the well-known parameterization \(\xi\sim H\), cannot provide a radiation-domination period: there do not exist any physical conditions such that \(\Omega_{r}\neq 0\) along the entire phase space trajectories. Hence, this case must be discarded as a suitable cosmological solution. Before going forward let us, however, describe two critical points that are also present for other values of \(s\). The first point corresponds to \(\Omega_{m}=1-\nu\) and \(\Omega_{\rm vac}=\nu\) with effective equation of state \(w_{\rm eff}=-\nu-\hat{\xi}_{0}\), which describes matter domination provided that \(\nu\ll 1\), and which can generate an accelerated expansion if \(\nu\leq 1/3\) and \(\hat{\xi}_{0}>\frac{1}{3}(1-3\nu)\) (strong viscous regime). This critical point is analogous to the critical point (Ib) but with a different equation of state. The second critical point we find is nothing more than a duplicate of the point (Ic).
The case \(s=1\) is also phenomenologically interesting because the viscous fluid features are present in one of the critical points, with the magnitude of \(\hat{\xi}_{0}\) (and the sign of \(\nu\)) unequivocally determining the cosmological behavior of this point. That is to say, \(w_{\rm eff}=\nu(-1+\hat{\xi}_{0})-\hat{\xi}_{0}\). So accelerated expansion is possible provided that \(\hat{\xi}_{0}>1\) and \(\nu<\frac{-1+3\hat{\xi}_{0}}{-3+3\hat{\xi}_{0}}\), thus yielding a phantom-like behavior, or simply \(\hat{\xi}_{0}=1\), leading to a de Sitter-like solution. The first condition clearly implies that \(\nu>0\) and \(\hat{\xi}_{0}>0\). For such values of \(\hat{\xi}_{0}\) the model is in the strong viscous regime, which allows the dark matter component to drive the current expansion of the universe through the bulk viscosity effect. Notice that the parameter \(\nu\) allows the vacuum energy component to exist during this period despite the fact that it does not play any role in the acceleration: \(\Omega_{\rm vac}=\nu\) (\(\Omega_{m}=1-\nu\)). This point is hence an attractor when \(\nu<1\) and \(\hat{\xi}_{0}>1\), and a saddle for the same range of \(\nu\) and \(0<\hat{\xi}_{0}<1\). The latter condition clearly breaks the strong viscous regime necessary to realize accelerated expansion, whereby this point corresponds, for such parameter values, to standard dark matter domination in this weak regime. We can conclude from the phase space analysis that this point is quite appealing for the cosmological dynamics of the late universe, and, as demanded, it must be put under scrutiny with the help of observational data to ensure its cosmological viability.
Another commonly unexplored choice is \(s=-1\), but it is ruled out, as in the \(s=0\) case, because the critical point that describes the radiation era is not real valued. Larger positive values of \(s\) are also cosmologically viable, with one of their critical points characterized by a common effective equation of state written in the general way as \(w_{\rm eff}=-\nu\pm(1-\nu)^{s}\hat{\xi}_{0}\), where the branch \(-\) corresponds to even integers only: \((-1)^{s+1}\); and the branch \(+\) to all the others, including half-integers. Further exploration of suitable power-law values is beyond the scope of this paper since there is not (as far as we know)
\begin{table}
\begin{tabular}{c c c c c c} Point & \(\Omega_{r}\) & \(\Omega_{m}\) & \(\Omega_{\rm vac}\) & \(w_{\rm eff}\) & Existence & Acceleration \\ \hline (Ia) & \(1-\nu\) & \(0\) & \(\nu\) & \(\frac{1}{3}(1-4\nu)\) & \(\forall\nu,\hat{\xi}_{0}\) & No \\ (Ib) & \(0\) & \(1-\nu\) & \(\nu\) & \(-\nu\pm\sqrt{1-\nu}\hat{\xi}_{0}\) & \(\nu<1,\sqrt{\hat{\xi}_{0}}\) & Yes (see main text) \\ (Ic) & \(0\) & \(\hat{\xi}_{0}^{2}\) & \(1-\hat{\xi}_{0}^{2}\) & \(-1\) & \(\forall\nu,\hat{\xi}_{0}\) & Yes \\ (Id) & \(1-\nu-9\hat{\xi}_{0}^{2}\) & \(9\hat{\xi}_{0}^{2}\) & \(\nu\) & \(\frac{1}{3}(1-4\nu)\) & \(\forall\nu,\hat{\xi}_{0}\) & No \\ (Ie) & \(0\) & \(0\) & \(1\) & \(-1\) & \(\forall\nu,\hat{\xi}_{0}\) & Yes \\ \end{tabular}
\end{table}
Table 2: Critical points of the autonomous system described by Eq. (12) for the bulk viscosity model \(s=1/2\) along with the conditions of existence and accelerated expansion. The effective equation of state parameter has also been included.
\begin{table}
\begin{tabular}{c c c} Point & \(\lambda_{1}\) & \(\lambda_{2}\) & Stability \\ \hline (Ia) & \(4\) & \(\infty\) & Repeller \(\forall\nu,\xi_{0}>0\) \\ (Ib) & \(-1+\nu\mp 3\sqrt{1-\nu}\hat{\xi}_{0}\) & \(-3(-1+\nu\pm\sqrt{1-\nu}\hat{\xi}_{0})\) & \((-):\) saddle if \(\nu<\beta\); attractor if \(\beta<\nu<1\vee\hat{\xi}_{0}>0\) \\ & & & \((+):\) saddle if \(\nu<\chi_{1}\);repeller if \(\beta(\nu)<\nu<1\vee\hat{\xi}_{0}>0\) \\ (Ic) & \(4(-1+\nu)\) & \(\frac{3}{2}(-1+\nu+\hat{\xi}_{0}^{2})\) & Repeller if \(\nu>1\); saddle if \(\beta<\nu<1\); attractor if \(\nu<\beta\vee\hat{\xi}_{0}>0\) \\ (Id) & \(4-4\nu\) & \(-\frac{1}{2}(-1+\nu+9\hat{\xi}_{0}^{2})\) & Repeller if \(\nu<\chi\); saddle if \(\chi<\nu<1\); attractor if \(\nu>1\vee\hat{\xi}_{0}>0\) \\ (Ie) & \(-4\) & \(\infty\) & Saddle \(\forall\nu,\hat{\xi}_{0}>0\) \\ \end{tabular}
\end{table}
Table 3: Eigenvalues and stability conditions that set the dynamical character of the associated critical points for model 1.
a guidance criterion from physical grounds, such as thermodynamic principles, apart from dynamical system analysis, to select particular viscous models. The reader can find a more detailed discussion of the general pattern of the critical points due to this new parameterization of the bulk viscosity in reference [44].
It is worthwhile mentioning that the critical points (Ia) and (Ie) of Table 1 do not correspond to perturbative fixed points, and therefore the linear stability analysis performed by computing their eigenvalues and associated properties cannot be trusted. This technically involved issue is usually not mentioned, as the validity of the standard stability analysis beyond the linear contributions relies on Malkin's nonlinear stability theorem [49]. Nevertheless, both points (Ia) and (Ie) lead to stationary points of the dynamical system describing the model and therefore they can be included as critical points, provided their stability properties are checked by numerical analysis.
### Model 2: \(\tilde{\nu}=\nu/2\) and \(s=1\)
This sub-class of models with \(\tilde{\nu}=\nu/2\) is characterized by \(s=1\) in Eq. (13). The set of 4 critical points associated with this system is reported in Table 4. There exists one trajectory in phase space describing the background cosmological dynamics that can follow the standard radiation dominated (IIa) and accelerated expansion (IIb) stages, similar to the \(\Lambda\)CDM model. The intermediate period is described, however, by non-standard dark matter (IIc), which involves both the effects of the running and of the viscous dark matter fluid. This happens particularly for values \(\nu,\hat{\xi}_{0}\ll 1\). Turning off the piece of the running vacuum energy density associated to \(\nu\), the viscosity effect remains hidden in the energy densities but encoded in the effective equation of state (\(w_{\rm eff}=-\hat{\xi}_{0}\)) and in the stability conditions, as can be inferred from Tables 4 and 5, respectively. One may in principle argue that if \(\hat{\xi}_{0}\) is small enough, matter domination, as we expect, may be realized. Turning on the \(\nu\)-parameter does not provide a successful exit to this problem either. This point will be investigated numerically using a high-precision solver during the numerical evolution.
In the most general case this point can describe accelerated expansion (\(w_{\rm eff}<-1/3\)) provided that the conditions \(0<\hat{\xi}_{0}\leq 1/3\) and \(\frac{-2+6\hat{\xi}_{0}}{-3+3\hat{\xi}_{0}}<\nu<1\) are fulfilled. Here we have taken in advance the constraint \(0<\hat{\xi}_{0}<1\). In the strong viscous regime, this point can also generate accelerated expansion, similar to the point (Ib) of model 1, either in the absence of running (\(\nu=0\)) or in the most general case \(\nu\neq 0\) and \(\hat{\xi}_{0}\geq 1/3\). So in the strong viscous regime, the dynamical character of this point is once more changed from saddle to attractor. In both cases the bulk viscosity determines the dynamical character of the expansion. In the case \(\nu=0\), for instance, one finds simply \(w_{\rm eff}=-\hat{\xi}_{0}\). So depending on the bulk viscosity strength, with \(\hat{\xi}_{0}>0\), the acceleration can reveal different behaviors, including the well-known phantom-like and de Sitter (\(\hat{\xi}_{0}=1\)) solutions. This is also true in the general case as
long as the bulk viscosity is the dominant effect. To see this, let us take the suggestive value \(\hat{\xi}_{0}=1\). This yields \(\Omega_{m}=1-\nu\), \(\Omega_{\rm vac}=\nu\) and \(w_{\rm eff}=-1\).
By construction one expects, however, \(\nu\ll 1\) (\(\Omega_{m}\to 1\)), such that the strong bulk viscosity regime is once again capable of driving the accelerated expansion of the universe instead of conventional dark energy mechanisms. These results are nothing more than bulk viscous unified scenarios of dark matter and dark energy. So, neglecting the running effect completely, viscous dark matter can independently describe the complete cosmological dynamics through the underlying physical mechanism behind bulk viscosity. Notice that, whatever the dynamical character of this critical point, its associated energy density parameters must be positive definite, which leads to the weak constraint \(0\leq\nu\leq 1\) and \(\hat{\xi}_{0}>0\).
On the other hand, we report the last critical point (IId), which surprisingly can correspond to standard radiation domination for \(\nu=\frac{1+3\hat{\xi}_{0}}{3\hat{\xi}_{0}}\). This point can also generate accelerated expansion by combining both the running and bulk viscosity effects. For instance, the de Sitter solution is realizable here by taking \(\hat{\xi}_{0}=1\) necessarily. The condition for accelerated expansion is achieved even in the weak viscosity regime \(\hat{\xi}_{0}\ll 1\) as long as the general condition \(\hat{\xi}_{0}>\frac{3\nu}{-4+6\nu}\) is fulfilled. The sign of \(\nu\) is decisive in setting the dynamical character of the expansion. For instance, for a given positive \(\nu\) and derived \(\hat{\xi}_{0}\), the expansion is phantom-like, while negative \(\nu\) leads to \(w_{\rm eff}>-1\). On the other hand, the condition of positive energy densities puts the very tight constraint \(\frac{1+3\nu}{-3+6\nu}<\hat{\xi}_{0}<1\) with \(\nu>4/3\), which is compatible with the less restrictive condition for accelerated expansion but far beyond the value expected on physical grounds. So this solution cannot successfully describe the current accelerated expansion and is therefore not of physical interest in this form. Notice finally that it is not possible to neglect the running vacuum energy density or the bulk viscosity here, due to the conditions of existence for this critical point, which prevent both \(\nu\) and \(\hat{\xi}_{0}\) from vanishing.
In the case of vanishing bulk viscosity, \(\hat{\xi}_{0}=0\), only the first three critical points appear, where (IIc) reduces to \(\Omega_{m}=\frac{4(-1+\nu)}{-4+3\nu}\) and \(\Omega_{\rm vac}=\frac{\nu}{4-3\nu}\). Demanding positive energy density parameters yields the constraint \(0<\nu<1\), and the existence of the critical point itself imposes \(\nu\neq\frac{4}{3}\). Notice that negative values are not allowed by this simple requirement, which is consistent with the observational limit inferred from cosmological data. We remind the reader that this point accounts for matter domination and exhibits a small deviation from the \(\Lambda\)CDM model due to the presence of the running vacuum energy density. This is indeed the only difference at the background level that can be appreciated from the phase space analysis. The early presence of the vacuum energy density in this period, as in the most general form of this class of models (\(\hat{\xi}_{0}\neq 0\)), is interesting in light of the coincidence problem and, presumably, for the mechanism behind the formation of large scale structure in the universe. This latter aspect must be examined carefully to find more compelling distinctions beyond the cosmological background.
As to the stability conditions for this model, compatible with \(\hat{\xi}_{0}>0\) (see Table 5), they are plainly achieved. The critical point (IIa) may in principle be a repeller or a saddle in the general situation. Nevertheless, imposing the criteria \(\hat{\xi}_{0}>0\) and \(\nu\) small, this critical point must necessarily be a repeller. The resulting de Sitter solution for this model (IIb) is stable for the large range \(-1<\nu<1\) and \(0<\hat{\xi}_{0}<1\). Physical expectations, however, tell us that \(\nu\ll 1\). The critical point (IIc), in the form describing matter domination (\(\nu,\hat{\xi}_{0}\ll 1\)), is a saddle point because its associated eigenvalues always have opposite signs by the requirement \(\hat{\xi}_{0}>0\), within the same allowed range of the parameter space as critical point (IIb). This same critical point can also be an attractor, as discussed, and the real parts of its associated eigenvalues are both negative for the range \(-1<\nu<0\) and \(\hat{\xi}_{0}>0\). The sign of \(\nu\) is crucial for ensuring the stability. Lastly, the critical point (IId) is sensitive to the sign of both \(\nu\) and \(\hat{\xi}_{0}\), whereby we have chosen the appropriate sign (\(\nu>0\)), by numerical examination, so that the point is an attractor. Notice that we have used the abbreviated quantity
\[\chi\equiv\nu(4+3\nu(-1+\hat{\xi}_{0}))(-1+\hat{\xi}_{0})\hat{ \xi}_{0}(4+36\hat{\xi}_{0}+3(-4\nu+(-2+\nu)\nu(16-6\nu+ \tag{16}\] \[9\nu^{2})\hat{\xi}_{0}+6(6+\nu(-14+3\nu(6+(-4+\nu)\nu)))\hat{ \xi}_{0}^{2}+9(2+(-2+\nu)\nu)^{2}\hat{\xi}_{0}^{3})),\]
in the eigenvalues. For \(\nu>0\) the effective equation of state has a phantom-like behavior according to the requirement \(\nu,\hat{\xi}_{0}\ll 1\).
For most of the critical points the requirement \(\hat{\xi}_{0}>0\) (with reasonably small values in order to remain relevant) selects the specific region \(\nu>0\) of the parameter space. Though this region of the parameter space is consistent with the demand for positive energy densities (\(\nu>4/3\)), we remind the reader that this solution is not physically attractive due to the inferred large value of \(\nu\). We conclude that the phase space analysis, along with the condition of positive energy densities, does not allow accelerated solutions apart from the de Sitter solution (IIb) and the one driven by bulk viscosity (IIc), where the running vacuum energy density is the main agent responsible for the expansion. In the simplest version of this class of running vacuum mod
els, that is \(\hat{\xi}_{0}=0\), stability of the resulting solutions is plainly achieved for the range \(\nu<1\), which is consistent with the one demanded for having positive energy densities. The phase space analysis therefore leaves a few suitable critical points to describe the cosmological background dynamics. These critical points are slightly different from the ones found in model 1, either in the limit \(\nu,\hat{\xi}_{0}\ll 1\) or in the strong viscous regime \(\hat{\xi}_{0}>1\). For this reason their respective phase spaces are practically indistinguishable from each other, so numerical plots are not shown here. Notice, however, that the parameter spaces are distinct; in particular, model 1 does allow \(\nu<0\).
### Model 3: \(\tilde{\nu}=\nu/2\) and \(s=1/2\)
This model corresponds to \(s=1/2\) for the bulk viscosity exponent in the system of Eq. (13). Some solutions common to the ones already discussed are found, like solutions (IIa) and (IIb) of model 2 and (Ic) of model 1, as well as the sub-manifolds belonging to the branches \(\nu=0\) and \(\hat{\xi}_{0}=0\), so we report the two different solutions in Table 6, one for each given sign, and discuss their main physical properties as follows. The most interesting feature of taking the Ansatz of Eq. (5) is that it allows the existence of a (viscosity-running) two-parameter family of solutions (IIIa) whose respective EoS coincides with that of point (IIc) in the \(\hat{\xi}_{0}\to 0\) limit, despite being deduced for different bulk viscosity exponents.
Specifically, the solution (IIIa) represents a general form of the matter domination solution in the sense that it covers the limiting cases: \(w_{\rm eff}=\pm\hat{\xi}_{0}\) when \(\nu\to 0\) and \(w_{\rm eff}=\frac{\nu}{4-3\nu}\) when \(\hat{\xi}_{0}\to 0\). This point can also describe accelerated expansion, with an effective equation of state that takes a more involved form (see Table 6). The necessary condition for accelerated expansion can be very well approximated by \(\hat{\xi}_{0}\gtrsim 1/3\) and \(0<\nu<1\) for the negative branch. For the positive one, however, there is no suitable range of the parameter space, fulfilling in particular \(\nu\ll 1\), that provides cosmic acceleration. We have defined the parameter \(\eta=64-112\nu+\nu^{2}(48+9\hat{\xi}_{0}^{2})\) everywhere for the sake of compactness. Other general solutions are ruled out by demanding positive energy density parameters or because they do not respect the physical condition \(\nu\ll 1\).
It is interesting to note that the solution (Ic) of model 1 is also a solution of the present model. The eigenvalues, however, naturally change form due to the structure of the system, but there is an ample region of the parameter space, \(\nu<1\) and \(-\sqrt{1-\nu}<\hat{\xi}_{0}<\sqrt{1-\nu}\), for which accelerated expansion can still be driven by the bulk viscosity effect in this general scenario. Out of this region, the point corresponds to standard dark matter domination. On the other hand, the de Sitter solution (IIb) is plainly preserved for \(\nu\neq 0\) within this case.
The eigenvalues for the new solutions are too lengthy to be reported, so we check this aspect numerically to establish the dynamical character of those solutions. For the solution (IIIa) with the negative branch, the resulting parameter space reads approximately \(\hat{\xi}_{0}>1\) (strong viscous regime) and \(|\nu|<1\). This is strictly valid for \(\nu>\mathcal{O}(\pm 10^{-2})\). The positive branch corresponds to a repeller for the physical parameter space \(0<\hat{\xi}_{0}<1\) and \(0<\nu<1\).
### Model 4: \(\tilde{\nu}=\nu/2\) and \(s=0\)
This is also a sub-class of models belonging to \(\tilde{\nu}=\nu/2\) but with \(s=0\) in Eq. (13), which leads to the functional dependence \(\hat{\xi}\sim H\) for the bulk viscosity coefficient. We found in a previous work, without including the running effects, that this exponent is discarded because it cannot consistently describe the whole cosmological evolu
\begin{table}
\begin{tabular}{c c c c} \hline \hline Point & \(\lambda_{1}\) & \(\lambda_{2}\) & Stability \\ \hline (IIa) & \(4\) & \(\frac{4-12(-1+\nu\hat{\xi}_{0})}{4+3\nu+1\nu\hat{\xi}_{0}}\) & Repeller if \(0\leq\nu\leq 1\wedge\ \hat{\xi}_{0}>0\) \\ (IIb) & \(-\frac{12(-1+\nu(-1+\hat{\xi}_{0})}{4+3\nu+1\hat{\xi}_{0})}\) & \(-4\) & Attractor if \(-1<\nu<1\wedge 0<\hat{\xi}_{0}<1\) \\ (IIc) & \(\frac{12(+1+\nu(-1+\hat{\xi}_{0})}{4+3\nu+1\hat{\xi}_{0})}\) & \(-\frac{4(1+3\nu(-1+\nu\hat{\xi}_{0})}{4+3\nu+1\hat{\xi}_{0})}\) & Saddle : \(0\leq\nu<1\wedge 0<\hat{\xi}_{0}<1\); attractor : \(-1<\nu<0\wedge\hat{\xi}_{0}>0\) \\ (IId) & \(\frac{-6\nu^{2}(4+3\nu(-1+\hat{\xi}_{0})(-1+\hat{\xi}_{0})(6-2\nu)}{\nu(4+3 \nu(-1+\hat{\xi}_{0})(-1+3\nu-3\hat{\xi}_{0})}\) & \(\frac{2(-3\nu^{2}(4+3\nu(-1+\hat{\xi}_{0}))(-1+3\nu-3\hat{\xi}_{0})}{\nu(4+3 \nu(-1+\hat{\xi}_{0})(-1+3\nu-3\hat{\xi}_{0})}\) & Attractor if \(0<\nu<1/3\wedge 0<\hat{\xi}_{0}<1\) \\ \hline \hline \end{tabular}
\end{table}
Table 5: Eigenvalues and stability conditions for determining the dynamical character of the associated critical points for model 2.
\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline Point & \(\Omega_{r}\) & \(\Omega_{m}\) & \(\Omega_{mw}\) & \(w_{\rm eff}\) & Existence & Acceleration \\ \hline (IIa) & \(1\) & \(0\) & \(0\) & \(\frac{1}{3}\) & \(\nu\nu,\hat{\xi}_{0}\) & No \\ (IIb) & \(0\) & \(0\) & \(1\) & \(-1\) & \(\nu\nu,\hat{\xi}_{0}\) & Yes \\ (IIc) & \(0\) & \(\frac{4(-1-\nu)}{4+3\nu+1\hat{\xi}_{0}}\) & \(\frac{\nu(-1+\hat{\xi}_{0})}{4+3\nu+1\hat{\xi}_{0}}\) & \(\frac{\nu(-1+\hat{\xi}_{0})}{4+3\nu+1\hat{\xi}_{0}}\) & \(\forall\nu,\hat{\xi}_{0}\neq 1-\frac{4}{3\nu}\) & Yes \\ (IId) & \(\frac{(-1+\hat{\xi}_{0})(1+3(-1+\nu\hat{\xi}_{0})^{2}\hat{\xi}_{0})}{\nu(4+3 \nu+1\hat{\xi}_{0})}\) & \(\frac{-4-126+12\nu\hat{\xi}_{0}}{3\nu(4-1+3\nu-3\hat{\xi}_{0})}\) & \(\frac{(-1-3\nu(-1+\hat{\xi}_{0}))}{34\nu(4-1+3\nu-3\hat{\xi}_{0})}\) & \(-1-\frac{4(-1+\hat{\xi}_{0})}{-1+3\nu-3\hat{\xi}_{0}}\) & \(\nu\neq 0\neq\hat{\xi}_{0},\hat{\xi}_{0}\neq(-\frac{1}{3}+\nu)\) & Yes \\ \hline \hline \end{tabular}
\end{table}
Table 4: Critical points of the autonomous system described by Eq. (13) for the bulk viscosity model \(s=1\) along with the conditions of existence. The effective equation of state parameter has also been included.
tion of the universe: the radiation dominated period is absent as a critical point. Can the running vacuum energy density effects restore the desirable features offered, for instance, by the \(\Lambda\)CDM model in this regard? Unfortunately, the answer is no. So, this model is not of cosmological interest. For completeness we report, however, the set of new critical points associated with this system in Table 7. The first critical point listed (IVa) is similar to the point (Ic) of model 1 (with \(s=1/2\)), also providing de Sitter-like accelerated expansion with non-vanishing bulk viscosity. The point (IVb) corresponds to matter domination with non-vanishing dark energy density thanks to both \(\nu\) and \(\hat{\xi}\). This is a kind of scaling solution. When turning off the running effects, it was shown that negative integer values of the exponent \(s\) cannot provide, by any means, radiation domination. Nevertheless, for the present model with a (general) negative real \(s\)-value, it leads to highly non-linear differential equations that are difficult to solve by the methods employed in this work. Though we expect that such harmful features are propagated by the fact of taking \(\hat{\xi}\sim H\), we certainly cannot discard this possibility.
## IV Numerical solutions
In this section, we present the results obtained from the numerical integration of two sub-classes of running models studied in this paper, for specific choices of the exponent \(s\) of the bulk viscosity coefficient (see Eq. (5)). Specifically, model 1 corresponds to the first class of models, for which \(\tilde{\nu}=0\) and \(s=1/2\) are replaced into Eq. (12), while models 2 and 3 correspond to the second class of models, where \(\tilde{\nu}=\nu/2\) and \(s=1\) and \(1/2\), respectively, are inserted into Eq. (13). Model 4, obtained from the second class of models with \(s=0\), is discarded due to the physical argument explained in subsection III.4.
For the numerical integration we have implemented an algorithm in the programming language _Python_, using the _solve_ivp_ routine provided by the _SciPy_ open-source _Python_-based ecosystem. The integration method chosen was _RK45_, an explicit _Runge-Kutta_ method of order \(5(4)\), with relative and absolute tolerances of \(10^{-6}\) and \(10^{-9}\), respectively. The systems of differential equations were integrated with respect to \(N=\ln a\) (which is related to the redshift through the expression \(1+z=a_{0}/a\)), in the integration range \(-15\leq N\leq 5\), partitioned uniformly into \(10\,000\) data points.
It is worthwhile mentioning that we also integrate the differential equation for the Hubble parameter given by Eq. (15); moreover, we only integrate two of the dynamical equations, for \(\Omega_{r}\) and \(\Omega_{m}\), as \(\Omega_{\rm vac}\) can straightforwardly be obtained through the Friedmann constraint of Eq. (11). In this sense, the initial conditions for both classes of models were chosen in order to match the \(\Lambda\)CDM model at the current time (\(a_{0}=1\) or \(z=0\)) according to the Planck 2018 results [6], i.e., \(H_{0}=100\frac{km/s}{Mpc}h\) where \(h=0.674\), \(\Omega_{m,0}=0.315\), and \(\Omega_{r,0}=2.469\times 10^{-5}h^{-2}(1+0.2271N_{\rm eff})\) with \(N_{\rm eff}=2.99\). Moreover, all the numerical solutions were compared with their \(\Lambda\)CDM counterparts, considering that the expressions for the \(\Lambda\)CDM Hubble parameter and effective equation of state parameter are given by
\[H(z) = H_{0}\sqrt{\Omega_{r,0}(1+z)^{4}+\Omega_{m,0}(1+z)^{3}+\Omega_{ \Lambda,0}}, \tag{17}\] \[\omega_{\rm eff} = \frac{4}{3}\Omega_{r}+\Omega_{m}-1, \tag{18}\]
where \(\Omega_{\Lambda,0}=1-\Omega_{r,0}-\Omega_{m,0}\), \(\Omega_{r}=\Omega_{r,0}(1+z)^{4}/E(z)^{2}\), and \(\Omega_{m}=\Omega_{m,0}(1+z)^{3}/E(z)^{2}\), with \(E(z)=H(z)/H_{0}\).
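To make the numerical set-up above concrete, the following is a minimal sketch in _Python_ (the tooling stated in the text). The helper names `integrate_model` and `lcdm_background` are ours, and the right-hand side `dY_dN` only implements the \(\Lambda\)CDM limit (\(\hat{\xi}_{0}=\nu=0\)) as a stand-in, since Eqs. (12) and (13) are not reproduced in this excerpt; the solver settings, integration range and Planck 2018 values follow the text, while integrating backwards and forwards from \(N=0\) (where the initial conditions are imposed) is our assumption about how the \(z=0\) matching is propagated.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Planck 2018 reference values quoted in the text
h = 0.674
H0 = 100.0 * h                                     # km/s/Mpc
Omega_m0 = 0.315
N_eff = 2.99
Omega_r0 = 2.469e-5 * h**-2 * (1.0 + 0.2271 * N_eff)
Omega_L0 = 1.0 - Omega_r0 - Omega_m0

def dY_dN(N, Y, xi0, nu):
    # Stand-in right-hand side: the LambdaCDM limit (xi0 = nu = 0).
    # Replace with the full system of Eq. (12) or (13); xi0 and nu are
    # kept in the signature for that purpose.
    Omega_r, Omega_m, H = Y
    Omega_vac = 1.0 - Omega_r - Omega_m            # Friedmann constraint, Eq. (11)
    w_eff = Omega_r / 3.0 - Omega_vac              # effective EoS in this limit
    dOmega_r = 3.0 * Omega_r * (w_eff - 1.0 / 3.0)
    dOmega_m = 3.0 * Omega_m * w_eff
    dH = -1.5 * (1.0 + w_eff) * H
    return [dOmega_r, dOmega_m, dH]

def integrate_model(xi0, nu, n_points=10_000):
    # Initial conditions imposed at N = 0 (z = 0); integrate backwards to
    # N = -15 and forwards to N = +5 with the RK45 settings of the text.
    y0 = [Omega_r0, Omega_m0, H0]
    opts = dict(method="RK45", rtol=1e-6, atol=1e-9, args=(xi0, nu))
    back = solve_ivp(dY_dN, (0.0, -15.0), y0,
                     t_eval=np.linspace(0.0, -15.0, n_points // 2), **opts)
    forw = solve_ivp(dY_dN, (0.0, 5.0), y0,
                     t_eval=np.linspace(0.0, 5.0, n_points // 2), **opts)
    return back, forw

def lcdm_background(z):
    # LambdaCDM baseline of Eqs. (17)-(18) used for comparison.
    E2 = Omega_r0 * (1 + z)**4 + Omega_m0 * (1 + z)**3 + Omega_L0
    Omega_r = Omega_r0 * (1 + z)**4 / E2
    Omega_m = Omega_m0 * (1 + z)**3 / E2
    w_eff = (4.0 / 3.0) * Omega_r + Omega_m - 1.0
    return H0 * np.sqrt(E2), Omega_r, Omega_m, w_eff
```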
All the figures shown in this section were obtained for two different types of combinations of the free parameters \(\hat{\xi}_{0}\) and \(\nu\). The first one considers combinations of \(\hat{\xi}_{0}\) with only positive values of \(\nu\), namely, \(\hat{\xi}_{0}=1\times 10^{-4}\) and \(\nu=5\times 10^{-4}\), \(\hat{\xi}_{0}=9\times 10^{-3}\) and \(\nu=5\times 10^{-4}\), \(\hat{\xi}_{0}=9\times 10^{-3}\) and \(\nu=1\times 10^{-2}\), and \(\hat{\xi}_{0}=9\times 10^{-3}\) and \(\nu=5\times 10^{-2}\). The second one considers combinations of \(\hat{\xi}_{0}\) with only negative values of \(\nu\), namely, \(\hat{\xi}_{0}=1\times 10^{-4}\) and \(\nu=-5\times 10^{-4}\), \(\hat{\xi}_{0}=9\times 10^{-3}\) and \(\nu=-5\times 10^{-4}\), \(\hat{\xi}_{0}=9\times 10^{-3}\) and \(\nu=-5\times 10^{-3}\), and \(\hat{\xi}_{0}=9\times 10^{-3}\) and \(\nu=-1\times 10^{-2}\). The figures are presented within the range \(0.1\leq z+1\leq 3.27\times 10^{6}\), except for model 3, for which the range \(0.1\leq z+1\leq 10^{5}\) was used due to numerical difficulties.
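For bookkeeping, the eight \((\hat{\xi}_{0},\nu)\) combinations listed above can be collected as follows (hypothetical list names, reusing the `integrate_model` sketch from the previous snippet):

```python
# (xi0_hat, nu) pairs used for the figures
PARAMS_POSITIVE_NU = [(1e-4, 5e-4), (9e-3, 5e-4), (9e-3, 1e-2), (9e-3, 5e-2)]
PARAMS_NEGATIVE_NU = [(1e-4, -5e-4), (9e-3, -5e-4), (9e-3, -5e-3), (9e-3, -1e-2)]

# e.g. integrate every combination of the positive-nu set
# solutions = [integrate_model(xi0, nu) for xi0, nu in PARAMS_POSITIVE_NU]
```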
### Model 1
In figures 2, 3, 4, and 5, we present the numerical results for model 1, obtained by the integration of Eq. (12) with \(s=1/2\), and \(\tilde{\nu}=0\).
In figure 2, we depict the energy density parameters \(\Omega_{i,1}\) associated to each fluid component (where \(i\) stands for \(r\) for radiation, \(m\) for matter, and vac or \(\Lambda\) for vacuum) as a function of redshift \(z\), together with, for comparison, their corresponding counterparts \(\Omega_{i}\) for the \(\Lambda\)CDM model according to Eq. (17). In particular, figure 2(a) shows the numerical results of the energy density parameters obtained for the different values of \(\hat{\xi}_{0}\) and positive \(\nu\), while
\begin{table}
\begin{tabular}{c c c c c c} Point & \(\Omega_{r}\) & \(\Omega_{m}\) & \(\Omega_{\rm vac}\) & \(w_{\rm eff}\) & Existence & Acceleration \\ \hline (IIIa) & 0 & \(\frac{(3\nu\hat{\xi}_{0}+\nu^{1/2})^{2}}{(-8+6\nu)^{2}}\) & \(\frac{\nu(8-3\nu(2+\hat{\xi}_{0})\mp\hat{\xi}_{0}\eta^{1/2})}{2(4-3\nu)^{2}}\) & \(\frac{3\nu^{2}+\nu(-4+6\hat{\xi}_{0})\pm\hat{\xi}_{0}\eta^{1/2}}{(4-3\nu)^{2}}\) & \(\nu\neq\frac{4}{3},\nu\leq 1,\hat{\xi}_{0}>0\) & Yes \\ \end{tabular}
\end{table}
Table 6: Critical points of the autonomous system described by Eq. (13) for the bulk viscosity model \(s=1/2\) along with the conditions of existence. The effective equation of state parameter has also been included.
in figure 2(b) the numerical results obtained for the different values of \(\hat{\xi}_{0}\) and negative \(\nu\) are presented. From these figures, we can see how the bulk viscosity and the running vacuum affect the redshift value at which the intersection \(\Omega_{r,1}=\Omega_{m,1}\) happens (which we call \(z_{eq,1}\)), without any appreciable effect on the redshift value at which \(\Omega_{m,1}=\Omega_{\rm vac,1}\). Moreover, in the case of \(\nu>0\), it can be noted how an increment in the values of \(\hat{\xi}_{0}\) implies that \(z_{eq,1}<z_{eq}\), where \(z_{eq}\) is the redshift at which \(\Omega_{r}=\Omega_{m}\), while an increment in the values of \(\nu\) implies that \(z_{eq}<z_{eq,1}\). On the contrary, in the case of \(\nu<0\), an increment in the values of \(\hat{\xi}_{0}\) and/or \(|\nu|\) implies that \(z_{eq,1}<z_{eq}\). It follows that it is possible to choose a combination of \(\hat{\xi}_{0}\) and \(\nu>0\) such that \(z_{eq,1}=z_{eq}\).
In figure 3, we depict the variation of the density parameters associated to each fluid component with respect to the \(\Lambda\)CDM model as a function of redshift \(z\), according to the expression \(\Delta\Omega_{i,1}=\Omega_{i,1}-\Omega_{i}\). In particular, the numerical results of the variation of the density parameters obtained for the different values of \(\hat{\xi}_{0}\) and positive \(\nu\)-values are displayed in figure 3(a), while in figure 3(b) negative \(\nu\)-values are considered. From these figures we can see (in greater detail than in figure 2) how the bulk viscosity and the running vacuum affect the evolution of the density parameters \(\Omega_{r,1}\), \(\Omega_{m,1}\), and \(\Omega_{\rm vac,1}\). In particular, it can be noted how a positive \(\nu\) implies a larger value of \(\Omega_{\rm vac,1}\) in comparison to \(\Omega_{\Lambda}\), while a negative \(\nu\) implies a smaller value of \(\Omega_{\rm vac,1}\) in comparison to \(\Omega_{\Lambda}\), which holds at high redshift. This is an expected result, as can be seen from Eq. (1). On the other hand, the bulk viscosity and the running vacuum affect the evolution of \(\Omega_{r,1}\) despite the fact that the bulk viscosity is associated to the matter and the running to the DE, i.e., radiation “feels” these effects because all the fluids are constrained through Eq. (11). This analysis is in agreement with the critical points presented in Table 2, where we can see, for example, that one critical point for \(\Omega_{r}\) is \(1-\nu-9\hat{\xi}_{0}^{2}\) (in the plot the point is \(-\nu-9\hat{\xi}_{0}^{2}\)). Due to the small values of \(\hat{\xi}_{0}\), the effects of the bulk viscosity are visible between the current time and a high redshift, and are most notable for \(z+1\approx 10^{3}-10^{4}\). It is important to note that the vacuum density parameter seems non-negligible at very high redshift, with an apparently constant behavior for \(z+1>10\) in both cases.
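A minimal post-processing sketch of the quantity \(\Delta\Omega_{i,1}\) just defined, reusing the hypothetical `lcdm_background` helper introduced earlier; the conversion \(z+1=e^{-N}\) follows from \(N=\ln a\) and \(1+z=a_{0}/a\) with \(a_{0}=1\):

```python
import numpy as np

def delta_omegas(N, Omega_r_model, Omega_m_model):
    # Delta Omega_i = Omega_i(model) - Omega_i(LambdaCDM) on the same grid;
    # N is the e-fold array returned by the solver.
    z = np.exp(-N) - 1.0
    _, Omega_r_lcdm, Omega_m_lcdm, _ = lcdm_background(z)
    Omega_vac_model = 1.0 - Omega_r_model - Omega_m_model   # Eq. (11)
    Omega_vac_lcdm = 1.0 - Omega_r_lcdm - Omega_m_lcdm
    return (Omega_r_model - Omega_r_lcdm,
            Omega_m_model - Omega_m_lcdm,
            Omega_vac_model - Omega_vac_lcdm)
```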
In figure 4, we depict the effective barotropic index \(\omega_{\rm eff,1}\), according to Eq. (14), and its deviation from the effective barotropic index \(\omega_{\rm eff}\) of the \(\Lambda\)CDM model obtained from Eq. (18), which is defined by \(\Delta\omega_{\rm eff,1}=\omega_{\rm eff,1}-\omega_{\rm eff}\). In particular, in figure 4(a) the effective barotropic index and the difference \(\Delta\omega_{\rm eff,1}\) are shown as a function of redshift for different \(\hat{\xi}_{0}\) values and \(\nu>0\). For comparison, the corresponding quantity for the standard \(\Lambda\)CDM model is displayed. The same representation is shown in figure 4(b) for different \(\hat{\xi}_{0}\) values and \(\nu<0\). From these figures we can see how the bulk viscosity and the running vacuum affect the evolution of \(\omega_{\rm eff,1}\), which is remarkably different when \(|\nu|\) takes larger values. Nevertheless, this behavior is a consequence of the small size of \(\hat{\xi}_{0}\), since there are appreciable effects for larger values of this parameter. Focusing on the effects due to the sign of \(\nu\), we can see that for \(\nu>0\) the values of \(\omega_{\rm eff,1}\) are lower than \(\omega_{\rm eff}\) at high redshift, which is a consequence of a positive, non-negligible \(\Omega_{\rm vac,1}\), while for \(\nu<0\) the values of \(\omega_{\rm eff,1}\) are greater than \(\omega_{\rm eff}\) at high redshift, which is a consequence of a negative, non-negligible \(\Omega_{\rm vac,1}\). At low redshift this behaviour changes for the \(\nu<0\) case. It is important to mention that the possibility of \(\Omega_{\rm vac,1}<0\) comes from the definition of \(\rho_{\rm vac}\), as is discussed below.
In figure 5, we depict the vacuum energy density normalized with respect to its current value as a function of the redshift \(z\), as well as the normalized vacuum energy density for the \(\Lambda\)CDM model for further comparison. In figure 5(a) the normalized vacuum energy density is displayed for the different values of \(\hat{\xi}_{0}\) and positive \(\nu\), while negative \(\nu\) values are presented in figure 5(b). From these figures we can see how the bulk viscosity and the running vacuum affect the evolution of the vacuum energy density. It follows that an increment in \(\hat{\xi}_{0}\) does not appreciably affect the evolution of \(\rho_{\rm vac}\), contrary to what happens when we increment the values of \(|\nu|\). This behaviour is due to the fact that the bulk viscosity affects the evolution of \(\rho_{\rm vac}\) only indirectly through the Hubble parameter according to Eq. (1), and therefore a remarkable difference in the evolution of \(H\) would be necessary, which does not appear due to the small values of \(\hat{\xi}_{0}\). On the other hand, depending on the sign of \(\nu\), it is possible to obtain an always positive vacuum energy density when \(\nu>0\), or a vacuum energy density that experiences a transition from positive to negative values when \(\nu<0\). From Eq. (1), this transition occurs when
\[H=H_{0}\sqrt{\frac{\Omega_{\rm vac,0}+|\nu|}{|\nu|}}, \tag{19}\]
and therefore, considering that \(H=H(z)\), the redshift at which this change of sign occurs depends strongly on the values of \(\nu\). It is worthwhile pointing out that the
\begin{table}
\begin{tabular}{c c c c c c c} Point & \(\Omega_{r}\) & \(\Omega_{m}\) & \(\Omega_{\rm vac}\) & \(w_{\rm eff}\) & Existence & Acceleration \\ \hline (IVa) & 0 & \(\hat{\xi}\) & \(1-\hat{\xi}\) & \(-1\) & \(\forall\hat{\xi},\nu\) & Yes \\ (IVb) & 0 & \(\frac{-4+\nu(4+3\hat{\xi})}{-4+3\nu}\) & \(\frac{\nu+3\hat{\xi}}{4-3\nu}\) & \(\frac{\nu+4\hat{\xi}}{-4+3\nu}\) & \(\forall\hat{\xi}\ \wedge\ \nu\neq 4/3\) & No \\ \end{tabular}
\end{table}
Table 7: Critical points of the autonomous system described by Eq. (13) for the bulk viscosity model \(s=0\) along with the conditions of existence. The effective equation of state parameter has also been included.
contribution of the running vacuum could reach, at very high redshift, large values of the order of \(10^{19}\) with respect to its current value, which according to its measured value is \(\Lambda=4.24\pm 0.11\times 10^{-66}\) eV\({}^{2}\)[6]. Although the effective vacuum energy density would still be small, the Hubble parameter would reach a very large value. This leads to the question whether large redshift values such as \(z=10^{6}\) lie within the validity range of the Ansatz given by Eq. (1). Answering this question amounts to including further terms of order \(H^{4}\) in the expansion of the running vacuum energy, and therefore goes beyond the scope of the present work (for a discussion of different running vacuum energy expansions see ref. [29]).
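For completeness, the transition condition of Eq. (19) can be recovered with a short sketch, assuming for this first class of models the quadratic running form \(\rho_{\rm vac}(H)=\rho_{{\rm vac},0}+\frac{3\nu}{8\pi G}\left(H^{2}-H_{0}^{2}\right)\) (our reading of Eq. (1), which is not reproduced in this excerpt). Writing \(\rho_{{\rm vac},0}=\frac{3H_{0}^{2}}{8\pi G}\,\Omega_{{\rm vac},0}\) and setting \(\rho_{\rm vac}=0\) with \(\nu=-|\nu|\) gives
\[
\Omega_{{\rm vac},0}\,H_{0}^{2}-|\nu|\left(H^{2}-H_{0}^{2}\right)=0
\quad\Longrightarrow\quad
H=H_{0}\sqrt{\frac{\Omega_{{\rm vac},0}+|\nu|}{|\nu|}},
\]
which is precisely Eq. (19).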
### Model 2
In figures 6, 7, 8, and 9, we present the numerical results for model 2, obtained by the integration of Eq. (13) with \(s=1\). As a reminder, this model corresponds to the second class of models with \(\tilde{\nu}=\nu/2\).
In figure 6, we show the density parameters \(\Omega_{i,2}\) associated to each fluid component as a function of redshift \(z\), as well as the counterpart density parameters \(\Omega_{i}\) of the \(\Lambda\)CDM model, according to Eq. (17), for further comparison. In figure 6(a) we show the numerical results for the density parameters obtained for different values of \(\hat{\xi}_{0}\) and positive \(\nu\), while in figure 6(b) negative \(\nu\) are considered. From these figures we can see how the bulk viscosity and the running vacuum affect the redshift value (which we call \(z_{eq,2}\)) at which \(\Omega_{r,2}=\Omega_{m,2}\), while the other intersection, \(\Omega_{m,2}=\Omega_{\rm vac,2}\), remains essentially at the same redshift value, independent of dissipation and running. In this sense, and contrary to what happens in model 1, both positive and negative \(\nu\) lead to \(z_{eq,2}<z_{eq}\), where \(z_{eq}\) is the redshift value at which \(\Omega_{r}=\Omega_{m}\) for the \(\Lambda\)CDM model. Nevertheless, and in comparison to what happens in model 1, the values of \(z_{eq,2}\) are closer to \(z_{eq}\) for all \(\nu<0\). This is a consequence of the behaviour of \(\rho_{\rm vac}\), as we will argue below.
In figure 7, we depict the differences \(\Delta\Omega_{i,2}=\Omega_{i,2}-\Omega_{i}\) of the density parameters associated to each fluid component with respect to the corresponding ones in the \(\Lambda\)CDM model as a function of redshift \(z\). In figure 7(a) we present the numerical results of the variation of the density parameters obtained for different values of \(\hat{\xi}_{0}\) and positive \(\nu\), while in figure 7(b) we present the corresponding results with negative \(\nu\). From these figures we can see (in greater detail than in figure 6) how the bulk viscosity and the running vacuum affect the evolution of the density parameters \(\Omega_{r,2}\), \(\Omega_{m,2}\), and \(\Omega_{\rm vac,2}\). Along this line, it can be noted how a positive \(\nu\) implies a greater value of \(\Omega_{\rm vac,2}\) in comparison to \(\Omega_{\Lambda}\), while a negative \(\nu\) implies a lower value of \(\Omega_{\rm vac,2}\) in comparison to \(\Omega_{\Lambda}\). But, contrary to what happens in model 1, these differences occur only at low redshift, because at very high redshift there are no remarkable differences in the three fluids with respect to the \(\Lambda\)CDM model. This is an expected result, as can be seen from Eq. (1), because in this model we add to the vacuum energy density an extra contribution that depends on \(\dot{H}=-H(1+z)dH/dz\), which is a negative contribution for \(H>0\), considering that \(dH/dz\) is positive as \(z\) grows. Therefore, at very high redshift only the bulk viscous constant \(\hat{\xi}_{0}\) affects the evolution of the fluids; its contribution is negligible due to its small values, but visible between the current time and a high redshift, most notably at \(z+1\approx 10^{3}-10^{4}\). This analysis is in agreement with the critical points presented in Table 4, where we can see, for example, that one critical point for \(\Omega_{r}\) is 1 (in the plot the point is 0). It is important to mention that the differences between \(\Omega_{\rm vac,2}\) and \(\Omega_{\Lambda}\) are smaller for \(\nu<0\) because in this case, following Eq. (1), the contribution of the \(\dot{H}\) term in the expansion is positive and the contribution of the \(H^{2}\) term is negative. This last analysis indicates that the major contribution to \(\rho_{\rm vac}\) comes from the \(H^{2}\) term rather than the \(\dot{H}\) term.
In figure 8, we depict the effective barotropic index \(\omega_{\rm eff,2}\), according to Eq. (15), and its deviation with respect to the effective barotropic index \(\omega_{\rm eff}\) of the \(\Lambda\)CDM model, obtained from Eq. (18), which is defined by \(\Delta\omega_{\rm eff,2}=\omega_{\rm eff,2}-\omega_{\rm eff}\). In particular, in figure 8(a) the effective barotropic index and the difference \(\Delta\omega_{\rm eff,2}\) are shown as a function of redshift for different \(\hat{\xi}_{0}\) values and \(\nu>0\). For comparison, the corresponding quantity for the standard \(\Lambda\)CDM model is displayed. The same representation is shown in figure 8(b) for different \(\hat{\xi}_{0}\) values and \(\nu<0\). From these figures we can see how the bulk viscosity and the running vacuum affect the evolution of \(\omega_{\rm eff,2}\), which is remarkably different when \(|\nu|\) takes larger values. Nevertheless, this behavior is a consequence of the small size of \(\hat{\xi}_{0}\), since there are appreciable effects for greater values of this parameter. Focusing on the effects of the sign of \(\nu\), we can see that for both positive and negative \(\nu\) the values of \(\omega_{\rm eff,2}\) are greater than \(\omega_{\rm eff}\) at high redshift, with a change of this behaviour at low redshift (similar to what happens in model 1 for \(\nu<0\)). Moreover, and contrary to what happens in model 1, at very high redshift there are no differences between \(\omega_{\rm eff,2}\) and \(\omega_{\rm eff}\), again due to the negligible behaviour of \(\Omega_{\rm vac,2}\) at this redshift.
In figure 9, we depict the vacuum energy density normalized with respect to its current value as a function of redshift \(z\), as well as the normalized vacuum energy density of the \(\Lambda\)CDM model for further comparison. In figure 9(a) the normalized vacuum energy density is displayed for different values of \(\hat{\xi}_{0}\) and positive \(\nu\), while negative \(\nu\) values are presented in figure 9(b). From these figures we can see how the bulk viscosity and the running vacuum affect the evolution of the vacuum energy density. In this sense, an increment in \(\hat{\xi}_{0}\) does not appreciably affect the evolution of \(\rho_{\rm vac}\), contrary to what happens with an increment in the values of \(|\nu|\) (and note that this effect is not negligible). This behaviour is due to the fact that the bulk viscosity affects the evolu
Figure 2: Plots of density parameters associated to each fluid \(\Omega_{i,1}\) for **Model 1** as a function of redshift \(z\), for different \(\hat{\xi}_{0}\)-values (solid lines). Positive and negative \(\nu\)-values are respectively considered in **(a)** and **(b)**. The dashed lines correspond to the density parameters \(\Omega_{i}\) for \(\Lambda\)CDM model, obtained from Eq. (17), where \(i\) stands for \(r\) (radiation), \(m\) (matter), and vac (vacuum). This model corresponds to the first class of models where \(\tilde{\nu}=0\) with \(s=1/2\), whose solutions are obtained by the numerical integration of Eq. (12). The x-axis is presented in the \(z+1\) range in order to obtain a better representation in the logarithm scale.
Figure 3: Plots of the variation of the density parameters \(\Delta\Omega_{i,1}\) associated to each fluid \(\Omega_{i,1}\) for **Model 1**, with respect to their \(\Lambda\)CDM counterparts \(\Omega_{i}\), as a function of redshift \(z\), for different values of \(\hat{\xi}_{0}\). Positive and negative \(\nu\)-values are respectively considered in **(a)** and **(b)**. The curves are obtained through the expression \(\Delta\Omega_{i,1}=\Omega_{i,1}-\Omega_{i}\), where \(i\) stands for \(r\) (radiation), \(m\) (matter), and vac or \(\Lambda\) (vacuum).
tion of \(\rho_{\rm vac}\) through the Hubble parameter and its time derivative according to Eq. (1), and therefore a remarkable difference in the evolution of \(H\) would be necessary, which does not appear due to the small values of \(\hat{\xi}_{0}\). On the other hand, depending on the sign of \(\nu\), it is possible to obtain an always positive vacuum energy density when \(\nu>0\), or a vacuum energy density that experiences a transition from positive to negative values when \(\nu<0\). From Eq. (1), this transition occurs when
\[H^{2}=C(1+z)+\frac{H_{0}^{2}\left[\Omega_{\rm vac,0}+\frac{|\nu|}{2}(1-q_{0}) \right]}{|\nu|}, \tag{20}\]
where \(q_{0}\) is the current value of the deceleration parameter, related to the Hubble parameter through the expres
Figure 4: **(left)** Plot of the effective barotropic index \(\omega_{\rm eff,1}\) for **Model 1**, obtained from Eq. (15), as a function of redshift \(z\). **(right)** Plot of the variation of the effective barotropic index \(\Delta\omega_{\rm eff,1}=\omega_{\rm eff,1}-\omega_{\rm eff}\), where \(\omega_{\rm eff}\) corresponds to its \(\Lambda\)CDM counterpart obtained from Eq. (18), as a function of redshift \(z\). Positive and negative \(\nu\)-values are respectively considered in **(a)** and **(b)**, for the same values of \(\hat{\xi}_{0}\).
Figure 5: Plots of vacuum energy density \(\rho_{\rm vac}\) for **Model 1**, normalized with respect to their current value \(\rho_{\rm vac,0}\), as a function of redshift \(z\). Positive and negative \(\nu\)-values are respectively considered in **(a)** and **(b)**, for different \(\hat{\xi}_{0}\)-values. Notice that we have plotted \(\rho_{\rm vac}/\rho_{\rm vac,0}-2\) in order to obtain a better representation in the symmetrical logarithm scale. As a reference, we have used \(\Lambda=4.24\pm 0.11\times 10^{-66}\) eV\({}^{2}\)[6] to compute the present vacuum energy density \(\rho_{\rm vac,0}\).
sion \(\dot{H}/H^{2}=-(1+q)\), and \(C\) is an integration constant. The above equation depends explicitly in the r.h.s on the redshift, contrary to whats happens in Eq. (19) which depends in their r.h.s only on constant terms. Therefore, it is necessary to know the dependency in \(z\) of \(H\) in order to obtain the redshift in which the change of sing in the vacuum energy density occurs. This kind of behavior, where a dynamical vacuum energy density can takes negative values at a finite redshift has been considered to alleviated low-redshift tensions, including the \(H_{0}\) tension,[50, 51, 52, 53]. It is interesting to note that this change of sing also occurs in other approaches, as for example the graduated dark energy, which phenomenologically describes a cosmological constant whose sign changes at a certain redshift, becoming positive just in the late time evolution [50]. It is important to mention that the maximal contribution of the running vacuum energy density reaches \(10^{17}\) at high redshift (\(z=10^{6}\)), which is two order of magnitude smaller than the one obtained for the model 1. Still, as the maximal value for the Hubble parameter becomes very large, the RG-inspired Ansatz of Eq.(1) may go beyond its validity range. This issue cannot be addressed without the inclusion of further \(H\)-power contributions, which again goes beyond this present work.
### Model 3
In figures 10, 11, 12, and 13, we present the numerical results for model 3, obtained by the integration of Eq. (13) with \(s=1/2\). As a reminder, this model corresponds to the second class of models where \(\tilde{\nu}=\nu/2\).
In figure 10, we depict the density parameters \(\Omega_{i,3}\) associated to each fluid as a function of redshift \(z\), as well as the density parameters \(\Omega_{i}\) associated to each fluid for the \(\Lambda\)CDM model, according to Eq. (17), for further comparison. In figure 10(a) we present the numerical results for the density parameters obtained for the different values of \(\hat{\xi}_{0}\) and positive \(\nu\), while in figure 10(b) we present those obtained for the different values of \(\hat{\xi}_{0}\) and negative \(\nu\). From these figures we can see how the bulk viscosity and the running vacuum affect the redshift value at which \(\Omega_{r,3}=\Omega_{m,3}\) (\(z_{eq,3}\)), without any appreciable effect on the redshift value at which \(\Omega_{m,3}=\Omega_{\rm vac,3}\). In this sense, and similarly to what happens in model 2, both positive and negative \(\nu\) imply that \(z_{eq,3}<z_{eq}\), where \(z_{eq}\) is the redshift at which \(\Omega_{r}=\Omega_{m}\); and the values of \(z_{eq,3}\) are closer to \(z_{eq}\) for \(\nu<0\) in comparison to their counterparts in model 1. This analysis is in agreement with the dynamical system analysis made in section III.3, where it was indicated that the critical points of this model are the same as those of model 2 (together with one critical point of model 1). Therefore, considering that among the critical points exclusive to this model, presented in Table 6, the radiation dominated period is absent, it is expected that this model behaves similarly to model 2. It is important to mention that some differences between this model and model 2 are due to the fact that, in this model, the figures are presented in the range \(0.1\leq z+1\leq 10^{5}\) due to numerical difficulties.
In figure 11, we depict the variation of the density parameters associated to each fluid with respect to the \(\Lambda\)CDM model as a function of redshift \(z\), according to the expression \(\Delta\Omega_{i,3}=\Omega_{i,3}-\Omega_{i}\). In figure 11(a) we present the numerical results of the variation of the density parameters obtained for the different values of \(\hat{\xi}_{0}\) and positive \(\nu\), while in figure 11(b) we present those obtained for the different values of \(\hat{\xi}_{0}\) and negative \(\nu\). From these figures we can see (in greater detail than in figure 10) how the bulk viscosity and the running vacuum affect the evolution of the density parameters \(\Omega_{r,3}\), \(\Omega_{m,3}\), and \(\Omega_{\rm vac,3}\). Along this line, and similarly to what happens in model 2, note how a positive \(\nu\) implies a greater value of \(\Omega_{\rm vac,3}\) in comparison to \(\Omega_{\Lambda}\), while a negative \(\nu\) implies a lower value of \(\Omega_{\rm vac,3}\) in comparison to \(\Omega_{\Lambda}\). But, contrary to what happens in model 1, these differences occur only at low redshift, because at high redshift \(\Omega_{\rm vac}\) becomes appreciably negligible. Unfortunately, due to the numerical issues, we are not able to ensure that at very high redshift there are no remarkable differences in the three fluids with respect to the \(\Lambda\)CDM model, as in model 2. Nevertheless, this behaviour is plausible taking into account the similar behaviour with respect to model 2 seen above. Again, this is an expected result considering that the \(\dot{H}\) term in the expansion for \(\rho_{\rm vac}\), given by Eq. (1), represents a negative contribution while the \(H^{2}\) term is a positive contribution (for \(\nu<0\) the behaviour of these terms changes, but the contribution of \(H^{2}\) is more important than that of \(\dot{H}\), leading to the smaller contribution of \(\Omega_{\rm vac,3}\) with respect to \(\Omega_{\Lambda}\)). It is important to note that, when we compare these figures with their model 2 counterparts, we can see how the bulk viscosity affects the evolution of the density parameters through the choice of the power \(s\). This is notable for the case \(\nu<0\) with \(\hat{\xi}_{0}=1\times 10^{-4}\) and \(\nu=-5\times 10^{-4}\).
In figure 12, we depict the effective barotropic index \(\omega_{\rm eff,3}\), according to Eq. (15), and its variation with respect to the effective barotropic index \(\omega_{\rm eff}\) of the \(\Lambda\)CDM model, obtained from Eq. (18), through the expression \(\Delta\omega_{\rm eff,3}=\omega_{\rm eff,3}-\omega_{\rm eff}\). In figure 12(a) we present the numerical results for the barotropic index and its respective variation with respect to the \(\Lambda\)CDM model obtained for the different values of \(\hat{\xi}_{0}\) and positive \(\nu\), while in figure 12(b) we present those obtained for the different values of \(\hat{\xi}_{0}\) and negative \(\nu\). From these figures we can see how the bulk viscosity and the running vacuum affect the evolution of \(\omega_{\rm eff,3}\), with the most remarkable differences when \(|\nu|\) takes greater values, similarly to what happens in model 2. Indeed, there is no appreciable difference between these figures and their corresponding model 2 counterparts. Therefore, as in model 2, regardless of the sign of \(\nu\), the values of \(\omega_{\rm eff,3}\) are greater than the \(\omega_{\rm eff}\) values at high redshift, with a
Figure 6: Plots of density parameters associated to each fluid \(\Omega_{i,2}\) for **Model 2** as a function of redshift \(z\), for different values of \(\hat{\xi}_{0}\) (solid lines). Positive and negative \(\nu\)-values are respectively considered in **(a)** and **(b)**. The dashed lines correspond to the density parameters \(\Omega_{i}\) for the \(\Lambda\)CDM model, obtained from Eq. (17), where \(i\) stands for \(r\) (radiation), \(m\) (matter), and vac (vacuum). This model corresponds to the second class of models where \(\tilde{\nu}=\nu/2\) with \(s=1\), whose solutions are obtained by the numerical integration of Eq. (13).
Figure 7: Plots of the variation of the density parameters \(\Delta\Omega_{i,2}\) associated to each fluid \(\Omega_{i,2}\) for **Model 2**, with respect to their \(\Lambda\)CDM counterparts \(\Omega_{i}\), as a function of redshift \(z\), for different values of \(\hat{\xi}_{0}\). Positive and negative \(\nu\)-values are respectively considered in **(a)** and **(b)**. The curves are obtained through the expression \(\Delta\Omega_{i,2}=\Omega_{i,2}-\Omega_{i}\), where \(i\) stands for \(r\) (radiation), \(m\) (matter), and vac (vacuum).
change of this behaviour at low redshift (similar to what happens in model 1 for \(\nu<0\)). Unfortunately, due to the numerical issues, we are not able to ensure that at very high redshift there are no remarkable differences between \(\omega_{\text{eff},3}\) and \(\omega_{\text{eff}}\), as in model 2. Nevertheless, this behaviour is plausible taking into account the similar behaviour with respect to model 2 seen above.
In figure 13, we depict the vacuum energy density, normalized with respect to its current value, as a function of redshift \(z\), as well as the normalized vacuum energy density of the \(\Lambda\)CDM model for further comparison. In figure 13(a) we present the numerical results of the normalized vacuum energy density obtained for the different values of \(\hat{\xi}_{0}\) and positive \(\nu\), while in figure 13(b)
are presented the numerical results obtained for the different values of \(\hat{\xi}_{0}\) and negative \(\nu\). From these figures we can see how the bulk viscosity and the running vacuum affect the evolution of the vacuum energy density. In this sense, and similarly to what happens in model 2, it is clear that an increment in \(\hat{\xi}_{0}\) does not remarkably affect the evolution of \(\rho_{\rm vac}\), contrary to what happens when we increment the values of \(|\nu|\) (an effect that is not negligible). Again, this behaviour is due to the fact that the bulk viscosity affects the evolution of \(\rho_{\rm vac}\) through the Hubble parameter and its time derivative according to Eq. (1); therefore, a remarkable difference in the evolution of \(H\) would be required, which does not appear due to the small values of \(\hat{\xi}_{0}\). On the other hand, depending on the sign of \(\nu\) it is possible to obtain an always positive vacuum energy density when \(\nu>0\), or a vacuum energy density that undergoes a transition from positive to negative values when \(\nu<0\). The latter occurs when the equality given by Eq. (20) is fulfilled. Hence, the evolution of \(\rho_{\rm vac}\) in this model is similar to that of model 2, by the same arguments made for that model (and comparing these figures with their model 2 counterparts).
## V Conclusions and final remarks
In the present article, we have performed a detailed study of two concrete running vacuum models under a non-perturbative dynamical system perspective including, additionally, dissipation of the matter component through a general bulk viscosity given by the Ansatz \(\hat{\xi}\sim H^{1-2s}\rho_{m}^{s}\), which has been recently proposed in [44]. To be more precise, we have combined two non-trivial effects usually studied separately: i) a running vacuum scenario, which basically assumes that the vacuum energy density, \(\rho_{\rm vac}\), is replaced by its scale-dependent/running counterpart, enriching the potential cosmological solutions of the associated models, and ii) a more realistic dissipative fluid description of matter, parameterized by a non-trivial bulk viscosity, \(\hat{\xi}\), which depends on a combination of the DM energy density \(\rho_{m}\) and the Hubble parameter, providing a slight but relevant modification of the cosmic evolution of the universe. It is worthwhile mentioning that the inclusion of bulk viscosity from a microscopic point of view is still an open task, which was not addressed here.
Concretely, we were mainly interested in two classes of models: i) \(\tilde{\nu}=0\), i.e., ignoring the derivative term contained in the Ansatz for the vacuum energy density, and ii) setting \(\nu=2\tilde{\nu}\), since it has the potential advantage of alleviating some tensions present in the \(\Lambda\)CDM cosmological model [47]. Notice that, although we parameterize our Ansatz only by \(\nu\) as \(\tilde{\nu}=\nu/2\), the combined effects of \(H\) and \(\dot{H}\) are still present and hence have non-trivial physical consequences. Let us reinforce that Model 1 corresponds to the first class of models, given by Eqs. (12), with \(s=1/2\); while Models 2, 3, and 4 correspond to the second class of models, given by Eqs. (13), with \(s=1\), \(1/2\), and \(0\), respectively (Table 1 summarizes our notation). In this respect, the take-home message for each model is summarized as follows:
* Model 1: The critical point (Ia) obviously describes a non-canonical radiation point, as \(\nu\) appears in its associated energy density parameter (and naturally in the effective EoS parameter \(\omega_{\rm eff}\)). Also, notice that this critical point does not include the bulk viscosity parameter \(\hat{\xi}_{0}\). By contrast, the critical point (Id) includes both running effects and bulk viscosity dissipation, giving rise to the same \(\omega_{\rm eff}\) observed in case (Ia). Such a "coincidence" means that there is a sort of degeneracy, i.e., in principle, we could not distinguish the fixed points (Ia) and (Id) by reading only the equation of state, despite the fact that they give rise to different cosmological evolutions. Point (Ie) is not altered by the inclusion of bulk viscosity and running vacuum models, and it possesses the same \(\omega_{\rm eff}\) as the critical point (Ic), i.e., one is again unable to recognize the effects of bulk viscosity (which are evident in \(\rho_{\rm vac}\)) just by checking the effective EoS. Point (Ib) accounts for dark matter domination and includes running effects (of the vacuum energy density) and also the effects of the bulk viscosity, but on the EoS only. Finally, Table 3 summarizes the corresponding eigenvalues and stability conditions for these points. Irrespective of the precise values of the critical points, the impact of running vacuum appears to be more profound in setting the stability conditions than the modifications coming from bulk viscosity. Also, the phase space diagram reveals that the system shows an attractor character after a saddle-like behaviour, independent of the initial conditions but strongly dependent on the model parameters. For instance, if \(\hat{\xi}_{0}\sim{\cal O}(1)\) the DM-like fixed point corresponds to an attractor, while for \(\hat{\xi}_{0}\ll 1\) it corresponds to a saddle point. The former case represents, indeed, a unified dark fluid scenario, in which the accelerated expansion is driven by the dark matter component, and the latter corresponds to the usual cosmic evolution, with the DE responsible for the accelerated expansion of the universe.
* Model 2: This second case accounts for the more general situation addressed in this paper, i.e., \(\tilde{\nu}\neq 0\). Here we have four critical points; remarkably, two of them (IIa and IIb) are not sensitive to the inclusion of the running of the vacuum energy density and of dissipation, as the density parameters for these points are independent of the values of \(\nu\) and \(\hat{\xi}_{0}\). On the contrary, points IIc and IId depend strongly on \(\nu\) and \(\hat{\xi}_{0}\). In particular, notice that point IIc encodes an intermediate period which is
Figure 10: Plots of density parameters associated to each fluid \(\Omega_{i,3}\) for **Model 3** as a function of redshift \(z\), for different values of \(\hat{\xi}_{0}\) (solid lines). Positive and negative \(\nu\)-values are respectively considered in **(a)** and **(b)**. The dashed lines correspond to the density parameters \(\Omega_{i}\) for the \(\Lambda\)CDM model, obtained from Eq. (17), where \(i\) stands for \(r\) (radiation), \(m\) (matter), and vac (vacuum). This model corresponds to the second class of models where \(\tilde{\nu}=\nu/2\) with \(s=1/2\), whose solutions are obtained by the numerical integration of Eq. (13).
Figure 11: Plots of the variation of the density parameters \(\Delta\Omega_{i,3}\) associated to each fluid \(\Omega_{i,3}\) for **Model 3**, with respect to their \(\Lambda\)CDM counterparts \(\Omega_{i}\), as a function of redshift \(z\), for different values of \(\hat{\xi}_{0}\). Positive and negative \(\nu\)-values are respectively considered in **(a)** and **(b)**. The curves are obtained through the expression \(\Delta\Omega_{i,3}=\Omega_{i,3}-\Omega_{i}\), where \(i\) stands for \(r\) (radiation), \(m\) (matter), and vac (vacuum).
categorized as non-standard dark matter, involving simultaneously the effects due to the running and the viscosity. Interestingly, for a weak \(\nu\)-coupling the energy density parameters associated to matter and vacuum energy density are slightly modified as follows:
\[\Omega_{m} =1-\frac{1}{4}\nu\Big{(}1+3\hat{\xi}_{0}\Big{)}\,+\,\mathcal{O}(\nu^{2}), \tag{21}\] \[\Omega_{\text{vac}} =\frac{1}{4}\nu\Big{(}1+3\hat{\xi}_{0}\Big{)}\,+\,\mathcal{O}(\nu^{2}). \tag{22}\]
Point IId is even more involved, corresponding to standard radiation domination for a concrete value
Figure 12: **(left)** Plot of the effective barotropic index \(\omega_{\text{eff},3}\) for **Model 3**, obtained from Eq. (15), as a function of redshift \(z\). **(right)** Plot of the variation of the effective barotropic index \(\Delta\omega_{\text{eff},3}=\omega_{\text{eff},3}-\omega_{\text{eff}}\), where \(\omega_{\text{eff}}\) corresponds to its \(\Lambda\)CDM counterpart obtained from Eq. (18), as a function of redshift \(z\). Positive and negative \(\nu\)-values are respectively considered in **(a)** and **(b)**, for the same values of \(\hat{\xi}_{0}\).
of \(\nu\), but in general it corresponds to DE domination. With respect to the stability of this model we can confirm, for certain values of \(\nu\) and \(\hat{\xi}_{0}\), that: i) point IIa is a repeller; ii) point IIb is an attractor; iii) point IIc can be a saddle or an attractor; and iv) point IId is an attractor.
* Model 3: This case is based on the differential system of Eq. (13) with \(s=1/2\). Most of the properties of this model also appear in Model 1 and Model 2; hence, we will not discuss them again. Instead, we comment on the novelty: taking the Ansatz of Eq. (5), we notice the existence of a two-parameter (viscosity-running) family of solutions. In fact, points (IIc) and (IIIa) have the same effective EoS in the \(\hat{\xi}_{0}\to 0\) limit.
* Model 4: Consistent with the result achieved in our previous paper [44], in which the running effects were not included, the present model with exponent \(s=0\) must be discarded, as it cannot successfully describe the whole cosmological evolution of the universe. Unfortunately, the inclusion of a running vacuum energy density does not modify this fact, i.e., in the present case the radiation-dominated period is still absent, which is why this cosmological scenario is not of physical interest.
Thus, the inclusion of both a running vacuum energy density and a dissipative DM component enriches the whole physical scenario by adding non-trivial critical points absent in their non-running and non-viscous counterparts. For example, the inclusion of bulk viscosity can drive the current accelerated expansion of the Universe and provide a new phantom-like behaviour, depending on the values of \(\hat{\xi}_{0}\) and \(\nu\). We notice that the effect of the running vacuum energy density is not present in the late-time dynamics of the universe for the realization of model 1.
The results obtained from the dynamical system analysis were complemented by the numerical integration of models 1, 2, and 3, discarding model 4 beforehand due to the absence of a radiation-dominated critical point. In this sense, the numerical results for the successful models show their capability of describing the three eras of cosmic evolution: the radiation, matter, and dark energy domination eras.
An interesting feature is that the corresponding redshift value (\(z_{eq,i}\)) at which \(\Omega_{r,i}=\Omega_{m,i}\) (where \(i\) stands for models 1, 2, and 3) depends on the particular model, while the redshift value at which \(\Omega_{m,i}=\Omega_{\rm vac,i}\) remains essentially unchanged compared with the prediction of \(\Lambda\)CDM for all three models. This is a very desirable result for the proposed extension of the standard cosmological model, in agreement with some observational inferences. Nevertheless, model 1 has the feature that a non-trivial combination of the values of \(\nu\) and \(\hat{\xi}_{0}\) may lead to the same redshift value at which \(\Omega_{r,1}=\Omega_{m,1}\), i.e., \(z_{eq,1}=z_{eq}\), while for the other models we were unable to find suitable values fulfilling this property.
On the other hand, all these models lead to a larger contribution of \(\Omega_{\rm vac}\) when compared to the \(\Lambda\)CDM model for positive \(\nu\)-values. On the contrary, negative values of \(\nu\) reduce \(\Omega_{\rm vac}\). Interestingly, the difference of the vacuum density parameter, \(\Delta\Omega_{\rm vac}\), presents a nearly constant variation for model 1 from \(z+1\approx 10\) up to very high redshift, which appreciably deviates from the value of the standard cosmological model, as can be seen in fig. 3. Nevertheless, in models 2 and 3 such differences become approximately negligible at very high redshift. This feature comes from the extra term present in \(\rho_{\rm vac}\), which depends on \(\dot{H}\) and gives a negative contribution. The same reasoning applies to the behavior of \(\Delta\omega_{\rm eff}\) for these models. In fact, as the redshift increases this difference goes to zero, as can be seen in fig. 12.
Finally, when comparing the vacuum energy density at high redshift to its observed current value, it turns out that the largest difference occurs for model 1, which is again due to the contribution of \(\dot{H}\) to \(\rho_{\rm vac}\) in these models. A remarkable feature of \(\rho_{\rm vac}\) is its sign change during the matter domination period, depending on the sign of \(\nu\). This change of sign occurs according to Eq. (19) for model 1 and Eq. (20) for models 2 and 3. It is important to mention that the leading contribution to the variation of \(\rho_{\rm vac}\) comes from \(\nu\). Nevertheless, the dissipative effects certainly affect the whole evolution of the system, as was inferred from the dynamical system analysis.
An overall and encouraging conclusion from the dynamical system analysis is that we have obtained new critical points of cosmological relevance that arise from the combined effects of a running vacuum energy density and a dissipative DM component. Hence, the associated cosmological models are characterized by two parameters describing the above-mentioned effects. This is appealing in light of the current tensions affecting the \(\Lambda\)CDM model, because of the potential to successfully describe current observations thanks to the enlarged parameter space characterizing these extended models. Further constraints on the model parameters can be obtained from high-quality observational data, using the parameter regions derived from the dynamical system analysis as priors for the statistical treatment. This task will be addressed in future works.
## Acknowledgements
G. G. acknowledges the financial support from Agencia Nacional de Investigacion y Desarrollo (ANID) through the FONDECYT postdoctoral Grant No. 3210417. E. G. acknowledges the support of Direccion de Investigacion y Postgrado at Universidad de Aconeagua. G. P. acknowledges the financial support from Dicty-USACH Grant No. 042231PA. A.R. and N.C. acknowledge Universidad de Santiago de Chile for financial support through the Proyecto POSTDOCIDCYT, Codigo
042231CM-Postdoc.
|
2305.13430
|
Introduction to Robust Power Domination
|
Sensors called phasor measurement units (PMUs) are used to monitor the
electric power network. The power domination problem seeks to minimize the
number of PMUs needed to monitor the network. We extend the power domination
problem and consider the minimum number of sensors and appropriate placement to
ensure monitoring when $k$ sensors are allowed to fail with multiple sensors
allowed to be placed in one location. That is, what is the minimum multiset of
the vertices, $S$, such that for every $F\subseteq S$ with $|F|=k$, $S\setminus
F$ is a power dominating set. Such a set of PMUs is called a $k$-robust power
domination set. This paper generalizes the work done by Pai, Chang and Wang in
2010 on vertex-fault-tolerant power domination, which did not allow for
multiple sensors to be placed at the same vertex. We provide general bounds and
determine the $k$-robust power domination number of some graph families.
|
Beth Bjorkman, Esther Conrad
|
2023-05-22T19:17:17Z
|
http://arxiv.org/abs/2305.13430v1
|
# Introduction to Robust Power Domination
###### Abstract
Sensors called phasor measurement units (PMUs) are used to monitor the electric power network. The power domination problem seeks to minimize the number of PMUs needed to monitor the network. We extend the power domination problem and consider the minimum number of sensors and appropriate placement to ensure monitoring when \(k\) sensors are allowed to fail with multiple sensors allowed to be placed in one location. That is, what is the minimum multiset of the vertices, \(S\), such that for every \(F\subseteq S\) with \(|F|=k\), \(S\setminus F\) is a power dominating set. Such a set of PMUs is called a \(k\)_-robust power domination set_. This paper generalizes the work done by Pai, Chang and Wang in 2010 on vertex-fault-tolerant power domination, which did not allow for multiple sensors to be placed at the same vertex. We provide general bounds and determine the \(k\)-robust power domination number of some graph families.
**Keywords:** robust power domination, power domination, tree
**AMS subject classification:** 05C69, 05C85, 68R10, 94C15
## 1 Introduction
The power domination problem seeks to find the placement of the minimum number of sensors called phasor measurement units (PMUs) needed to monitor an electric power network. In [3], Haynes et al. defined the power domination problem in graph theoretic terms by placing PMUs at a set of initial vertices and then applying observation rules to the vertices and edges of the graph. This process was simplified by Brueni and Heath in [1].
Pai, Chang, and Wang [5] generalized power domination to create _vertex-fault-tolerant power domination_ in 2010 to model the possibility of sensor failure. The \(k\)_-fault-tolerant power domination problem_ seeks to find the minimum number of PMUs needed to monitor a power network (and their placements) given that any \(k\) of the PMUs will fail. The vertex containing the failed PMU remains in the graph, as do its edges; it is only the PMU that fails. This generalization allows for the placement of only one PMU per vertex.
We consider the related problem of the minimum number of PMUs needed to monitor a power network given that \(k\) PMUs will fail _but also allow for multiple PMUs to be placed at a given vertex_. We call this _PMU-defect-robust power domination_, as it is not the vertices that cause a problem with monitoring the network, but the individual PMUs themselves. This models potential synchronization issues, sensor errors, or malicious interference with the sensor outputs.
To demonstrate the difference between vertex-fault-tolerant power domination and PMU-defect-robust power domination and how drastic the difference between these two parameters can be, consider the star on 16 vertices with \(k=1\), shown in Figure 1. Notice that in vertex-fault-tolerant power domination, if one PMU is placed in the center of the star and this PMU fails, then all but one of the leaves must have PMUs in order to still form a power dominating set. However, with PMU-defect-robust power domination, placing two PMUs in the center is sufficient to ensure that even if one PMU fails, the power domination process will still observe all of the vertices.
In Section 2, we review definitions from past work and formally define PMU-defect-robust power domination. We also include some basic results in that section. Section 3 consists of general bounds for \(k\)-robust power domination and in Section 4 we demonstrate the tightness of these bounds with a family of complete bipartite graphs. In Section 5 we establish the \(k\)-robust power domination number for trees. Section 6 contains concluding remarks, including suggestions for future work.
Figure 1: A minimum vertex-fault-tolerant power dominating set and a minimum PMU-defect-robust power dominating set shown for a star when \(k=1\).
## 2 Preliminaries
We begin by giving relevant graph theory definitions. Then we define power domination, vertex-fault-tolerant power domination, and PMU-defect-robust power domination. Finally, we include useful properties of the floor and ceiling functions.
### Graph Theory
A graph \(G\) is a set of vertices, \(V(G)\), and a set of edges, \(E(G)\). Each (unordered) edge consists of a set of two distinct vertices; the edge \(\{u,v\}\) is often written as \(uv\). When \(G\) is clear, we write \(V=V(G)\) and \(E=E(G)\). A _path_ from \(v_{1}\) to \(v_{\ell+1}\) is a sequence of vertices and edges \(v_{1},e_{1},v_{2},e_{2},\ldots,v_{\ell},e_{\ell},v_{\ell+1}\) so that the \(v_{i}\) are distinct vertices and \(v_{i}\in e_{i}\) for all \(i\) and \(v_{i}\in e_{i-1}\) for all \(i\geq 2\). Such a path has _length_\(\ell\). The _distance_ between vertices \(u\) and \(v\) is the minimum length of a path between \(u\) and \(v\). A graph \(G\) is _connected_ if there is a path from any vertex to any other vertex. _Throughout what follows, we consider only graphs that are connected_.
We say that vertices \(u\) and \(v\) are _neighbors_ if \(uv\in E\). The _neighborhood_ of \(u\in V\) is the set containing all neighbors of \(u\) and is denoted by \(N(u)\). The _closed neighborhood_ of \(u\) is \(N[u]=N(u)\cup\{u\}\). The _degree_ of a vertex \(u\in V\) is the number of edges that contain \(u\), that is, \(\deg_{G}\left(u\right)=|N(u)|\). When \(G\) is clear, we omit the subscript. The _maximum degree_ of a graph \(G\) is \(\Delta\left(G\right)=\max\limits_{v\in V}\deg\left(v\right)\).
A _subgraph_\(H\) of a graph \(G\) is a graph such that \(V(H)\subseteq V(G)\) and \(E(H)\subseteq E(G)\). An _induced subgraph_\(H\) of a graph \(G\), denoted \(H=G[V(H)]\), is a graph with vertex set \(V(H)\subseteq V(G)\) and edge set \(E(H)=\{uv:u,v\in V(H)\) and \(uv\in E(G)\}\).
We refer the reader to _Graph Theory_ by Diestel [2] for additional graph terminology not detailed here.
### Power domination, vertex-fault-tolerant power domination, and PMU-defect-robust power domination
What follows is an equivalent statement of the power domination process as defined in [3], and established by [1].
The _power domination process_ on a graph \(G\) with initial set \(S\subseteq V\) proceeds recursively by:
1. \(B=\bigcup\limits_{v\in S}N[v]\)
2. While there exists \(v\in B\) such that exactly one neighbor, say \(u\), of \(v\) is _not_ in \(B\), add \(u\) to \(B\).
Step 1 is referred to as the _domination step_ and each repetition of step 2 is called a _zero forcing step_. During the process, we say that a vertex in \(B\) is _observed_ and a vertex not in \(B\) is _unobserved_. A _power dominating set_ of a graph \(G\) is an
initial set \(S\) such that \(B=V(G)\) at the termination of the power domination process. The _power domination number_ of a graph \(G\) is the minimum cardinality of a power dominating set of \(G\) and is denoted by \(\gamma_{P}\left(G\right)\).
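To experiment with this definition on small examples, the process can be sketched in a few lines of Python; the adjacency-dict representation and the function name below are our own choices and are not part of the cited works.

```python
def power_dominates(adj, S):
    """Run the power domination process starting from the vertex set S.

    adj : dict mapping each vertex to the set of its neighbors.
    S   : iterable of vertices carrying at least one PMU.
    Returns True if every vertex is observed when the process terminates.
    """
    # Domination step: observe the closed neighborhood of every vertex of S.
    observed = set()
    for v in set(S):
        observed.add(v)
        observed |= adj[v]
    # Zero forcing steps: an observed vertex with exactly one unobserved
    # neighbor forces that neighbor to become observed.
    changed = True
    while changed:
        changed = False
        for v in list(observed):
            unobserved = adj[v] - observed
            if len(unobserved) == 1:
                observed |= unobserved
                changed = True
    return observed == set(adj)
```

A set \(S\) is then a power dominating set exactly when `power_dominates(adj, S)` returns `True`.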
In [5], Pai, Chang, and Wang define the following variant of power domination. For a graph \(G\) and an integer \(k\) with \(0\leq k\leq\left|V\right|\), a set \(S\subseteq V\) is called a \(k\)_-fault-tolerant power dominating set of \(G\)_ if \(S\setminus F\) is still a power dominating set of \(G\) for any subset \(F\subseteq V\) with \(\left|F\right|\leq k\). The \(k\)_-fault-tolerant power domination number_, denoted by \(\gamma_{P}^{k}\left(G\right)\), is the minimum cardinality of a \(k\)-fault-tolerant power dominating set of \(G\).
While \(k\)-fault-tolerant power domination allows us to examine what occurs when a previously chosen PMU location is no longer usable (yet the vertex remains in the graph), it is also interesting to study when an individual PMU fails. That is, allow for multiple PMUs to be placed at the same location and consider if a subset of the PMUs fail. This also avoids issues with poorly connected graphs, such as in Figure 1, where \(\gamma_{P}^{1}\left(G\right)\) may be close to the number of vertices of \(G\). Thus we define _PMU-defect-robust power domination_ as follows.
**Definition 2.1**.: For a given graph \(G\) and integer \(k\geq 0\), we say that a multiset \(S\), each of whose elements is in \(V\), is a \(k\)_-robust power dominating set_ of \(G\) if \(S\setminus F\) is a power dominating set of \(G\) for any submultiset \(F\) of \(S\) with \(\left|F\right|=k\). We shorten \(k\)-robust power dominating set of \(G\) to \(k\)-rPDS of \(G\). The size of a minimum \(k\)-rPDS is denoted by \(\tilde{\gamma}_{P}^{k}\left(G\right)\) and such a multiset is also referred to as a \(\tilde{\gamma}_{P}^{k}\)-set of \(G\). The _number of PMUs_ at a vertex \(v\in S\) is its multiplicity in \(S\), denoted by \(\#\mathrm{PMU}_{S}\left(v\right)\), or when \(S\) is clear, by \(\#\mathrm{PMU}\left(v\right)\).
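Definition 2.1 can be checked by brute force on small graphs and multisets; the sketch below reuses the `power_dominates` helper above and enumerates every failure pattern of size \(k\), so it is exponential in \(|S|\) and intended only for toy examples.

```python
from collections import Counter
from itertools import combinations

def is_k_robust_pds(adj, S, k):
    """Check whether the multiset S (a list of vertices, with repetition
    encoding multiple PMUs at a vertex) is a k-rPDS of the graph adj."""
    pmus = list(S)
    if len(pmus) < k + 1:
        return False
    seen = set()
    for idx in combinations(range(len(pmus)), k):
        failed = Counter(pmus[i] for i in idx)
        key = frozenset(failed.items())
        if key in seen:  # this submultiset of failures was already tried
            continue
        seen.add(key)
        remaining = Counter(pmus) - failed
        if not power_dominates(adj, remaining.keys()):
            return False
    return True
```

On the star of Figure 1, for instance, placing two PMUs on the center (a multiset with the center repeated twice, \(k=1\)) passes this check, matching the discussion in the introduction.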
There are several observations that one can quickly make.
**Observation 2.2**.: _Let \(G\) be a graph and \(k\geq 0\). Then_
1. \(\tilde{\gamma}_{P}^{0}\left(G\right)=\gamma_{P}^{0}\left(G\right)=\gamma_{P} \left(G\right)\)_,_
2. \(\tilde{\gamma}_{P}^{k}\left(G\right)\leq\gamma_{P}^{k}\left(G\right)\)_,_
3. \(\gamma_{P}\left(G\right)=1\) _if and only if_ \(\tilde{\gamma}_{P}^{k}\left(G\right)=k+1\)_._
For any minimum \(k\)-rPDS, having more than \(k+1\) PMUs at a single vertex is redundant.
**Observation 2.3**.: _Let \(G\) be a graph and \(k\geq 0\). If \(S\) is a \(\tilde{\gamma}_{P}^{k}\)-set of \(G\), then for all \(v\in S\) we have \(\#\mathrm{PMU}\left(v\right)\leq k+1\)._
### Floor and ceiling functions
Throughout what follows, recall the following rules for the floor and ceiling functions. Most can be found in Chapter 3 in [4] and we provide proofs for the rest.
**Proposition 2.4**.: [4, Equation 3.11] _If \(m\) is an integer, \(n\) is a positive integer, and \(x\) is any real number, then_
\[\left\lceil\frac{\left\lceil x\right\rceil+m}{n}\right\rceil=\left\lceil\frac{ x+m}{n}\right\rceil.\]
**Proposition 2.5**.: [4, Ch. 3 Problem 12] _If \(m\) is an integer and \(n\) is a positive integer, then_
\[\left\lceil\frac{m}{n}\right\rceil=\left\lfloor\frac{m-1}{n}\right\rfloor+1.\]
**Proposition 2.6**.: [4, Equation 3.4] _For any real number \(x\), \(\left\lceil-x\right\rceil=-\lfloor x\rfloor\)._
**Proposition 2.7**.: _If \(x\) and \(y\) are real numbers then_
\[\left\lceil x\right\rceil+\left\lceil y\right\rceil-1\leq\left\lceil x+y\right\rceil.\]
Proof.: Observe that \(\left\lceil x\right\rceil-1+\left\lceil y\right\rceil-1<x+y\) and so \(\left\lceil x\right\rceil+\left\lceil y\right\rceil-2<\left\lceil x+y\right\rceil\) which is a strict inequality of integers, so \(\left\lceil x\right\rceil+\left\lceil y\right\rceil-1\leq\left\lceil x+y\right\rceil\).
We can repeatedly apply the inequality in Proposition 2.7 to obtain
**Corollary 2.8**.: _If \(x\) is a real number and \(a\) is a positive integer then_
\[a\lceil x\rceil\leq\left\lceil ax\right\rceil+a-1.\]
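These identities are elementary, but since they are used repeatedly in the bounds below, a small numerical sweep is a convenient sanity check; the values sampled here are exactly representable in binary, so the floating point `ceil`/`floor` calls agree with the exact ones.

```python
from math import ceil, floor

# Sanity sweep over Propositions 2.4-2.7 and Corollary 2.8, using only
# multiples of 1/4 and 1/8 so that all arithmetic below is exact.
for m in range(-8, 9):
    for n in range(1, 9):
        for t in range(-16, 17):
            x, y = t / 4, (t - 5) / 8
            assert ceil((ceil(x) + m) / n) == ceil((x + m) / n)   # Prop. 2.4
            assert ceil(m / n) == floor((m - 1) / n) + 1          # Prop. 2.5
            assert ceil(-x) == -floor(x)                          # Prop. 2.6
            assert ceil(x) + ceil(y) - 1 <= ceil(x + y)           # Prop. 2.7
            assert n * ceil(x) <= ceil(n * x) + n - 1             # Cor. 2.8
```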
## 3 General bounds
A useful property of robust power domination is the subadditivity of the parameter with respect to \(k\). This idea is established in the next three statements.
**Proposition 3.1**.: _Let \(k\geq 0\). For any graph \(G\), \(\tilde{\gamma}_{P}^{k}\left(G\right)+1\leq\tilde{\gamma}_{P}^{k+1}\left(G\right)\)._
Proof.: Consider a \(\tilde{\gamma}_{P}^{k+1}\)-set, \(S\), of \(G\). Let \(v\in S\). Create \(S^{\prime}=S\setminus\{v\}\), that is, \(S^{\prime}\) is \(S\) with one fewer PMU at \(v\). Observe that for any \(F^{\prime}\subseteq S^{\prime}\) with \(\left|F^{\prime}\right|=k\), we have \(F^{\prime}\cup\{v\}\subseteq S\) and \(\left|F^{\prime}\cup\{v\}\right|=k+1\). Hence \(S\setminus\left(F^{\prime}\cup\{v\}\right)\) is a power dominating set of \(G\). Thus, for any such \(F^{\prime}\), we have \(\left(S\setminus\{v\}\right)\setminus F^{\prime}=S^{\prime}\setminus F^{\prime}\) is a power dominating set of \(G\). Therefore, \(S^{\prime}\) is a \(k\)-robust power dominating set of \(G\) of size \(\left|S\right|-1\).
Proposition 3.1 can be applied repeatedly to obtain the next result.
**Corollary 3.2**.: _Let \(k\geq 0\) and \(j\geq 1\). For any graph \(G\),_
\[\tilde{\gamma}_{P}^{k}\left(G\right)+j\leq\tilde{\gamma}_{P}^{k+j}\left(G \right).\]
Corollary 3.2 implies the lower bound in the next proposition. The upper bound follows from taking \(k+1\) copies of any minimum power dominating set for \(G\) to form a \(k\)-rPDS.
**Proposition 3.3**.: _Let \(k\geq 0\). For any graph \(G\),_
\[\gamma_{P}\left(G\right)+k\leq\tilde{\gamma}_{P}^{k}\left(G\right)\leq(k+1) \gamma_{P}\left(G\right).\]
Observe that if \(\gamma_{P}\left(G\right)=1\) for any graph \(G\), both Observation 2.2 and Proposition 3.3 demonstrate that \(\tilde{\gamma}_{P}^{k}\left(G\right)=k+1\).
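The bounds of this section can be explored computationally on very small graphs. The following brute-force sketch (our own, exponential-time) reuses `power_dominates` and `is_k_robust_pds` from Section 2; the star of Figure 1 illustrates the case \(\gamma_{P}\left(G\right)=1\) just mentioned.

```python
from itertools import combinations, combinations_with_replacement

def power_domination_number(adj):
    """Brute-force gamma_P(G): the size of a smallest power dominating set."""
    V = list(adj)
    for r in range(1, len(V) + 1):
        if any(power_dominates(adj, S) for S in combinations(V, r)):
            return r

def robust_power_domination_number(adj, k):
    """Brute-force tilde{gamma}_P^k(G): the size of a smallest k-rPDS."""
    V = list(adj)
    size = 1
    while True:
        if any(is_k_robust_pds(adj, S, k)
               for S in combinations_with_replacement(V, size)):
            return size
        size += 1

# The star on 16 vertices: gamma_P = 1, so tilde{gamma}_P^k = k + 1.
star = {0: set(range(1, 16)), **{i: {0} for i in range(1, 16)}}
assert power_domination_number(star) == 1
assert robust_power_domination_number(star, 1) == 2
```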
Haynes et al. observed in [3, Observation 4] that in a graph with maximum degree at least three, a minimum power dominating set can be chosen in which each vertex has degree at least 3. We observe that this is the same for robust power domination.
**Observation 3.4**.: _Let \(k\geq 0\). If \(G\) is a connected graph with \(\Delta(G)\geq 3\), then \(G\) contains a \(\tilde{\gamma}_{P}^{k}\)-set in which every vertex has degree at least 3._
A _terminal path_ from a vertex \(v\) in \(G\) is a path from \(v\) to a vertex \(u\) such that \(\deg\left(u\right)=1\) and every internal vertex on the path has degree 2. A _terminal cycle_ from a vertex \(v\) in \(G\) is a cycle \(v,u_{1},u_{2},\ldots,u_{\ell},v\) in which \(\deg_{G}\left(u_{i}\right)=2\) for \(i=1,\ldots,\ell\).
**Proposition 3.5**.: _Let \(k\geq 0\) and let \(G\) be a connected graph with \(\Delta(G)\geq 3\). Let \(S\) be a \(\tilde{\gamma}_{P}^{k}\)-set in which every vertex has degree at least 3. Any vertex \(v\in S\) that has at least two terminal paths from \(v\) must have \(\#\mathrm{PMU}\left(v\right)=k+1\). Any vertex \(v\in S\) that has at least one terminal cycle must have \(\#\mathrm{PMU}\left(v\right)=k+1\)._
Proof.: Let \(v\) be a vertex in \(S\) and suppose that \(v\) has two terminal paths or a terminal cycle. All of the vertices in the terminal paths or terminal cycle have degree 1 or 2 and so are not in \(S\). Thus, there are at least two neighbors of \(v\) which can only be observed via \(v\). As \(v\) can only observe both of these neighbors via the domination step, it must be the case that \(\#\mathrm{PMU}\left(v\right)=k+1\).
Zhao, Kang, and Chang [6] defined the family of graphs \(\mathcal{T}\) to be those graphs obtained by taking a connected graph \(H\) and for each vertex \(v\in V(H)\) adding two vertices, \(v^{\prime}\) and \(v^{\prime\prime}\), and two edges \(vv^{\prime}\) and \(vv^{\prime\prime}\), with the edge \(v^{\prime}v^{\prime\prime}\) optional. The complete bipartite graph \(K_{3,3}\) is the graph with vertex set \(X\cup Y\) with \(|X|=|Y|=3\) and edge set \(E=\{xy:x\in X,y\in Y\}\).
**Theorem 3.6**.: [6, Theorem 3.] _If \(G\) is a connected graph on \(n\geq 3\) vertices then \(\gamma_{P}\left(G\right)\leq\frac{n}{3}\) with equality if and only if \(G\in\mathcal{T}\cup\{K_{3,3}\}\)._
This gives an upper bound for \(\tilde{\gamma}_{P}^{k}\left(G\right)\) in terms of the size of the vertex set and equality conditions, as demonstrated in the next corollary.
**Corollary 3.7**.: _Let \(G\) be a connected graph with \(n\geq 3\) vertices. Then \(\tilde{\gamma}_{P}^{k}\left(G\right)\leq(k+1)\frac{n}{3}\) for \(k\geq 0\). When \(k=0\), this is an equality if and only if \(G\in\mathcal{T}\cup\{K_{3,3}\}\). When \(k\geq 1\), this is an equality if and only if \(G\in\mathcal{T}\)._
Proof.: The upper bound is given by Proposition 3.3 and Theorem 3.6. From these results, we need only consider \(\mathcal{T}\cup\{K_{3,3}\}\) for equality. The \(k=0\) case follows directly from the power domination result. Let \(k\geq 1\).
First consider \(G\in\mathcal{T}\), constructed from \(H\). Note that \(\Delta\left(G\right)\geq 3\), so there exists a \(\tilde{\gamma}_{P}^{k}\)-set, say \(S\), in which every vertex has degree at least 3, so every vertex in \(S\) is a vertex of \(H\). For each \(v\in V(H)\), \(\deg_{G}\left(v\right)\geq 3\) and there are
either two terminal paths (if \(v^{\prime}v^{\prime\prime}\not\in E(G)\)) or a terminal cycle (if \(v^{\prime}v^{\prime\prime}\in E(G)\)). By Proposition 3.5, each \(v\in V(H)\) must have at least \(k+1\) PMUs.
Finally, consider \(K_{3,3}\). Note that \(\gamma_{P}\left(K_{3,3}\right)=2\). We will see in Theorem 4.3 that \(\tilde{\gamma}_{P}^{k}\left(K_{3,3}\right)=k+\left\lfloor\frac{k}{5}\right\rfloor +2<2(k+1)\) for \(k\geq 1\).
**Definition 3.8**.: For any graph \(G\), define \(s\left(G\right)\) to be the size of the largest set \(A\subseteq V\) such that for any \(B\subseteq A\) with \(\left|B\right|=\gamma_{P}\left(G\right)\), \(B\) is a power dominating set of \(G\).
Observe that \(\gamma_{P}\left(G\right)\leq s\left(G\right)\). For example, the star graph \(S_{16}\) shown in Figure 1 has \(\gamma_{P}\left(S_{16}\right)=s\left(S_{16}\right)=1\). The complete bipartite graph \(K_{3,3}\) has \(\gamma_{P}\left(K_{3,3}\right)=2\) and \(s\left(K_{3,3}\right)=6\) as any two vertices of \(K_{3,3}\) form a power dominating set.
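For small graphs, \(s\left(G\right)\) can also be computed by brute force; the sketch below (reusing `power_dominates` and `power_domination_number` from the earlier sketches) reproduces the values just quoted for \(K_{3,3}\).

```python
from itertools import combinations

def complete_bipartite(n, m):
    """Adjacency dict of K_{n,m} with parts labeled ('x', i) and ('y', j)."""
    X = [("x", i) for i in range(n)]
    Y = [("y", j) for j in range(m)]
    adj = {v: set() for v in X + Y}
    for x in X:
        for y in Y:
            adj[x].add(y)
            adj[y].add(x)
    return adj

def s_of_G(adj):
    """Brute-force s(G): the largest A such that every gamma_P(G)-subset
    of A is a power dominating set (Definition 3.8)."""
    V = list(adj)
    g = power_domination_number(adj)
    for size in range(len(V), g - 1, -1):
        for A in combinations(V, size):
            if all(power_dominates(adj, B) for B in combinations(A, g)):
                return size

K33 = complete_bipartite(3, 3)
assert power_domination_number(K33) == 2 and s_of_G(K33) == 6
```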
**Proposition 3.9**.: _For any graph \(G\) and \(k\geq 0\), if \(s\left(G\right)\geq k+\gamma_{P}\left(G\right)\) then \(\tilde{\gamma}_{P}^{k}\left(G\right)=k+\gamma_{P}\left(G\right)\)._
Proof.: If \(s\left(G\right)\geq k+\gamma_{P}\left(G\right)\), then there exists a set \(S\) of size at least \(k+\gamma_{P}\left(G\right)\) so that any \(\gamma_{P}\left(G\right)\) elements of \(S\) form a power dominating set of \(G\). Thus, any \(\gamma_{P}\left(G\right)+k\) elements of \(S\) form a \(k\)-rPDS of \(G\) of size \(\gamma_{P}\left(G\right)+k\) and so \(\tilde{\gamma}_{P}^{k}\left(G\right)\leq\gamma_{P}\left(G\right)+k\). By the lower bound in Proposition 3.3, \(\tilde{\gamma}_{P}^{k}\left(G\right)\geq\gamma_{P}\left(G\right)+k\).
When \(s\left(G\right)>\gamma_{P}\left(G\right)\geq 2\), the following upper bound sometimes improves the upper bound from Proposition 3.3.
**Theorem 3.10**.: _If \(s\left(G\right)>\gamma_{P}\left(G\right)\geq 2\), then \(\tilde{\gamma}_{P}^{k}\left(G\right)\leq\left\lceil\frac{s\left(G\right) \left(k+\gamma_{P}\left(G\right)-1\right)}{s\left(G\right)-\gamma_{P}\left(G \right)+1}\right\rceil\) for \(k\geq 1\)._
Proof.: Let \(A=\left\{v_{1},v_{2},\ldots,v_{s\left(G\right)}\right\}\subseteq V\) be a maximum set such that any subset of size \(\gamma_{P}\left(G\right)\) is a power dominating set of \(G\). For what follows, let \(p=\left\lceil\frac{s\left(G\right)\left(k+\gamma_{P}\left(G\right)-1\right)}{s \left(G\right)-\gamma_{P}\left(G\right)+1}\right\rceil\). Construct \(S=\left\{v_{1}^{m_{1}},v_{2}^{m_{2}},\ldots,v_{s\left(G\right)}^{m_{s\left(G \right)}}\right\}\) where
\[m_{1}=\left\lceil\frac{p}{s\left(G\right)}\right\rceil\text{ and }m_{i}=\min \left\{\left\lceil\frac{p}{s\left(G\right)}\right\rceil,p-\sum_{j=1}^{i-1}m_{j }\right\}\text{ for }i\geq 2.\]
In order to show that \(S\) is a \(k\)-rPDS of \(G\), we will show that \(p-k\geq\left(\gamma_{P}\left(G\right)-1\right)\left\lceil\frac{p}{s\left(G \right)}\right\rceil+1.\) Assume this is true. Then whenever we have \(p\) PMUs and \(k\) fail, there are at least \(\left(\gamma_{P}\left(G\right)-1\right)\left\lceil\frac{p}{s\left(G\right)} \right\rceil+1\) working PMUs. As each vertex has at most \(\left\lceil\frac{p}{s\left(G\right)}\right\rceil\) PMUs, there are at least \(\gamma_{P}\left(G\right)\) vertices of \(A\) that must have at least one PMU remaining and so form a power dominating set.
We prove the equivalent statement
\[p-k-\left(\gamma_{P}\left(G\right)-1\right)\left\lceil\frac{p}{s\left(G\right) }\right\rceil\geq 1.\]
Observe that by Proposition 2.4,
\[p-k-\left(\gamma_{P}\left(G\right)-1\right)\left\lceil\frac{p}{s\left(G\right)} \right\rceil=p-k-\left(\gamma_{P}\left(G\right)-1\right)\left\lceil\frac{k+ \gamma_{P}\left(G\right)-1}{s\left(G\right)-\gamma_{P}\left(G\right)+1}\right\rceil.\]
Then by Corollary 2.8 and simplifying, we see that
\[p-k-\left(\gamma_{P}\left(G\right)-1\right)\left\lceil\frac{p}{s \left(G\right)}\right\rceil\geq p-k-\left(\left\lceil\frac{(k+\gamma_{P}\left( G\right)-1)(\gamma_{P}\left(G\right)-1)}{s\left(G\right)-\gamma_{P}\left(G \right)+1}\right\rceil+\gamma_{P}\left(G\right)-2\right)\] \[=p-\left(\left\lceil\frac{(k+\gamma_{P}\left(G\right)-1)(\gamma_ {P}\left(G\right)-1)}{s\left(G\right)-\gamma_{P}\left(G\right)+1}\right\rceil +\gamma_{P}\left(G\right)-2+k\right)\] \[=p-\left\lceil\frac{(k+\gamma_{P}\left(G\right)-1)(\gamma_{P} \left(G\right)-1)+(s\left(G\right)-\gamma_{P}\left(G\right)+1)(\gamma_{P} \left(G\right)-2+k)}{s\left(G\right)-\gamma_{P}\left(G\right)+1}\right\rceil\] \[=p-\left\lceil\frac{s\left(G\right)\left(k+\gamma_{P}\left(G \right)-1\right)-s\left(G\right)+\gamma_{P}\left(G\right)-1}{s\left(G\right) -\gamma_{P}\left(G\right)+1}\right\rceil\] \[=1.\qed\]
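The multiset built in this proof is easy to write down explicitly. The sketch below (names ours; integer ceiling division is used to avoid floating point) returns the multiplicities \(m_{1},\ldots,m_{s\left(G\right)}\) given a set \(A\) as in Definition 3.8.

```python
def ceil_div(a, b):
    """Ceiling of a/b for integers with b > 0."""
    return -(-a // b)

def theorem_3_10_multiset(A, gamma_p, k):
    """PMU multiplicities of the k-rPDS constructed in the proof of
    Theorem 3.10, for a list A of s(G) vertices as in Definition 3.8."""
    s = len(A)
    p = ceil_div(s * (k + gamma_p - 1), s - gamma_p + 1)
    cap = ceil_div(p, s)             # no vertex receives more than ceil(p/s)
    counts, placed = {}, 0
    for v in A:
        m = min(cap, p - placed)     # the m_i of the proof
        if m == 0:
            break                    # remaining vertices receive no PMU
        counts[v] = m
        placed += m
    return counts

# For K_{3,3} (s = 6, gamma_P = 2) and k = 1 this places a single PMU on
# each of three vertices, for a total of 3 PMUs.
print(theorem_3_10_multiset(["x1", "x2", "x3", "x4", "x5", "x6"], 2, 1))
```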
To see the difference between the upper bounds given in Theorem 3.10 and Proposition 3.3, let \(s\left(G\right)=\gamma_{P}\left(G\right)+r\). Then Theorem 3.10 becomes
\[\tilde{\gamma}_{P}^{k}\left(G\right) \leq\left\lceil\frac{(\gamma_{P}\left(G\right)+r)(k+\gamma_{P} \left(G\right)-1)}{r+1}\right\rceil\] \[=\left\lceil\frac{\gamma_{P}\left(G\right)(k+1)+\gamma_{P}\left( G\right)(\gamma_{P}\left(G\right)-2)+r(k+\gamma_{P}\left(G\right)-1)}{r+1}\right\rceil\] \[=\gamma_{P}\left(G\right)(k+1)+\left\lceil\frac{-kr(\gamma_{P} \left(G\right)-1)+\gamma_{P}\left(G\right)(\gamma_{P}\left(G\right)-2)-r}{r+1} \right\rceil.\]
This means that the bound from Theorem 3.10 improves the upper bound in Proposition 3.3 when the second term above is negative. Since \(r\geq 1\) and \(\gamma_{P}\left(G\right)\geq 2\), the second term is negative as \(k\) approaches infinity. Thus for \(s\left(G\right)>\gamma_{P}\left(G\right)\geq 2\), we see that there exists some \(k^{\prime}\) such that for every \(k\geq k^{\prime}\geq 1\), the bound from Theorem 3.10 is an improvement over the upper bound from Proposition 3.3. It will be useful to look specifically at Theorem 3.10 when \(\gamma_{P}\left(G\right)=2\).
**Corollary 3.11**.: _If \(s\left(G\right)>\gamma_{P}\left(G\right)=2\), then \(\tilde{\gamma}_{P}^{k}\left(G\right)\leq\left\lceil\frac{s\left(G\right)(k+1 )}{s\left(G\right)-1}\right\rceil\) for \(k\geq 1\)._
To illustrate the use of the bounds in this section, we use these results to determine the \(k\)-robust power domination number for \(K_{3,3}\) and \(K_{3,b}\) for sufficiently large \(b\) in Section 4.
## 4 Complete bipartite graphs
The _complete bipartite graph_, \(K_{n,m}\) is the graph with vertex set \(V=X\cup Y\) such that \(\left|X\right|=n\), \(\left|Y\right|=m\), and edge set \(E=\left\{xy:x\in X,y\in Y\right\}\). We examine the case when \(n=m=3\), then the case when \(n=3\) and \(m\geq 4\). Next, we find bounds for the \(n,m\geq 4\) case, which combine to provide a result for the general \(n=m\) case. We will need the following observation.
**Observation 4.1**.: _Suppose the parts of \(K_{n,m}\) are \(X\) and \(Y\), such that \(\left|X\right|=n\) and \(\left|Y\right|=m\). If \(S\) is a power dominating set of \(K_{n,m}\), then \(S\) contains: at least 1 vertex from \(X\) and 1 vertex from \(Y\); or at least \(n-1\) vertices from \(X\); or at least \(m-1\) vertices from \(Y\)._
We will also need the Reverse Pigeonhole Principle. This follows from the Pigeonhole Principle, which states that if \(k\) objects are distributed among \(n\) sets, then one set must have at least \(\left\lceil\frac{k}{n}\right\rceil\) objects.
**Remark 4.2** (Reverse Pigeonhole Principle).: If \(k\) objects are distributed among \(n\) sets, then one set must have at most \(\left\lfloor\frac{k}{n}\right\rfloor\) objects.
To see why Remark 4.2 holds, observe that if \(n\) sets each had at least \(\left\lfloor\frac{k}{n}\right\rfloor+1\) objects, then, for \(k=qn+r\) such that \(q\geq 0\) and \(0\leq r\leq n-1\), we would have
\[k\geq\sum_{i=1}^{n}\left(\left\lfloor\frac{k}{n}\right\rfloor+1\right)=\sum_{ i=1}^{n}\left(q+1\right)=qn+n>k,\]
which is a contradiction.
We begin with \(\tilde{\gamma}_{P}^{k}\left(K_{3,3}\right)\).
**Theorem 4.3**.: _Let \(k\geq 0\). Let \(K_{3,3}\) be the complete bipartite graph with parts \(X=\left\{x_{1},x_{2},x_{3}\right\}\) and \(Y=\left\{x_{4},x_{5},x_{6}\right\}\). Then_
\[\tilde{\gamma}_{P}^{k}\left(K_{3,3}\right)=k+\left\lfloor\frac{k}{5}\right \rfloor+2.\]
Proof.: We begin by observing that any two vertices of \(K_{3,3}\) form a power dominating set, and so \(s\left(K_{3,3}\right)=6\). First we prove the lower bound \(k+\left\lfloor\frac{k}{5}\right\rfloor+2\leq\tilde{\gamma}_{P}^{k}\left(K_{3, 3}\right)\) where \(k=5q\). Assume for contradiction that there exists a \(\tilde{\gamma}_{P}^{5q}\)-set \(S\) of size \(5q+\left\lfloor\frac{5q}{5}\right\rfloor+1=6q+1\). By the Pigeonhole Principle, some \(x_{i}\) contains at least \(\left\lceil\frac{6q+1}{6}\right\rceil=q+1\) of the PMUs. Observe that \(\left|S\right|-5q=q+1\). Thus, we can remove \(5q\) PMUs so that some vertex \(x_{i}\) contains all remaining PMUs. This is a contradiction, as \(\gamma_{P}\left(K_{3,3}\right)=2\). Thus, \(\tilde{\gamma}_{P}^{5q}\left(K_{3,3}\right)\geq 6q+2=5q+\left\lfloor\frac{5q}{5} \right\rfloor+2\), as desired. The lower bounds when \(k\) is not a multiple of \(5\) then follow by Corollary 3.2.
For the upper bound, observe that by Corollary 3.11,
\[\tilde{\gamma}_{P}^{k}\left(K_{3,3}\right) \leq\left\lceil\frac{6(k+1)}{5}\right\rceil\] \[=k+1+\left\lceil\frac{k+1}{5}\right\rceil.\]
Then by Proposition 2.5, we see that
\[\tilde{\gamma}_{P}^{k}\left(K_{3,3}\right) \leq k+1+\left\lfloor\frac{k+1-1}{5}\right\rfloor+1\] \[=k+\left\lfloor\frac{k}{5}\right\rfloor+2.\qed\]
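In code the closed form is a one-liner, and for very small \(k\) it can be cross-checked against the brute-force `robust_power_domination_number` and `complete_bipartite` helpers sketched in Sections 2 and 3 (this is slow, so the check below stops at \(k=1\)).

```python
def k_robust_pd_K33(k):
    """Theorem 4.3: tilde{gamma}_P^k(K_{3,3}) = k + floor(k/5) + 2."""
    return k + k // 5 + 2

K33 = complete_bipartite(3, 3)
for k in (0, 1):
    assert robust_power_domination_number(K33, k) == k_robust_pd_K33(k)
```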
Theorem 4.3 gives an example of a graph for which Theorem 3.10 is tight and the structure of the \(\tilde{\gamma}_{P}^{k}\)-set suggested by the proof of Theorem 3.10 is shown in Figure 2. A larger family of complete bipartite graphs follows the same pattern, as shown in Theorem 4.4.
**Theorem 4.4**.: _Let \(k\geq 0\). Let \(K_{3,m}\) be the complete bipartite graph with parts \(X=\left\{x_{1},x_{2},x_{3}\right\}\) and \(Y=\left\{y_{1},y_{2},\ldots,y_{m}\right\}\). For \(m\geq\left\lfloor\frac{k}{3}\right\rfloor+3\),_
\[\tilde{\gamma}_{P}^{k}\left(K_{3,m}\right)=k+\left\lfloor\frac{k}{3}\right \rfloor+2.\]
Proof.: First we prove the lower bound when \(k=3q\). Assume for eventual contradiction that there exists a \(\tilde{\gamma}_{P}^{3q}\)-set, \(S\), of size \(3q+\left\lfloor\frac{3q}{3}\right\rfloor+1=4q+1\). Let \(y=\sum_{i=1}^{m}\#\text{PMU}\left(y_{i}\right)\). Then
\[\#\text{PMU}\left(x_{1}\right)+\#\text{PMU}\left(x_{2}\right)+\#\text{PMU} \left(x_{3}\right)+y=4q+1.\]
By the Pigeonhole Principle, we see that one of \(x_{1},x_{2},x_{3}\) or \(y\) must represent at least
\[\left\lceil\frac{4q+1}{4}\right\rceil=q+1\]
of the PMUs. Observe that \(\left|S\right|-3q=q+1\). Thus, we can remove \(3q\) PMUs such that either:
1. All \(q+1\) remaining PMUs are on a single \(x_{i}\), which is a contradiction as this is only one vertex and \(\gamma_{P}\left(K_{3,m}\right)=2\); or
2. All \(q+1\) remaining PMUs are on the \(y_{i}\) vertices. In order for the PMUs on the \(y_{i}\)'s to form a power dominating set of \(K_{3,m}\), \(m-1\) of the \(y_{i}\)'s must have a PMU. However, we also have that \[m-1 \geq\left\lfloor\frac{3q}{3}\right\rfloor+3-1\] \[=q+2.\] This means that at least \(q+2\) PMUs are needed but after \(3q\) PMUs are removed only \(q+1\) PMUs remain, a contradiction.
Therefore, \(\tilde{\gamma}_{P}^{3q}\left(K_{3,m}\right)>4q+1\). Hence, \(\tilde{\gamma}_{P}^{3q}\left(K_{3,m}\right)\geq 4q+2=3q+\left\lfloor\frac{3q}{3} \right\rfloor+2\), as desired. The lower bounds for the remaining cases then follow by Corollary 3.2.
For the upper bound, the case of \(k=0\) is given by the power domination number. If \(m=3\) we need only consider when \(k=0,1,2\); this is covered by Theorem 4.3. If \(m\geq 4\) and \(k\geq 1\), we have \(s\left(K_{3,m}\right)=4\). Then by Corollary 3.11,
\[\tilde{\gamma}_{P}^{k}\left(K_{3,m}\right) \leq\left\lceil\frac{4(k+1)}{3}\right\rceil\] \[=k+1+\left\lceil\frac{k+1}{3}\right\rceil\]
and by Proposition 2.5,
\[\tilde{\gamma}_{P}^{k}\left(K_{3,m}\right) \leq k+1+\left\lfloor\frac{k+1-1}{3}\right\rfloor+1\] \[=k+\left\lfloor\frac{k}{3}\right\rfloor+2.\qed\]
Next, we examine complete bipartite graphs with at least \(4\) vertices in each part, beginning with the lower bound in the following theorem.
**Theorem 4.5**.: _Let \(m\geq n\geq 4\) and \(k\geq 1\). Then,_
\[\tilde{\gamma}_{P}^{k}\left(K_{n,m}\right)\geq\begin{cases}2(k+1)-4\left\lfloor \frac{k}{n+2}\right\rfloor&k\equiv i\pmod{n+2},\ 0\leq i\leq n-4\\ k+n-1+(n-2)\left\lfloor\frac{k}{n+2}\right\rfloor&\text{otherwise}\end{cases}\]
Proof.: Let \(K_{n,m}\) be the complete bipartite graph with \(V(K_{n,m})=X\cup Y\), leaving the sizes of \(X\) and \(Y\) generic. Let \(k=q(n+2)+i\) for \(q\geq 0\) and \(0\leq i<n+2\). Observe the following:
\[i =k-q(n+2)\] \[=k-\left\lfloor\frac{k}{n+2}\right\rfloor(n+2)\]
First suppose that \(0\leq i\leq n-4\). For the sake of contradiction, assume that there exists a \(k\)-robust power dominating set \(S\) such that \(\left|S\right|=2k+1-4\left\lfloor\frac{k}{n+2}\right\rfloor\). Thus,
\[\left|S\right| =2k+1+(n-2-(n+2))\left\lfloor\frac{k}{n+2}\right\rfloor\] \[=k+1+k-(n+2)\left\lfloor\frac{k}{n+2}\right\rfloor+(n-2)\left \lfloor\frac{k}{n+2}\right\rfloor\] \[=k+1+i+(n-2)\left\lfloor\frac{k}{n+2}\right\rfloor\] \[=q(n+2)+i+1+i+q(n-2)\] \[=q(n+2)+2i+1+q(n-2)\] \[=2qn+2i+1.\]
Without loss of generality, assume that \(\#\text{PMU}\left(X\right)\leq\#\text{PMU}\left(Y\right)\). By Remark 4.2, \(\#\text{PMU}\left(X\right)\leq\left\lfloor\frac{2qn+2i+1}{2}\right\rfloor=qn+i\leq k\). Observe that if \(q=0\), we have equality. Let \(B\subseteq S\), such that \(\left|B\right|=qn+i\) and \(B\) contains all the PMUs from vertices of \(X\) (and possibly some from vertices of \(Y\)). Then, by definition, \(S\setminus B\) contains only PMUs from vertices of \(Y\), and \(\left|S\setminus B\right|=qn+i+1\). Note that if \(q=0\), we are distributing \(i+1\leq n-3\) PMUs amongst the vertices of \(Y\), and by Observation 4.1, \(S\setminus B\) is not a power dominating set. By Remark 4.2, there exists a vertex \(y_{1}\in Y\), such that
\[\#\text{PMU}_{S\setminus B}\left(y_{1}\right) \leq\left\lfloor\frac{qn+i+1}{\left|Y\right|}\right\rfloor\] \[\leq\left\lfloor\frac{qn+n-4+1}{n}\right\rfloor\] \[=\left\lfloor\frac{qn}{n}+\frac{n-3}{n}\right\rfloor\] \[=q+\left\lfloor\frac{n-3}{n}\right\rfloor\] \[=q.\]
Let \(B^{\prime}\subseteq S\setminus B\) such that \(|B^{\prime}|=q\) and \(B^{\prime}\) contains all the PMUs from \(y_{1}\) (and possibly some from other vertices of \(Y\)). Then, by definition, \(S\setminus(B\cup B^{\prime})\) contains PMUs only from vertices of \(Y\setminus\{y_{1}\}\) and \(|S\setminus(B\cup B^{\prime})|=q(n-1)+i+1\). Thus, by Remark 4.2 there exists a vertex, \(y_{2}\in Y\), such that
\[\#\mathrm{PMU}_{S\setminus(B\cup B^{\prime})}\left(y_{2}\right) \leq\left\lfloor\frac{q(n-1)+i+1}{|Y|-1}\right\rfloor\] \[\leq\left\lfloor\frac{q(n-1)+n-4+1}{n-1}\right\rfloor\] \[=\left\lfloor\frac{q(n-1)}{n-1}+\frac{n-3}{n-1}\right\rfloor\] \[=q+\left\lfloor\frac{n-3}{n-1}\right\rfloor\] \[=q.\]
Let \(B^{\prime\prime}\subseteq S\setminus(B\cup B^{\prime})\) such that \(|B^{\prime\prime}|=q\) and \(B^{\prime\prime}\) contains all the PMUs from \(y_{2}\) (and possibly from other vertices of \(Y\)). Then, by definition, \(S\setminus(B\cup B^{\prime}\cup B^{\prime\prime})\) contains only PMUs from vertices of \(Y\setminus\{y_{1},y_{2}\}\). Note that \(|B\cup B^{\prime}\cup B^{\prime\prime}|=k\) and by Observation 4.1, \(S\setminus(B\cup B^{\prime}\cup B^{\prime\prime})\) is not a power dominating set. Therefore, no such \(S\) is a \(k\)-robust power dominating set.
For the second case, suppose that \(n-3\leq i\leq n+1\). By Proposition 3.3, to show that \(\tilde{\gamma}_{P}^{k}\left(K_{n,m}\right)\geq k+n-1+(n-2)\left\lfloor\frac{k} {n+2}\right\rfloor\), we need only show it for \(i=n-3\). For the sake of contradiction, suppose there exists a \(k\)-robust power dominating set, \(S\), such that \(|S|=k+n-2+(n-2)\left\lfloor\frac{k}{n+2}\right\rfloor\). Thus,
\[|S| =q(n+2)+n-3+n-2+q(n-2)\] \[=2qn+2n-5\]
Without loss of generality, assume that \(\#\mathrm{PMU}\left(X\right)\leq\#\mathrm{PMU}\left(Y\right)\). By Remark 4.2,
\[\#\mathrm{PMU}\left(X\right) \leq\left\lfloor\frac{2qn+2n-5}{2}\right\rfloor\] \[=qn+n-3\] \[\leq q(n+2)+n-3\] \[=k.\]
Observe that if \(q=0\), we have equality. Let \(B\subseteq S\), such that \(|B|=qn+n-3\) and \(B\) contains all the PMUs from \(X\) (and possibly some from \(Y\)). Then by definition, \(S\setminus B\) contains only PMUs from vertices of \(Y\) and \(|S\setminus B|=qn+n-2\). Note that if \(q=0\), we are distributing \(n-2\) PMUs amongst the vertices of \(Y\), and by Observation 4.1, \(S\setminus B\) is not a power dominating set. By Remark 4.2,
there exists a vertex, \(y_{1}\in Y\), such that
\[\#\mathrm{PMU}_{S\setminus B}\left(y_{1}\right) \leq\left\lfloor\frac{qn+n-1}{|Y|}\right\rfloor\] \[\leq\left\lfloor\frac{qn+n-1}{n}\right\rfloor\] \[\leq q.\]
Let \(B^{\prime}\subseteq S\setminus B\), such that \(|B^{\prime}|=q\) and \(B^{\prime}\) contains all the PMUs from \(y_{1}\) (and possibly some from other vertices of \(Y\)). Then, by definition, \(S\setminus(B\cup B^{\prime})\) contains only PMUs from vertices of \(Y\setminus\{y_{1}\}\) and \(|S\setminus(B\cup B^{\prime})|=q(n-1)+n-2\). Thus, by Remark 4.2, there exists a vertex, \(y_{2}\in Y\), such that,
\[\#\mathrm{PMU}_{S\setminus(B\cup B^{\prime})}\left(y_{2}\right) \leq\left\lfloor\frac{q(n-1)+n-2}{|Y|-1}\right\rfloor\] \[\leq\left\lfloor\frac{q(n-1)+n-2}{n-1}\right\rfloor\] \[\leq q.\]
Let \(B^{\prime\prime}\subseteq S\setminus(B\cup B^{\prime})\) such that \(|B^{\prime\prime}|=q\) and \(B^{\prime\prime}\) contains all of the PMUs from \(y_{2}\) (and possibly from other vertices of \(Y\)). Then by definition, \(S\setminus(B\cup B^{\prime}\cup B^{\prime\prime})\) has only PMUs from vertices of \(Y\setminus\{y_{1},y_{2}\}\). Note that \(|B\cup B^{\prime}\cup B^{\prime\prime}|=k\) and by Observation 4.1, \(S\setminus(B\cup B^{\prime}\cup B^{\prime\prime})\) is not a power dominating set. Therefore, no such \(S\) is a \(k\)-robust power dominating set, a contradiction for the second case.
Note that the bound found in Theorem 4.5 is not always tight. For example, we observe that \(\tilde{\gamma}_{P}^{4}\left(K_{4,6}\right)>7\). To see this, suppose for contradiction that there exists a \(k\)-robust power dominating set \(S\) such that \(|S|=7\). Let the parts of \(K_{4,6}\) be \(X\) and \(Y\) but leave the sizes of \(X\) and \(Y\) generic. Then, one side, say \(X\), has at most \(3\) PMUs. If \(Y\) has \(6\) vertices, then removing all PMUs from \(X\) leaves \(4\) PMUs on \(Y\), and so \(S\) is not a \(4\)-rPDS. Thus, \(Y\) has \(4\) vertices and \(X\) has \(6\) vertices. We then consider if we remove \(4\) PMUs from \(Y\). If all of the \(3\) remaining PMUs are on \(X\), then \(S\) is not a \(4\)-rPDS. Thus, the PMUs must be some remaining PMUs on \(Y\) and some on \(X\), and so \(X\) has at most \(2\) PMUs. Thus, \(Y\) has either \(5\), \(6\), or \(7\) PMUs. In any case, we can remove \(2\) PMUs so that there are only \(5\) PMUs on \(Y\). However, we can still remove \(2\) PMUs, which leaves us with at most \(2\) vertices of \(Y\) that contain PMUs and no vertices of \(X\) containing PMUs, which is not a power dominating set of \(K_{4,6}\). Therefore, we see that \(\tilde{\gamma}_{P}^{4}\left(K_{4,6}\right)>7\).
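This inequality can also be confirmed by exhaustive search, reusing the `complete_bipartite` and `is_k_robust_pds` sketches from the earlier sections; the check below enumerates all \(\binom{16}{7}\) multisets of seven PMUs on \(K_{4,6}\) and takes noticeably longer than the previous examples.

```python
from itertools import combinations_with_replacement

K46 = complete_bipartite(4, 6)
# No multiset of 7 PMUs survives every pattern of 4 failures.
assert not any(
    is_k_robust_pds(K46, S, 4)
    for S in combinations_with_replacement(list(K46), 7)
)
```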
Next, we provide an upper bound for complete bipartite graphs.
**Theorem 4.6**.: _Let \(m\geq n\geq 4\) and \(k\geq 1\). Then_
\[\tilde{\gamma}_{P}^{k}\left(K_{n,m}\right)\leq\begin{cases}2(k+1)+(m-n-4) \left\lfloor\frac{k}{n+2}\right\rfloor&k\equiv i\ (\mathrm{mod}\ n+2),\ 0\leq i\leq n-4\\ k+m-1+(m-2)\left\lfloor\frac{k}{n+2}\right\rfloor&\text{otherwise}\end{cases}\]
Proof.: Suppose \(V(K_{n,m})=X\cup Y\) are the parts of \(K_{n,m}\), such that \(X=\{x_{1},\ldots,x_{n}\}\) and \(Y=\{y_{1},\ldots,y_{m}\}\). Let \(k=q(n+2)+i\) for \(q\geq 0\) and \(0\leq i\leq n+1\). Observe that \(i=k-q(n+2)=k-(n+2)\left\lfloor\frac{k}{n+2}\right\rfloor\).
For the first case, suppose that \(0\leq i\leq n-4\). To show that \(\tilde{\gamma}_{P}^{k}\left(K_{n,m}\right)\leq 2(k+1)+(m-n-4)\left\lfloor\frac{k}{n+2}\right\rfloor\), it suffices to construct a \(k\)-robust power dominating set of this size. First, we observe that
\[2(k+1)+(m-n-4)\left\lfloor\frac{k}{n+2}\right\rfloor =k+2+k-(n+2)\left\lfloor\frac{k}{n+2}\right\rfloor+(m-2)\left \lfloor\frac{k}{n+2}\right\rfloor\] \[=k+2+i+(m-2)\left\lfloor\frac{k}{n+2}\right\rfloor\] \[=q(n+2)+i+2+i+q(m-2)\] \[=q(n+2)+2i+2+q(m-2)\] \[=q(n+m)+2(i+1)\]
Let
\[S=\{x_{1}^{q+1},\ldots,x_{i+1}^{q+1},x_{i+2}^{q},\ldots,x_{n}^{q},y_{1}^{q+1},\ldots,y_{i+1}^{q+1},y_{i+2}^{q},\ldots,y_{m}^{q}\}\]
Observe that \(|S|=q(n+m)+2(i+1)\). We will now show that \(S\) is a \(k\)-robust power dominating set. Let \(B\subseteq S\) such that \(|B|=k=q(n+2)+i\). We have two cases:
1. \(S\setminus B\) contains vertices from both \(X\) and \(Y\). By Observation 4.1, \(S\setminus B\) is a power dominating set.
2. \(S\setminus B\) contains vertices only from \(X\) (or only from \(Y\)). For generality, call this side \(Z\) with size \(z\). Observe that \(|S\setminus B|=q(n+m)+2(i+1)-q(n+2)-i=q(m-2)+i+2\). By Observation 4.1, for \(S\setminus B\) to be a power dominating set, it must have PMUs on at least \(z-1\) vertices. Assume for contradiction that at most \(z-2\) vertices of \(Z\) have PMUs; then, \[|S\setminus B| \leq(q+1)(i+1)+q(z-3-i)\] \[\leq(q+1)(i+1)+q(m-3-i)\] \[=qi+i+q+1+qm-3q-qi\] \[=q(m-2)+i+1\] \[<|S\setminus B|,\] a contradiction.
Thus \(S\setminus B\) is a power dominating set, and since \(B\) was arbitrary, \(S\) is a \(k\)-robust power dominating set for \(K_{n,m}\).
For the second case, suppose that \(n-3\leq i\leq n+1\). We now show that \(\tilde{\gamma}_{P}^{k}\left(K_{n,m}\right)\leq k+m-1+(m-2)\left\lfloor\frac{k }{n+2}\right\rfloor\) by constructing a \(k\)-robust power
dominating set of size \(k+m-1+(m-2)\left\lfloor\frac{k}{n+2}\right\rfloor\). By Proposition 3.3, we need only provide the construction for \(i=n+1\). Then,
\[k+m-1+(m-2)\left\lfloor\frac{k}{n+2}\right\rfloor =q(n+2)+n+1+m-1+q(m-2)\] \[=q(n+m)+(n+m)\] \[=(q+1)(n+m)\]
Let \(S=\{x_{1}^{q+1},\ldots,x_{n}^{q+1},y_{1}^{q+1},\ldots,y_{m}^{q+1}\}\). Observe that \(|S|=(q+1)(n+m)\). We will now show that \(S\) is a \(k\)-robust power dominating set. Let \(B\subseteq S\) such that \(|B|=k=q(n+2)+n+1\). We have two cases:
1. \(S\setminus B\) contains vertices from both \(X\) and \(Y\). By Observation 4.1, \(S\setminus B\) is a power dominating set.
2. \(S\setminus B\) contains vertices only from \(X\) or only from \(Y\). For generality, call this side \(Z\) with size \(z\). Observe that \(|S\setminus B|=(q+1)(n+m)-q(n+2)-n-1=qm+m-2q-1=q(m-2)+m-1\). By Observation 4.1, for \(S\setminus B\) to be a power dominating set, it must have PMUs on at least \(z-1\) vertices. Assume for contradiction that at most \(z-2\) vertices of \(Z\) have PMUs; then, \[|S\setminus B| \leq(q+1)(z-2)\] \[\leq(q+1)(m-2)\] \[\leq qm+m-2q-2\] \[<qm+m-2q-1\] \[=|S\setminus B|\] a contradiction.
Thus \(S\setminus B\) is a power dominating set, and since \(B\) was arbitrary, \(S\) is a \(k\)-robust power dominating set for \(K_{n,m}\).
We can combine Theorem 4.5 and Theorem 4.6 to find a complete characterization for balanced complete bipartite graphs, as shown in the following corollary. Moreover, the proof of Theorem 4.6 yields the construction of \(k\)-robust power dominating sets for \(K_{n,n}\).
**Corollary 4.7**.: _Let \(n\geq 4\) and \(k\geq 1\). Then_
\[\tilde{\gamma}_{P}^{k}\left(K_{n,n}\right)=\begin{cases}2(k+1)-4\left\lfloor \frac{k}{n+2}\right\rfloor&k\equiv i\ (\mathrm{mod}\ n+2),\ 0\leq i\leq n-3\\ k+n-1+(n-2)\left\lfloor\frac{k}{n+2}\right\rfloor&\text{otherwise.}\end{cases}\]
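As a quick numerical companion (again our own sketch rather than part of the paper), the piecewise formula of Corollary 4.7 can be evaluated directly:

```python
# Sketch: evaluate the Corollary 4.7 formula for the balanced case K_{n,n}.
def robust_pd_Knn(n, k):
    assert n >= 4 and k >= 1
    q, i = divmod(k, n + 2)          # k = q(n+2) + i with 0 <= i <= n+1
    if i <= n - 3:
        return 2 * (k + 1) - 4 * q
    return k + n - 1 + (n - 2) * q

print(robust_pd_Knn(5, 9))           # 16: first branch, since 9 = 1*7 + 2
```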
## 5 Trees
In this section, we establish the \(k\)-robust power domination number for trees. A _tree_ is an acyclic connected graph. A _spider_ is a tree with at most one vertex of degree \(3\) or more. A _spider cover_ of a tree \(T\) is a partition of \(V\), say \(\{V_{1},\ldots,V_{\ell}\}\), such that \(T[V_{i}]\) is a spider for all \(i\). The _spider number_ of a tree \(T\), denoted by \(\operatorname{sp}\left(T\right)\), is the minimum number of parts in a spider cover. A _rooted tree_ is a tree in which one vertex is called the _root_. Suppose two vertices \(u\) and \(v\) are in a rooted tree with root \(r\). If \(u\) is on the \(r-v\) path, we say that \(v\) is a _descendant_ of \(u\) and \(u\) is an _ancestor_ of \(v\). If \(u\) and \(v\) are also neighbors, \(v\) is a _child_ of \(u\) and \(u\) is the _parent_ of \(v\).
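For readers who wish to experiment, the spider condition is straightforward to test; the sketch below (ours, not from the paper) assumes the input adjacency list already describes a tree and only checks the degree condition.

```python
# Sketch: a tree is a spider iff it has at most one vertex of degree >= 3.
def is_spider(adj):
    return sum(1 for nbrs in adj.values() if len(nbrs) >= 3) <= 1

# A star with one subdivided leg is a spider.
star = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0, 4], 4: [3]}
print(is_spider(star))   # True
```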
**Theorem 5.1**.: [3, Theorem 12] _For any tree \(T\), \(\gamma_{P}\left(T\right)=\operatorname{sp}\left(T\right)\)._
**Theorem 5.2**.: _For any tree \(T\), \(\tilde{\gamma}_{P}^{k}\left(T\right)=\left(k+1\right)\operatorname{sp}\left(T\right)\)._
Proof.: By Proposition 3.3, we have that \(\tilde{\gamma}_{P}^{k}\left(T\right)\leq\left(k+1\right)\gamma_{P}\left(T\right)\) for any tree \(T\), and by Theorem 5.1, \(\operatorname{sp}\left(T\right)=\gamma_{P}\left(T\right)\). Therefore, \(\tilde{\gamma}_{P}^{k}\left(T\right)\leq\left(k+1\right)\operatorname{sp}\left(T\right)\).
To prove that \(\tilde{\gamma}_{P}^{k}\left(T\right)\geq\left(k+1\right)\operatorname{sp} \left(T\right)\), we proceed by contradiction. That is, assume that \(\tilde{\gamma}_{P}^{k}\left(T\right)<\left(k+1\right)\operatorname{sp}\left(T\right)\). Since \(\operatorname{sp}\left(T\right)=\gamma_{P}\left(T\right)\), any \(\tilde{\gamma}_{P}^{k}\)-set must contain at least \(\operatorname{sp}\left(T\right)\) distinct vertices and by the pigeonhole principle there exists at least one vertex in the set with at most \(k\) PMUs.
Let \(S\) be a \(\tilde{\gamma}_{P}^{k}\)-set of \(T\) such that \(\deg\left(v\right)\geq 3\) for each \(v\in S\), and choose \(S\) to have the smallest number of vertices \(x\) with \(\#\text{PMU}\left(x\right)\leq k\). Root \(T\) at a vertex \(r\in S\).
Let \(A=\{v\in S:0<\#\text{PMU}\left(v\right)\leq k\}\). Let \(v\in A\) with the property that \(d(v,r)=\max\{d(u,r):u\in A\}\). Observe that for any descendant \(u\) of \(v\), either \(\#\text{PMU}\left(u\right)\geq k+1\) or \(\#\text{PMU}\left(u\right)=0\). Let \(w\) be the nearest ancestor of \(v\) such that \(\deg\left(w\right)\geq 3\). Let \(S^{\prime}=\left(S\cup\{w^{\#\text{PMU}_{S}\left(v\right)}\}\right)\setminus\{v^{\#\text{PMU}_{S}\left(v\right)}\}\), so that for each \(x\in S^{\prime}\) with \(x\neq w\), \(\#\text{PMU}_{S^{\prime}}\left(x\right)=\#\text{PMU}_{S}\left(x\right)\) and \(\#\text{PMU}_{S^{\prime}}\left(w\right)=\#\text{PMU}_{S}\left(v\right)+\#\text{PMU}_{S}\left(w\right)\). That is, \(S^{\prime}\) is the same as \(S\) except that the PMUs on \(v\) have been moved to \(w\).
We will show that \(S^{\prime}\) is a \(k\)-robust power dominating set. First observe that since \(S\setminus\{v^{\#\text{PMU}_{S}\left(v\right)}\}\) is a power dominating set, \(S^{\prime}\) is also a power dominating set. Let \(B^{\prime}\subseteq S^{\prime}\) such that \(\left|B^{\prime}\right|=k\). We have two cases: \(\#\text{PMU}_{B^{\prime}}\left(w\right)\leq\#\text{PMU}_{S}\left(v\right)\), or \(\#\text{PMU}_{B^{\prime}}\left(w\right)>\#\text{PMU}_{S}\left(v\right)\).
First, assume that \(\#\text{PMU}_{B^{\prime}}\left(w\right)\leq\#\text{PMU}_{S}\left(v\right)\). Let
\[B=\left(B^{\prime}\setminus\{w^{\#\text{PMU}_{B^{\prime}}\left(w\right)}\} \right)\cup\{v^{\#\text{PMU}_{B^{\prime}}\left(w\right)}\}.\]
Note that \(S\setminus B\) removes the same PMUs as \(S^{\prime}\setminus B^{\prime}\) with the exception that the PMUs removed from \(w\) in \(S^{\prime}\setminus B^{\prime}\), are instead removed from \(v\) in \(S\setminus B\). Since \(\#\text{PMU}_{B^{\prime}}\left(w\right)\leq\#\text{PMU}_{S}\left(v\right)\), we see that \(\#\text{PMU}_{S\setminus B}\left(w\right)=0\).
Now, if \(w\notin S^{\prime}\setminus B^{\prime}\), then \(S\setminus B=S^{\prime}\setminus B^{\prime}\). Thus \(S^{\prime}\setminus B^{\prime}\) is a power dominating set. Next consider the case where \(w\in S^{\prime}\setminus B^{\prime}\). The ancestor \(w\), which is observed by virtue of being in \(S^{\prime}\setminus B^{\prime}\), will cause the observation of \(v\) and \(v\) can perform a zero forcing step to observe one descendant. Since \(\deg\left(v\right)\geq 3\), all
the descendants of \(v\), except for possibly one, must be observed by forces from descendants of \(v\) in \(S^{\prime}\setminus B^{\prime}\). To see this, recall that \(S\setminus\left\{v^{\#\mathrm{PMU}_{S}\left(v\right)}\right\}\) is a power dominating set. Since \(v\) is not in the power dominating set, \(v\) can only perform the zero forcing step. Since the only path from non-descendants of \(v\) to descendants of \(v\) is through \(v\), all descendants, except for possibly one descendant, must be observed by descendants of \(v\). Since \(\#\mathrm{PMU}_{S\setminus B}\left(x\right)=\#\mathrm{PMU}_{S^{\prime} \setminus B^{\prime}}\left(x\right)\) whenever \(x\) is a descendant of \(v\), \(S^{\prime}\setminus B^{\prime}\) will force all the descendants of \(v\).
Thus, any vertices that rely on \(\#\mathrm{PMU}_{S\setminus B}\left(v\right)>0\) to be observed in \(S\setminus B\), will have been observed by \(w\) or the descendants of \(v\). The remaining vertices will be observed by the same vertices as they would have been by \(S\setminus B\). Therefore, \(S^{\prime}\setminus B^{\prime}\) is a power dominating set.
For the second case, assume that \(\#\mathrm{PMU}_{B^{\prime}}\left(w\right)>\#\mathrm{PMU}_{S}\left(v\right)\). Let
\[B=(B^{\prime}\setminus\left\{w^{\#\mathrm{PMU}_{S}\left(v\right)}\right\}) \cup\left\{v^{\#\mathrm{PMU}_{S}\left(v\right)}\right\}.\]
Note that \(S\setminus B\) removes the same PMUs as \(S^{\prime}\setminus B^{\prime}\) with the exception that any PMUs removed from \(w\) are first removed from \(v\), and then the remaining are removed from \(w\). Thus, \(v\notin S\setminus B\), \(S^{\prime}\setminus B^{\prime}=S\setminus B\), and therefore \(S^{\prime}\setminus B^{\prime}\) is a power dominating set.
In either case, \(S^{\prime}\setminus B^{\prime}\) is a power dominating set. If \(w\in S\), then we have found \(S^{\prime}\) with fewer vertices for which \(\#\mathrm{PMU}\left(x\right)\leq k\) for \(x\in S^{\prime}\), which is a contradiction. If \(w\not\in S\), we may repeat the process. Since \(r\in S\), we know that eventually the process terminates with the same contradiction.
## 6 Concluding remarks
PMU-defect-robust power domination allows us to place multiple PMUs at the same location and consider the consequences if some of these PMUs fail. There are many questions left to examine in future work.
Is there an improvement to the lower bound given in Proposition 3.3 for \(\gamma_{P}\left(G\right)>1\)? As \(K_{3,3}\) demonstrates in Theorem 4.3, it seems likely that there is a better lower bound based on the number of vertices and the power domination number that utilizes the pigeonhole principle to show that the lower bound must increase at certain values of \(k\).
We have begun the study of \(k\)-robust power domination for certain families of graphs but work remains to be done. We have determined the \(k\)-robust power domination number for trees. For complete bipartite graphs, we still have the case of \(\tilde{\gamma}_{P}^{k}\left(K_{3,b}\right)\) for \(4\leq b<\left\lfloor\frac{k}{3}\right\rfloor+3\). The question of \(\tilde{\gamma}_{P}^{k}\left(K_{a,b}\right)\) for unbalanced complete bipartite graphs when \(a,b\geq 4\) is also open and preliminary observations indicate an extensive case analysis for this problem.
## Acknowledgments
B. Bjorkman was supported by the US Department of Defense's Science, Mathematics and Research for Transformation (SMART) Scholarship for Service Program. E. Conrad was supported by the Autonomy Technology Research (ATR) Center Summer Program.
|
2304.13719
|
To the theory of decaying turbulence
|
We have found an infinite dimensional manifold of exact solutions of the
Navier-Stokes loop equation for the Wilson loop in decaying Turbulence in
arbitrary dimension $d >2$. This solution family is equivalent to a fractal
curve in complex space $\mathbb C^d$ with random steps parametrized by $N$
Ising variables $\sigma_i=\pm 1$, in addition to a rational number
$\frac{p}{q}$ and an integer winding number $r$, related by $\sum \sigma_i = q
r$. This equivalence provides a dual theory describing a strong turbulent phase
of the Navier-Stokes flow in $\mathbb R_d$ space as a random geometry in a
different space, like ADS/CFT correspondence in gauge theory. From a
mathematical point of view, this theory implements a stochastic solution of the
unforced Navier-Stokes equations. For a theoretical physicist, this is a
quantum statistical system with integer-valued parameters, satisfying some
number theory constraints. Its long-range interaction leads to critical
phenomena when its size $N \rightarrow \infty$ or its chemical potential $\mu
\rightarrow 0$. The system with fixed $N$ has different asymptotics at odd and
even $N\rightarrow \infty$, but the limit $\mu \rightarrow 0$ is well defined.
The energy dissipation rate is analytically calculated as a function of $\mu$
using methods of number theory. It grows as $\nu/\mu^2$ in the continuum limit
$\mu \rightarrow 0$, leading to anomalous dissipation at $\mu \propto
\sqrt{\nu} \to 0$. The same method is used to compute all the local vorticity
distribution, which has no continuum limit but is renormalizable in the sense
that infinities can be absorbed into the redefinition of the parameters. The
small perturbation of the fixed manifold satisfies the linear equation we
solved in a general form. This perturbation decays as $t^{-\lambda}$, with a
continuous spectrum of indexes $\lambda$ in the local limit $\mu \to 0$.
|
Alexander Migdal
|
2023-04-26T17:55:57Z
|
http://arxiv.org/abs/2304.13719v11
|
# Decaying Turbulence as a Fractal Curve
###### Abstract
We develop a quantitative microscopic theory of decaying Turbulence by studying the dimensional reduction of the Navier-Stokes loop equation for the velocity circulation. We have found an infinite dimensional manifold of solutions of the Navier-Stokes loop equation[1, 2] for the Wilson loop in decaying Turbulence in arbitrary dimension \(d>2\). This family of solutions corresponds to a fractal curve in complex space \(\mathbb{C}^{d}\), described by an algebraic equation between consecutive positions plus a nonlinear periodicity condition. We derive the constrained SDE for the evolution of the fractal curve at a fixed moment of physical time as a function of an auxiliary stochastic time. We expect this stochastic process to cover our fixed manifold of the solutions of the decaying Turbulence. The energy density of the fluid decays as \(\mathcal{E}_{0}/t\), where \(\mathcal{E}_{0}\) is an initial dissipation rate. Presumably, we have found a new phase of extreme Turbulence yet to be observed in real or numerical experiments.
Turbulence, Fractal, Anomalous dissipation, Fixed point, Velocity circulation, Loop Equations
Footnote †: journal: Physics Letters A
## 0 Introduction
A while ago, we derived [1, 3] a functional equation for the so-called loop average [4, 5] or Wilson loop in Turbulence. The path to an exact solution by a dimensional reduction in this equation was proposed in the '93 paper [1] but has only now been explored.
At the time, we could not compare a theory with anything but crude measurements in physical and numerical experiments at modest Reynolds numbers. All these experiments agreed with the K41 scaling, so the exotic equation based on unjustified methods of quantum field theory was premature.
The specific prediction of the Loop equation, namely the Area law [1], could not be verified in DNS at the time with existing computer power.
The situation has changed over the last decades. No alternative microscopic theory based on the Navier-Stokes equation emerged, but our understanding of the strong turbulence phenomena grew significantly.
On the other hand, the loop equations technology in the gauge theory also advanced over the last decades. The correspondence between the loop space functionals and the original vector fields was better understood, and various solutions to the gauge loop equations were found.
In particular, the momentum loop equation was developed, similar to our momentum loop used below [6, 7, 8]. Recently, some numerical methods were found to solve loop equations beyond perturbation theory [9, 10].
The loop dynamics was extended to quantum gravity, where it was used to study nonperturbative phenomena [11, 12].
All these old and new developments made loop equations a major nonperturbative approach to gauge field theory.
So, it is time to revive the hibernating theory of the loop equations in Turbulence, where these equations are much simpler.
The latest DNS [13, 14, 15, 16] with Reynolds numbers of tens of thousands revealed and quantified violations of the K41 scaling laws. These numerical experiments are in agreement with so-called multifractal scaling laws [17].
However, as we argued in [2, 18], at those Reynolds numbers, the DNS cannot yet distinguish between pure scaling laws with anomalous dimension \(\zeta(n)\) and some algebraic function of the logarithm of scale \(\xi(n,\log r)\) modifying the K41 scaling.
Theoretically, we studied the loop equation in the confinement region (large circulation over large loop \(C\)), and we have justified the Area law, suggested back in '93 on heuristic arguments [1].
This law says that the tails of velocity circulation PDF in the confinement region are functions of the minimal area inside this loop.
It was verified in DNS four years ago [13], which triggered the further development of the geometric theory of turbulence [2; 14; 15; 16; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30].
In particular, the Area law was justified for flat and quadratic minimal surfaces [22], and an exact scaling law in confinement region \(\Gamma\propto\sqrt{Area}\) was derived [21]. The area law was verified with better precision in [14].
It was later conjectured in [18] that the dominant field configurations in extreme Turbulence are so-called Kelvinons, which were shown to solve stationary Navier-Stokes equations assuming the sparse distribution of vorticity structures.
These topological solitons of the Euler theory are built around a vortex sheet bounded by a singular vortex line. This vortex line is locally equivalent to the cylindrical Burgers vortex [31], with infinitesimal thickness in the limit of a large Reynolds number.
As we argued in [2; 18], the Kelvinon has an anomalous dissipation, surviving the strong turbulent limit. This dissipation is proportional to the square of constant circulation of the Burgers vortex times a line integral of the tangent component of the strain along the loop.
The Kelvinon minimizes the energy functional, with anomalous terms coming from the Burgers core of the vortex line. There is also a constant scale factor \(Z\) in the representation of the Kelvin vorticity in terms of spherical Clebsch variables:
\[\vec{\omega}=\nicefrac{{1}}{{2}}Z\,e_{abc}S_{a}\vec{\nabla}S_{b}\times\vec{\nabla}S_{c}=\vec{\nabla}\phi_{1}\times\vec{\nabla}\phi_{2}; \tag{1}\] \[S_{1}^{2}+S_{2}^{2}+S_{3}^{2}=1; \tag{2}\] \[\phi_{2}=\arg\left(S_{1}+\imath S_{2}\right);\;\phi_{1}=ZS_{3}; \tag{3}\]
In that paper, the constant \(Z\) was related to the Kolmogorov energy dissipation density and the boundary value of the \(S_{3}\) variable at the loop \(C\).
The anomalous Hamiltonian [2; 18] explicitly violated the K41 scaling by the logarithmic terms \(\log Z/\nu\) in the region of small loops \(C\). This region resembles the asymptotically free QCD. The logarithmic terms were summed up by RG equation with running coupling constant logarithmically small in this region.
These exciting developments explain and quantitatively describe many interesting phenomena [2] but do not provide a complete microscopic theory covering the full inertial range of Turbulence without simplifying assumptions of the sparsity of vortex structures.
Moreover, while the Kelvinon (presumably) solves the stationary Navier-Stokes equations, it **does not** solve the loop equations for the following reason.
The loop equation assumes that the velocity field is **independent** of the loop \(C\). In this case, the circulation \(\oint_{C}v_{\alpha}dr_{\alpha}\) variations in the loop functional by the shape \(C\) of the loop can be reduced to the Navier-Stokes equation.
Otherwise, the variation would also involve the variation of the velocity field \(\oint_{C}\delta v_{\alpha}dr_{\alpha}\).
This problem does not invalidate the Kelvinon theory as an ideal gas of random vortex rings sparsely distributed in a turbulent flow.
The loop functional is not needed for that statistical theory, and the stationary solution of the Navier-Stokes equation is sufficient. The shape of the loop and the vortex sheet inside would become random variables influenced by a background strain like in the pure vortex sheet solutions [2].
These objections, however, prevent the Kelvinon gas model from being a complete theory of strong isotropic Turbulence. This model is merely an approximation of the full theory.
In the present work, we develop the theory free of these assumptions by exactly solving the loop equations for decaying Turbulence.
## 1 Loop equation
### Loop operators
We introduced the loop equation in Lecture Series at Cargese and Chernogolovka Summer Schools [1].
Here is a summary for the new generation.
We write the Navier-Stokes equation as follows
\[\partial_{t}v_{\alpha} =v\partial_{\beta}\omega_{\beta\alpha}-v_{\beta}\omega_{\beta \alpha}-\partial_{\alpha}\Bigg{(}p+\frac{v_{\beta}^{2}}{2}\Bigg{)}; \tag{4}\] \[\partial_{\alpha}v_{\alpha} =0; \tag{5}\]
The Wilson loop average for the Turbulence
\[\Psi[\gamma,C]=\left\langle\exp\left(\frac{\imath\gamma}{\nu}\oint_{C}v_{ \alpha}dr_{\alpha}\right)\right\rangle \tag{6}\]
treated as a function of time and a functional of the periodic function \(C:r_{\alpha}=C_{\alpha}(\theta);\ \theta\in(0,2\pi)\) (not necessarily a single closed loop), satisfies the following functional equation
\[n\partial_{t}\Psi=\mathcal{H}_{C}\Psi; \tag{7a}\] \[\mathcal{H}_{C}=\mathcal{H}_{C}^{(1)}+\mathcal{H}_{C}^{(2)}\] (7b) \[\mathcal{H}_{C}^{(1)}=\nu\gamma\oint_{C}dr_{\alpha}\partial_{ \beta}\partial_{\alpha\beta}(r);\] (7c) \[\mathcal{H}_{C}^{(2)}=\gamma\oint_{C}dr_{\alpha}\partial_{\alpha \beta}(r)\vartheta_{\beta}(r);\] (7d) \[\partial_{\alpha\beta}=-\imath\frac{\nu}{\gamma}\frac{\delta}{ \delta\sigma_{\alpha\beta}}\] (7e) \[\vartheta_{\beta}(r)=\frac{1}{\partial_{\mu}^{2}}\partial_{\alpha }\partial_{\beta\alpha}(r) \tag{7f}\]
We added a dimensionless factor \(\gamma\) in the exponential compared to some previous definitions as an extra parameter of the Wilson loop. Without loss of generality, we shall assume that \(\gamma>0\). The negative \(\gamma\) corresponds to a complex conjugation of the Wilson loop.
In Abelian gauge theory, this would be the continuous electric charge. In turbulence theory, the Fourier transform of the Wilson loop by \(\gamma\) would produce the PDF for velocity circulation.
The statistical averaging \(\langle\dots\rangle\) corresponds to initial randomized data, to be specified later.
The area derivative \(\frac{\delta}{\delta\sigma_{\alpha\beta}}\) is related to the variation of the functional when the little closed loop \(\delta C\) is added
\[\Sigma_{\alpha\beta}(\delta C)\frac{\delta F[C]}{\delta\sigma_{\alpha\beta}( r)}=F[C+\delta C]-F[C]; \tag{8}\]
\[\Sigma_{\alpha\beta}(\delta C)=\frac{1}{2}\oint_{\delta C}r_{\alpha}dr_{\beta} \tag{9}\]
In the review, [1; 2], we present the explicit limiting procedure needed to define these functional derivatives in terms of finite variations of the loop while keeping it closed.
All the operators \(\partial_{\mu},\partial_{\alpha\beta},\vartheta_{\alpha}\) are expressed in terms of the spike operator
\[D_{\alpha}(\theta,\epsilon)=\int_{-\epsilon}^{+\epsilon}d\xi\Bigg{(}1-\frac{| \xi|}{\epsilon}\Bigg{)}\frac{\delta}{\delta C_{\alpha}(\theta+\xi)} \tag{10}\]
The area derivative operator can be regularized as
\[\Omega_{\alpha\beta}(\theta,\epsilon)=-\imath\frac{\nu}{\gamma}\frac{\delta}{\delta C_{\alpha}^{\prime}(\theta)}\int_{-\epsilon}^{\epsilon}d\xi\frac{\delta}{\delta C_{\beta}(\theta+\xi)}-\{\alpha\leftrightarrow\beta\}; \tag{11}\]
and velocity operator (with \(\delta,\epsilon\to 0^{+}\))
\[V_{\alpha}(\theta,\epsilon,\delta)=\frac{1}{D_{\mu}^{2}(\theta,\epsilon)}D_{ \beta}(\theta,\epsilon)\Omega_{\beta\alpha}(\theta,\delta); \tag{12}\]
In addition to the loop equation, every valid loop functional \(F[C]\) must satisfy the Bianchi constraint [4; 5]
\[\partial_{\alpha}\frac{\delta F[C]}{\delta\sigma_{\beta\gamma}(r)}+\mathbf{ cyclic}=0 \tag{13}\]
In three dimensions, it follows from identity \(\vec{\nabla}\cdot\vec{\omega}=0\); in general dimension \(d>3\), the dual vorticity \(\vec{\omega}\) is an antisymmetric tensor with \(d-2\) components. The divergence of this tensor equals zero identically.
However, for the loop functional, this restriction is not an identity; it reflects that this functional is a function of a circulation of some vector field, averaged by some set of parameters.
This constraint was analyzed in [2] in the confinement region of large loops, where it was used to predict the Area law. The area derivative of the area of some smooth surface inside a large loop reduces to a local normal vector. The Bianchi constraint is equivalent to the Plateau equation for a minimal surface (mean external curvature equals zero).
In the Navier-Stokes equation, we did NOT add artificial random forces, choosing instead to randomize the initial data for the velocity field.
These ad hoc random forces would lead to the potential term [2] in the loop Hamiltonian \(\mathcal{H}_{C}\), breaking certain symmetries needed for the dimensional reduction we study below.
With random initial data instead of time-dependent delta-correlated random forcing, we no longer describe the steady state (i.e., statistical equilibrium) but decaying Turbulence, which is also an interesting process, manifesting the same critical phenomena.
The energy is pumped in at the initial moment \(t=0\) and slowly dissipates over time, provided the viscosity is small enough, corresponding to the large Reynolds number we are studying.
### Dimensional reduction
The crucial observation in [1] was that the right side of the Loop equation, without random forcing, dramatically simplifies in functional Fourier space. The dynamics of the loop field can be reproduced in an Ansatz
\[\Psi[\gamma,C]=\left\langle\exp\left(\frac{\imath\gamma}{\nu}\oint dC_{\alpha}(\theta)P_{\alpha}(\theta)\right)\right\rangle \tag{14}\]
The difference with the original definition of \(\Psi[\gamma,C]\) is that our new function \(P_{\alpha}(\theta)\) depends directly on \(\theta\) rather then through the function \(v_{\alpha}(r)\) taken at \(r_{\alpha}=C_{\alpha}(\theta)\).
This transformation is the dimensional reduction \(d\Rightarrow 1\) we mentioned above. From the point of view of the loop functional, there is no need to deal with field \(v(r)\); one could take a shortcut.
The reduced dynamics must be equivalent to the Navier-Stokes dynamics of the original field. With the loop calculus developed above, we have all the necessary tools to build these reduced dynamics.
Let us stress an important point: the function \(\vec{P}(\theta,t)\) is **independent** of the loop \(C\). As we shall see later, it is a random variable with a universal distribution in functional space.
This independence removes our objection in the Introduction to the Kelvin theory and any other Navier-Stokes stationary solutions with a singularity at fixed loop \(C\) in space.
The functional derivative, acting on the exponential in (14), could be replaced by the derivative \(P^{\prime}\) as follows
\[\frac{\delta}{\delta C_{\alpha}(\theta)}\leftrightarrow-\frac{\imath\gamma}{\nu}P^{\prime}_{\alpha}(\theta) \tag{15}\]
The equation for \(P(\theta)\) as a function of \(\theta\) and also a function of time, reads:
\[\partial_{t}P_{\alpha}=\big{(}\nu D_{\beta}-V_{\beta}\big{)}\Omega_{\beta\alpha} \tag{16}\]
where the operators \(V,D,\Omega\) should be regarded as ordinary numbers, with the following definitions.
The spike derivative \(D\) in the above equation
\[D_{\alpha}(\theta,\epsilon)=-\frac{\imath\gamma}{\nu}\int_{-1}^{1}d\mu\ \text{sgn}(\mu)P_{\alpha}(\theta+\epsilon\mu) \tag{17}\]
The vorticity (11) and velocity (12) also become singular functionals of the trajectory \(P(\theta)\).
The first observation about this equation is that the viscosity factor cancels after the substitution (17).
As we shall see, the viscosity enters initial data so that at any finite time \(t\), the solution for \(P\) still depends on viscosity.
Another observation is that the spike derivative \(D(\theta,\epsilon)\) turns to the discontinuity \(\Delta P(\theta)=P(\theta^{+})-P(\theta^{-})\) in the limit \(\epsilon\to 0^{+}\)
\[D(\theta,0^{+})=-\frac{\imath\gamma}{\nu}\Delta P(\theta) \tag{18}\]
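Explicitly, the integral in (17) reduces for small \(\epsilon\) to
\[\int_{-1}^{1}d\mu\ \text{sgn}(\mu)P_{\alpha}(\theta+\epsilon\mu)=\int_{0}^{1}d\mu\left(P_{\alpha}(\theta+\epsilon\mu)-P_{\alpha}(\theta-\epsilon\mu)\right)\to P_{\alpha}(\theta^{+})-P_{\alpha}(\theta^{-}),\]
which is the discontinuity in (18).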
The relation of the operators in the QCD loop equation to the discontinuities of the momentum loop was noticed, justified, and investigated in [7,8].
In the Navier-Stokes theory, this relation provides the key to the exact solution.
In the same way, we find the limit for vorticity
\[\Omega_{\alpha\beta}(\theta,0^{+})=-\frac{\imath\gamma}{\nu}P_{\alpha\beta}(\theta); \tag{19}\] \[P_{\alpha\beta}(\theta)=\Delta P_{\alpha}(\theta)P_{\beta}(\theta)-\{\alpha\leftrightarrow\beta\}; \tag{20}\] \[P_{\alpha}(\theta)\equiv\frac{P_{\alpha}(\theta^{+})+P_{\alpha}(\theta^{-})}{2} \tag{21}\]
and velocity (skipping the common argument \(\theta\) )
\[V_{\alpha}=\frac{\Delta P_{\beta}}{\Delta P_{\mu}^{2}}P_{\beta\alpha}=P_{ \alpha}-\frac{\Delta P_{\alpha}\Delta P_{\beta}P_{\beta}}{\Delta P^{2}} \tag{22}\]
The Bianchi constraint is identically satisfied as it should
\[\Delta P_{\alpha}\big{(}\Delta P_{\beta}P_{\gamma}-\{\beta\leftrightarrow\gamma \}\big{)}+\text{cyclic}=0 \tag{23}\]
We arrive at a singular loop equation for \(P_{\alpha}(\theta)\)
\[\frac{\nu}{\gamma}\partial_{t}\vec{P}=-\gamma^{2}(\Delta\vec{P})^{2}\vec{P}+\Delta\vec{P}\Bigg{(}\gamma^{2}\vec{P}\cdot\Delta\vec{P}+\imath\gamma\Bigg{(}\frac{(\vec{P}\cdot\Delta\vec{P})^{2}}{\Delta\vec{P}^{2}}-\vec{P}^{2}\Bigg{)}\Bigg{)}; \tag{24}\]
This equation is complex due to the irreversible dissipation effects in the Navier-Stokes equation.
The viscosity dropped from the right side of this equation; it can be absorbed in units of time. Viscosity also enters the initial data, as we shall see in the next Section on the example of the random rotation.
However, the large-time asymptotic behavior of the solution would be universal, as it should be in the Turbulent flow.
We are looking for a degenerate fixed point [2], a fixed manifold with some internal degrees of freedom. The spontaneous stochastization corresponds to random values of these hidden internal parameters.
Starting with different initial data, the trajectory \(\vec{P}(\theta,t)\) would approach this fixed manifold at some arbitrary point and then keep moving around it, covering it with some probability measure.
The Turbulence problem is to find this manifold and determine this probability measure.
### Random global rotation
Possible initial data for the reduced dynamics were suggested in the original papers [1; 2]. The initial velocity field's simplest meaningful distribution is the Gaussian one, with energy concentrated in the macroscopic motions. The corresponding loop field reads (we set \(\gamma=1\) for simplicity in this section)
\[\Psi_{0}[C]=\exp\biggl{(}-\frac{1}{2\nu^{2}}\int_{C}d\vec{C}(\theta)\cdot d\vec {C}(\theta^{\prime})f\Bigl{(}\vec{C}(\theta)-\vec{C}(\theta^{\prime})\Bigr{)} \biggr{)} \tag{25}\]
where \(f(\vec{\tau})\) is the velocity correlation function
\[\bigl{\langle}v_{\alpha}(r)v_{\beta}(r^{\prime})\bigr{\rangle}=\Bigl{(}\delta _{\alpha\beta}-\partial_{\alpha}\partial_{\beta}\partial_{\mu}^{-2}\Bigr{)}f( r-r^{\prime}) \tag{26}\]
The potential part drops out in the closed loop integral.
The correlation function varies at the macroscopic scale, which means that we could expand it in the Taylor series
\[f(r-r^{\prime})\to f_{0}-f_{1}(r-r^{\prime})^{2}+\dots \tag{27}\]
The first term \(f_{0}\) is proportional to initial energy density,
\[\frac{1}{2}\Bigl{\langle}v_{\alpha}^{2}\Bigr{\rangle}=\frac{d-1}{2}f_{0} \tag{28}\]
and the second one is proportional to initial energy dissipation rate \(\mathcal{E}_{0}\)
\[f_{1}=\frac{\mathcal{E}_{0}}{2d(d-1)\nu} \tag{29}\]
where \(d=3\) is the dimension of space.
The constant term in (27) as well as \(r^{2}+r^{\prime 2}\) terms drop from the closed loop integral, so we are left with the cross-term \(rr^{\prime}\), which reduces to a full square
\[\Psi_{0}[C]\to\exp\biggl{(}-\frac{f_{1}}{\nu^{2}}\biggl{(}\oint dC_{\alpha}( \theta)C_{\beta}(\theta)\biggr{)}^{2}\biggr{)} \tag{30}\]
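For completeness, a short check of the step from (27) to (30): in the exponent of (25) the \(f_{0}\) term and the \(\vec{C}(\theta)^{2}+\vec{C}(\theta^{\prime})^{2}\) terms drop because \(\oint d\vec{C}=0\), and the surviving cross term factorizes into a full square,
\[-\frac{1}{2\nu^{2}}\oint\oint d\vec{C}(\theta)\cdot d\vec{C}(\theta^{\prime})\left(-f_{1}\right)\left(-2\vec{C}(\theta)\cdot\vec{C}(\theta^{\prime})\right)=-\frac{f_{1}}{\nu^{2}}\left(\oint dC_{\alpha}(\theta)C_{\beta}(\theta)\right)^{2},\]
which is exactly the exponent in (30).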
This distribution is almost Gaussian: it reduces to Gaussian one by extra integration
\[\Psi_{0}[C]\to\ const\ \int(d\phi)\exp\Bigl{(}-\phi_{\alpha\beta}^{2} \Bigr{)}\] \[\exp\Biggl{(}2\imath\frac{\sqrt{f_{1}}}{\nu}\phi_{\mu\nu}\oint dC_{ \mu}(\theta)C_{\nu}(\theta)\Biggr{)} \tag{31}\]
The integration here involves all \(\frac{d(d-1)}{2}=3\) independent \(\alpha<\beta\) components of the antisymmetric tensor \(\phi_{\alpha\beta}\). Note that this is ordinary integration, not the functional one.
The physical meaning of this \(\phi\) is the random uniform vorticity \(\hat{\omega}=\sqrt{f_{1}}\hat{\phi}\) at the initial moment.
However, as we see it now, this initial data represents a spurious fixed point unrelated to the turbulence problem.
It was discussed in our review paper [2]. The uniform global rotation represents a fixed point of the Navier-Stokes equation for arbitrary uniform vorticity tensor.
Gaussian integration by \(\phi\) keeps it as a fixed point of the Loop equation.
The right side of the Navier-Stokes equation vanishes at this special initial data so that the exact solution of the loop equation with this initial data equals its initial value (30).
Naturally, the time derivative of the momentum loop with the corresponding initial data will vanish as well.
It is instructive to look at the momentum trajectory \(P_{\alpha}(\theta)\) for this fixed point.
The functional Fourier transform [1; 2] leads to the following simple result for the initial values of \(P_{\alpha}(\theta)\).
In terms of Fourier harmonics, this initial data read
\[P_{\alpha}(\theta)=\sum_{n=1}^{\infty}P_{\alpha,n}\exp(\imath n \theta)+\bar{P}_{\alpha,n}\exp(-\imath n\theta); \tag{32}\] \[P_{\alpha,n}=\mathcal{N}(0,1)\forall\alpha,n>0;\] (33) \[\bar{P}_{\alpha,n}=\frac{4\sqrt{f_{1}}}{\imath\nu}\phi_{\alpha \beta}P_{\beta,n};\forall\beta,n>0;\] (34) \[\phi_{\alpha\beta}=-\phi_{\beta\alpha};\] (35) \[\phi_{\alpha\beta}=\mathcal{N}(0,1)\forall\alpha<\beta; \tag{36}\]
As for the constant part \(P_{\alpha,0}\) of \(P_{\alpha}(\theta)\), it is not defined, but it drops from equations by translational invariance.
Note that this initial data is not real, as \(\bar{P}_{\alpha,n}\neq P_{\alpha,n}^{*}\): the positive and negative harmonics are not complex conjugate to each other, leading to a complex Fourier transform. At fixed tensor \(\phi\) the correlations are
\[\bigl{\langle}P_{\alpha,n}P_{\beta,m}\bigr{\rangle}_{t=0}=\frac{4\sqrt{f_{1}}}{\imath m\nu}\delta_{-nm}\phi_{\alpha\beta}; \tag{37}\] \[\bigl{\langle}P_{\alpha}(\theta)P_{\beta}(\theta^{\prime})\bigr{\rangle}_{t=0}=2\imath\frac{\sqrt{f_{1}}}{\nu}\phi_{\alpha\beta}\operatorname{sign}(\theta^{\prime}-\theta); \tag{38}\]
This correlation function immediately leads to the uniform expectation value of the vorticity
\[\bigl{\langle}P_{\alpha}(\theta)\Delta P_{\beta}(\theta)\bigr{\rangle}=4 \imath\sqrt{f_{1}}\phi_{\alpha\beta};\ \forall\theta \tag{39}\]
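As an illustration (our own numerical sketch, not the author's code), the initial data (32)-(36) is easy to sample: the positive harmonics \(P_{\alpha,n}\) and the antisymmetric tensor \(\phi_{\alpha\beta}\) are independent standard Gaussians, and the negative harmonics follow from (34).

```python
import numpy as np

# Sketch: sample the initial momentum loop P_alpha(theta) of eqs. (32)-(36)
# in d = 3, for given viscosity nu and dissipation parameter f1.
def sample_initial_loop(n_max, nu, f1, seed=0):
    rng = np.random.default_rng(seed)
    P_pos = rng.standard_normal((n_max, 3))            # P_{alpha,n}, n = 1..n_max
    phi = np.zeros((3, 3))
    phi[np.triu_indices(3, k=1)] = rng.standard_normal(3)
    phi -= phi.T                                       # antisymmetric phi_{alpha beta}
    P_neg = (4 * np.sqrt(f1) / (1j * nu)) * P_pos @ phi.T   # eq. (34)
    def P(theta):
        n = np.arange(1, n_max + 1)[:, None]
        return (P_pos * np.exp(1j * n * theta)).sum(0) \
             + (P_neg * np.exp(-1j * n * theta)).sum(0)
    return P, phi

P, phi = sample_initial_loop(n_max=16, nu=0.01, f1=1.0)
print(P(0.3))        # a complex 3-vector: one point of the momentum loop
```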
The uniform constant vorticity kills the linear term of the Navier-Stokes equation in the original loop space, involving \(\partial_{\alpha}\hat{\Omega}_{\alpha\beta}=0\).
The nonlinear term \(\hat{V}_{\alpha}\hat{\Omega}_{\alpha\beta}\) vanishes in the coordinate loop space only after integration around the loop.
Here are the steps involved
\[\hat{V}_{\alpha}=\frac{1}{2}\hat{\Omega}_{\alpha\beta}C_{\beta}; \tag{40}\] \[\oint\hat{\Omega}_{\alpha\beta}C_{\beta}\,\hat{\Omega}_{\alpha\gamma}\,dC_{\gamma}\propto\hat{\Omega}_{\alpha\beta}\hat{\Omega}_{\alpha\gamma}\Sigma_{\beta\gamma}(C); \tag{41}\]
Here the tensor area \(\Sigma\) was defined in (9). It is an antisymmetric tensor; therefore its trace with a symmetric tensor \(\hat{\Omega}_{\alpha\beta}\hat{\Omega}_{\beta\gamma}\) vanishes.
This calculation demonstrates how an arbitrary uniform vorticity tensor satisfies the loop equation in coordinate loop space.
We expect the turbulent solution of the loop equation to be more general, with the local vorticity tensor at the loop becoming a random variable with some distribution for every point on the loop.
### Decay or fixed point
The absolute value of loop average \(\Psi[\gamma,C]\) stays below \(1\) at any time, which leaves two possible scenarios for its behavior at a large time.
**Decay:** \[\vec{P}\to 0;\ \Psi[\gamma,C]\to 1;\] (42) **Fixed Point:** \[\vec{P}\to\vec{P}_{\infty};\ \Psi[\gamma,C]\to\Psi_{\infty}[C];\] (43)
The **Decay** scenario in the nonlinear ODE (24) corresponds to the \(1/\sqrt{t}\) decrease of \(\vec{P}\).
Omitting the common argument \(\theta\), we get the following **exact** time-dependent solution (not just asymptotically, at \(t\to+\infty\)).
\[\vec{P}=\sqrt{\frac{\nu}{2(t+t_{0})}}\frac{\vec{F}}{\gamma}; \tag{44}\] \[\left((\Delta\vec{F})^{2}-1\right)\vec{F}=\Delta\vec{F}\left(\vec{F}\cdot\Delta\vec{F}+\frac{\imath}{\gamma}\left(\frac{(\vec{F}\cdot\Delta\vec{F})^{2}}{(\Delta\vec{F})^{2}}-\vec{F}^{2}\right)\right); \tag{45}\]
The **Fixed Point** would correspond to the vanishing right side of the momentum loop equation (24). Multiplying by \((\Delta\vec{P})^{2}\) and reducing the terms, we find a singular algebraic equation
\[\gamma^{2}(\Delta\vec{P})^{2}\left((\Delta\vec{P})^{2}\vec{P}-(\vec{P}\cdot\Delta\vec{P})\Delta\vec{P}\right)=\imath\gamma\,\Delta\vec{P}\left((\vec{P}\cdot\Delta\vec{P})^{2}-\vec{P}^{2}(\Delta\vec{P})^{2}\right); \tag{46}\]
The fixed point could mean self-sustained Turbulence, which would be too good to be true, violating the second law of Thermodynamics. Indeed, it is easy to see that this fixed point cannot exist.
The fixed point equation (46) is a linear relation between two vectors \(\vec{P},\Delta\vec{P}\) with coefficients depending on various scalar products. The generic solution is simply
\[\Delta\vec{P}=\lambda\vec{P}; \tag{47}\]
with the complex parameter \(\lambda\) to be determined from the equation (46).
This solution is degenerate: the fixed point equation is satisfied for arbitrary complex \(\lambda\).
The discontinuity vector \(\Delta\vec{P}\) aligned with the principal value \(\vec{P}\) corresponds to vanishing vorticity in (19), leading to a trivial solution of the loop equation \(\Psi[\gamma,C]=1\).
We are left with the decaying turbulence scenario (45) as the only remaining physical solution.
## 2 Fractal curve in complex space
### Random walk
One may first try a solution where the discontinuity vector is proportional to the principal value; however, such a solution does not exist:
\[\Delta\vec{F}\stackrel{?}{=}\lambda\vec{F}; \tag{48}\] \[\lambda^{2}\vec{F}^{2}-1\stackrel{?}{=}\lambda^{2}\vec{F}^{2}; \tag{49}\]
There is, however, another solution where the vectors \(\Delta\vec{F},\vec{F}\) are not aligned. This solution requires the following relations
\[(\Delta\vec{F})^{2}=1; \tag{50a}\] \[(2\vec{F}\cdot\Delta\vec{F}-\imath\gamma)^{2}+\gamma^{2}=4\vec{F}^ {2} \tag{50b}\]
These relations are very interesting. The complex numbers reflect irreversibility, and lack of alignment leads to vorticity distributed along the loop.
Also, note that this complex vector \(\vec{F}(\theta)\) is dimensionless, and the fixed point equation (50) is completely universal, up to a single dimensionless parameter \(\gamma\).
One can build this solution as a **Markov process** by the following method. Start with a complex vector \(\vec{F}(\theta=0)=\vec{F}_{0}\).
We compute the next values \(\vec{F}_{k}=\vec{F}\left(\frac{2\pi k}{N}\right)\) from the following discrete version of the discontinuity equations (50).
\[\left(\vec{F}_{k+1}-\vec{F}_{k}\right)^{2}=1; \tag{51a}\] \[\left(\vec{F}_{k+1}^{2}-\vec{F}_{k}^{2}-\imath\gamma\right)^{2}+ \gamma^{2}=\left(\vec{F}_{k+1}+\vec{F}_{k}\right)^{2} \tag{51b}\]
### Constraints imposed on a random step
A solution to these equations can be represented using a complex vector \(\vec{q}_{k}\) subject to two complex constraints
\[\vec{q}_{k}^{2}=1; \tag{52a}\] \[\left(2\vec{F}_{k}\cdot\vec{q}_{k}-\imath\gamma\right)^{2}=4\vec{ F}_{k}^{2}+\gamma(2\imath-\gamma) \tag{52b}\]
after which we can find the next value
\[\vec{F}_{k+1}=\vec{F}_{k}+\vec{q}_{k}; \tag{53}\]
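Indeed, substituting \(\vec{F}_{k+1}=\vec{F}_{k}+\vec{q}_{k}\) into (51b) and using \(\vec{q}_{k}^{2}=1\), one gets, with \(y=2\vec{F}_{k}\cdot\vec{q}_{k}\),
\[(y+1-\imath\gamma)^{2}+\gamma^{2}=4\vec{F}_{k}^{2}+2y+1\;\Longrightarrow\;(y-\imath\gamma)^{2}=4\vec{F}_{k}^{2}+\gamma(2\imath-\gamma),\]
which is precisely the second constraint (52b).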
We assume \(N\) steps, each with the angle shift \(\Delta\theta=\frac{2\pi}{N}\).
This recurrent sequence is a Markov process because each step only depends on the current position \(\vec{F}_{k}\). On top of this Markov process, there is a closure requirement \(\vec{F}_{N}=\vec{F}_{0}\).
This requirement represents a nonlinear restriction on all the variables \(\vec{F}_{k}\), which we discuss below.
With this discretization, the circulation can be expressed in terms of these steps
\[\oint\vec{F}(\theta)\cdot d\vec{C}(\theta)=-\oint\vec{C}(\theta)\cdot d\vec{F }(\theta)\Rightarrow-\sum_{k=0}^{N-1}\frac{\vec{C}_{k+1}+\vec{C}_{k}}{2}\cdot \vec{q}_{k} \tag{54}\]
Note that the complex unit vector is **not** defined with the Euclidean metric in six dimensions \(\left\langle\vec{A},\vec{B}\right\rangle=\textbf{Re }\vec{A}\cdot\textbf{Re }\vec{B}+\textbf{Im }\vec{A}\cdot\textbf{Im }\vec{B}\). Instead, we have a complex condition
\[\vec{q}^{2}=1 \tag{55}\]
which leads to **two** conditions between real and imaginary parts
\[(\textbf{Re }\vec{q})^{2}=1+(\textbf{Im }\vec{q})^{2}; \tag{56}\] \[\textbf{Re }\vec{q}\cdot\textbf{Im }\vec{q}=0; \tag{57}\]
In \(d\) dimensions, there are \(d-1\) complex parameters of the unit vector; with an extra linear constraint in (52a), there are now \(d-2\) free complex parameters at every step of our iteration, plus the discrete choice of the sign of the root in the solution of the quadratic equation.
### Closure condition
At the last step, \(k=N-1\), we need to get a closed loop \(\vec{F}_{N}=\vec{F}_{0}\). This is one more constraint on the complex vectors \(\vec{q}_{0},\ldots\vec{q}_{N-1}\)
\[\sum_{0}^{N-1}\vec{q}_{k}=0; \tag{58}\]
We use this complex vector constraint to fix the arbitrary initial complex vector \(\vec{F}_{0}\) as a function of all remaining parameters.
Looking ahead into the rest of our investigation, it turns out that the closure conditions fix only half of the \(2d\) real parameters in the initial point \(\vec{F}_{0}\). The remaining parameters are free zero modes of our fixed manifold.
Due to the closure of the space loop \(\vec{C}(\theta)\), the global translation of the momentum loop \(\vec{P}(\theta)\) leaves invariant the Wilson loop; therefore, the translational zero modes of the momentum loop do not lead to ambiguities.
However, the missing \(d\) out of \(2d\) parameters in \(\vec{F}_{0}\) mean that some other \(d\) parameters should be adjusted to provide the momentum loop closure.
We discuss this issue in the next Section, where we derive the SDE for the closed momentum loop in three dimensions. This SDE has explicit terms, which we computed in Appendix B and coded in _Mathematica_ in [32].
The adjustment of parameters we mentioned earlier yields three constraints on the Wiener process we derived.
### Mirror pairs of solutions
Return to the general study of the discrete loop equations (52).
There is a trivial solution to these equations at any even \(N\)
\[\vec{F}_{k}=\frac{(-1)^{k}\vec{q}}{2}; \tag{59}\] \[\vec{q}^{2}=1; \tag{60}\]
We reject this solution as unphysical: the corresponding vorticity equals zero, as all the vectors \(\vec{F}_{k}\) are aligned.
Our set of equations has certain mirror reflection symmetry
\[\vec{F}_{k}\leftrightarrow\vec{F}_{N-k}^{*} \tag{61}\]
Thus, the complex solutions come in mirror pairs \(\vec{F}_{k},\vec{F}_{N-k}^{*}\). The real solutions are only a particular case of the above trivial solution with real \(\vec{q}\).
Each nontrivial solution represents a periodic random walk in complex vector space \(\mathbb{C}^{d}\). The complex unit step \(\vec{q}_{k}\in\mathbb{C}^{d}\) depends on the current position \(\vec{F}_{k}\in\mathbb{C}^{d}\), or, equivalently, on the initial position \(\vec{F}_{0}\) plus the sum of the preceding steps.
We are interested in the limit of infinitely many steps \(N\to\infty\), corresponding to a closed fractal curve with a discontinuity at every point.
### The degenerate fixed point and its statistical meaning
This solution's degeneracy (fewer restrictions than the number of free parameters) is a welcome feature. One would expect this from a fixed point of the Hopf equation for the probability distribution.
In the best-known example, the microcanonical Gibbs distribution covers the energy surface with a uniform measure (ergodic hypothesis, widely accepted in Physics).
The parameters describing a point on this energy surface are not specified- in the case of an ideal Maxwell gas, these are arbitrary velocities of particles.
Likewise, the fixed manifold, corresponding to our fractal curve, is parametrized by \(N\) arbitrary local rotations, as discussed in the next Section.
This rich internal random structure of our fixed manifold, combined with its rotation and translation invariance in loop space \(C\), makes it an acceptable candidate for extreme isotropic Turbulence.
## 3 The structure of turbulent manifold
The simplest case where these equations have nontrivial solutions is the three-dimensional space. For smaller dimensions of space, there is only a degenerate solution with zero vorticity (a vanishing cross product \(\hat{\Omega}\propto\vec{P}\times\Delta\vec{P}\) ). Thus, we only consider \(d>2\) in the rest of the paper.
### Canonical form of a single step
The complex unit vector in \(d\) dimensions can be parametrized by rotation matrix and a unit real vector in \(d-2\) dimensions
\[\vec{q}=\hat{O}\cdot\vec{u}(\alpha_{1},\alpha_{2},\vec{w},\beta); \tag{62a}\] \[\vec{u}(\alpha_{1},\alpha_{2},\vec{w},\beta)=\{\alpha_{1},\alpha _{2}\vec{w},\imath\beta\};\] (62b) \[\vec{w}^{2}=1;\] (62c) \[\alpha_{1}^{2}+\alpha_{2}^{2}=1+\beta^{2}; \tag{62d}\]
The following steps lead to this canonical form. Take a general complex \(d\)-vector \(\vec{q}\) and choose the rotation \(\hat{O}\in O(d)\) to direct its imaginary part at the last axis \(d\).
The imaginary part of the condition \(\vec{q}^{2}=1\) implies the real part of this vector has zero component \(d\). This real vector in \(d-1\) dimensions can be parametrized as \(\{\alpha_{1},\alpha_{2}\vec{w}\}\) with the unit vector \(\vec{w}\in\mathbb{S}_{d-3}\) and arbitrary real parameters \(\alpha_{1},\alpha_{2}\).
There is a multiple counting of the same unit vector with this parametrization: the rotation matrix space \(O(d)\) must be factored by rotations \(O(d-3)\) of the unit vector \(\vec{w}\).
\[\hat{O}\in\frac{O(d)}{O(d-3)} \tag{63}\]
Also, the sign change of \(\alpha_{2}\) is equivalent to the reflection of the vector \(\vec{w}\), so we have to factor out such reflections and keep an arbitrary sign of \(\alpha_{2}\)
\[\vec{w}\in\frac{\mathbb{S}^{d-3}}{\mathbb{Z}_{2}}; \tag{64}\] \[\{\alpha_{1},\alpha_{2}\}\in\mathbb{R}^{2} \tag{65}\]
The complex constraint for \(\vec{F}_{k}\cdot\vec{q}_{k}\) can be used to fix these \(\alpha_{1},\alpha_{2}\) as a linear function of \(\beta\) given a complex vector
\[\vec{f}_{k}=\hat{O}_{k}^{T}\cdot\vec{F}_{k}; \tag{66}\]
as follows:
\[\{\alpha_{1},\alpha_{2}\}=\hat{M}^{-1}\cdot\{\mathbf{Re}\;(R)-\beta\mathbf{Im}\;(c),\beta\mathbf{Re}\;(c)+\mathbf{Im}\;(R)\}; \tag{67}\] \[R=\frac{1}{2}\bigg{(}\imath\gamma\pm\sqrt{4\vec{f}_{k}^{2}+\gamma(2\imath-\gamma)}\bigg{)} \tag{68}\]
where \(\vec{f}_{k}=\{a,\vec{b},c\}\) and
\[\hat{M}=\left(\begin{array}{cc}\mathbf{Re}\;(a)&\mathbf{Re}\;(\vec{b}\cdot \vec{w})\\ \mathbf{Im}\;(a)&\mathbf{Im}\;(\vec{b}\cdot\vec{w})\end{array}\right) \tag{69}\]
After that, \(\alpha_{1}^{2}+\alpha_{2}^{2}=1+\beta^{2}\) yields a quadratic equation for \(\beta\).
Note in passing that \(\vec{u}\) belongs to De Sitter space \(dS_{d-1}\). However, this is where an analogy with the ADS/CFT duality ends.
There are, in general, four solutions for \(\beta\): two signs for \(R\) in (68) and two more signs in a solution of the quadratic equation for \(\beta\) (62d).
We have to choose a particular real solution for \(\beta\). A universal option is to choose the step with the smallest Euclidean distance \((\mathbf{Re}\;\vec{q})^{2}+(\mathbf{Im}\;\vec{q})^{2}\). We used this choice in our initial simulations [32], but later we switched to another method, using the SDE we describe later in this work. The SDE guarantees the closure condition, unlike the naive random walk approach.
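To make the step construction concrete, here is a minimal numerical sketch (ours, not the author's _Mathematica_ code [32]) of one Markov step in \(d=3\). Instead of the closed form (67)-(68) it solves the two constraints (52) directly: for a random rotation \(\hat{O}\) the complex equation \(\vec{f}_{k}\cdot\vec{u}=R\) fixes \(\alpha_{1},\alpha_{2}\) linearly in \(\beta\), and the condition \(\alpha_{1}^{2}+\alpha_{2}^{2}=1+\beta^{2}\) becomes a quadratic in \(\beta\); if it has no real root, a new rotation is drawn.

```python
import numpy as np

rng = np.random.default_rng(1)

def random_rotation():
    # Haar-distributed orthogonal 3x3 matrix via QR decomposition.
    A = rng.standard_normal((3, 3))
    Q, R = np.linalg.qr(A)
    return Q * np.sign(np.diag(R))

def markov_step(F, gamma, max_tries=200):
    """One step F_{k+1} = F_k + q_k with q_k^2 = 1 and
    (2 F_k.q_k - i*gamma)^2 = 4 F_k^2 + gamma*(2i - gamma), eq. (52)."""
    target = 4 * F.dot(F) + gamma * (2j - gamma)
    for _ in range(max_tries):
        O = random_rotation()
        a, b, c = O.T @ F                                  # f_k = O^T . F_k
        R = 0.5 * (1j * gamma + rng.choice([-1, 1]) * np.sqrt(target))
        # a*a1 + b*a2 + i*c*beta = R  ->  M @ (a1, a2) = v0 + beta * v1
        M = np.array([[a.real, b.real], [a.imag, b.imag]])
        if abs(np.linalg.det(M)) < 1e-12:
            continue
        v0 = np.array([R.real, R.imag])
        v1 = np.array([c.imag, -c.real])
        p, s = np.linalg.solve(M, v0), np.linalg.solve(M, v1)
        # a1^2 + a2^2 = 1 + beta^2  ->  quadratic equation in beta
        roots = np.roots([s @ s - 1.0, 2.0 * (p @ s), p @ p - 1.0])
        real_roots = [r.real for r in roots if abs(r.imag) < 1e-9]
        if not real_roots:
            continue
        beta = min(real_roots, key=abs)                    # e.g. smallest |beta|
        a1, a2 = p + beta * s
        q = O @ np.array([a1, a2, 1j * beta])              # q_k = O . u
        return F + q, q
    raise RuntimeError("no admissible step found")

F0 = np.array([0.5, 0.1, 0.2], dtype=complex)
F1, q = markov_step(F0, gamma=0.3)
print(abs(q @ q - 1.0))                                    # ~ 0 : checks (52a)
print(abs((2 * F0 @ q - 0.3j) ** 2 - (4 * F0 @ F0 + 0.3 * (2j - 0.3))))  # (52b)
```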
### Partition function
We arrive at the invariant distribution for our fractal curve. At a fixed \(N\), the partition function (in terms of statistical mechanics)
\[\mathcal{Z}=\int d\Omega_{d}(N)=\int\prod_{k=0}^{N-1}\frac{(d\hat{O}_{k})(d \vec{w}_{k})}{2|O(d-3)|}\int d^{2d}\vec{F}_{0}\delta^{2d}(\vec{Q}); \tag{70a}\] \[\vec{Q}=\vec{F}_{N}-\vec{F}_{0}=\sum_{k=0}^{N-1}\vec{q}_{k};\] (70b) \[\vec{F}_{k+1}=\vec{F}_{k}+\vec{q}_{k};\forall k=0,\ldots N-1; \tag{70c}\]
where \(\vec{q}_{k}\) are complex vectors, parametrized by \(\hat{O}_{0},\ldots\hat{O}_{N-1},\vec{w}_{0},\ldots\vec{w}_{N-1}\) via recurrent equations (62),(67).
The complex vector's integration and delta function is understood as a product of its real and imaginary parts.
We conclude that the fixed manifold \(\mathbb{T}_{d}(N)\) of the decaying Turbulence is a subset of the tensor product of rotational and spherical spaces.
\[\mathbb{T}_{d}(N)\in\left(\frac{O(d)}{O(d-3)}\otimes\frac{\mathbb{S}^{d-3}}{\mathbb{Z}_{2}}\right)^{\otimes N} \tag{71}\]
We cannot resolve the global closure conditions. Still, we found a way to achieve the same goal by an SDE describing the evolution of our curve from the exact symmetric solution (72) we have found; this method preserves the closure condition at each infinitesimal step of the stochastic process.
### Symmetric fixed point
The above formal definition of the probability measure does not offer a practical simulation method for covering this manifold.
We attempted to simulate a random walk \(\vec{F}_{k}\Rightarrow\vec{F}_{k+1}\) step by step, taking random rotation matrices. Unfortunately, there was a rapidly diminishing probability of the return to the vicinity of the initial point \(\vec{F}_{N}=\vec{F}_{0}\) after \(N\) steps.
We could not numerically solve the resulting transcendental equation for the initial position \(\vec{F}_{0}\) at large \(N\), neither by analytical nor by Monte Carlo methods.
Instead, we have found an alternative algorithm for covering this manifold, preserving the closed curve.
First, we have found a symmetric family of solutions [32] of our recurrent equation (51) for arbitrary \(N\)
\[\vec{\Phi}_{k}=\frac{1}{2}\csc\left(\frac{\beta}{2}\right)\left\{\cos(\alpha_{ k}),\sin(\alpha_{k})\vec{w},i\cos\left(\frac{\beta}{2}\right)\right\}; \tag{72}\]
Here \(\vec{w}\in\mathbb{S}^{d-3}\) is a unit vector.
The angles \(\alpha_{k}\) must satisfy recurrent relation
\[\alpha_{k+1}=\alpha_{k}+\sigma_{k}\beta; \tag{73}\] \[\alpha_{N}=\alpha_{0}=0;\] (74) \[\sigma_{k}^{2}=1 \tag{75}\]
This sequence with arbitrary signs \(\sigma_{k}\) solves recurrent equation (51) independently of \(\gamma\).
For the curve to be closed, the angle \(\beta\) must be a rational multiple of \(2\pi\)
\[\beta=\frac{2\pi p}{q} \tag{76}\]
In this case, the closure condition
\[\beta\sum_{0}^{N-1}\sigma_{k}=2\pi np,\ n\neq 0. \tag{77}\]
will always have a solution for the discrete variables \(\sigma_{k}=\pm 1\). All that is needed is a relation between the net numbers \(N_{\pm}\) of positive and negative \(\sigma_{k}\)
\[N_{+}-N_{-}=nq,\ n\neq 0. \tag{78}\]
There are \(\begin{pmatrix}N\\ N_{-}\end{pmatrix}\) different states \(\sigma_{0}=\pm 1,\ldots\sigma_{N-1}=\pm 1\) satisfying the closing condition, provided \(N>2N_{-}+q\).
The variables \(\sigma_{k}\) can otherwise be random, like the spin variables in the Ising model. This distribution is less trivial than the Ising model because the angles \(\alpha_{k}\) are related to all the preceding \(\sigma\) variables, not just the local one \(\sigma_{k}\).
This solution describes a closed random walk on a circle. It is characterized by an integer \(N\) and a rational number \(\frac{p}{q}\) with \(q\leq N\).
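The symmetric family is easy to generate and test numerically. The following sketch (ours) draws signs \(\sigma_{k}\) satisfying (78) with \(n=1\), builds \(\vec{\Phi}_{k}\) from (72)-(73) in \(d=3\) (with \(\vec{w}=1\)), and verifies the step constraints (51) together with the closure \(\vec{\Phi}_{N}=\vec{\Phi}_{0}\).

```python
import numpy as np

# Sketch: the symmetric solution (72)-(73) in d = 3, with w = +1 and n = 1 in (78).
def symmetric_solution(N, p, q, seed=2):
    rng = np.random.default_rng(seed)
    beta = 2 * np.pi * p / q
    assert (N - q) % 2 == 0, "need N - q even so that N_+ - N_- = q is possible"
    N_minus = (N - q) // 2
    sigma = np.array([1.0] * (N - N_minus) + [-1.0] * N_minus)
    rng.shuffle(sigma)
    alpha = np.concatenate([[0.0], beta * np.cumsum(sigma)])   # alpha_0 .. alpha_N
    return 0.5 / np.sin(beta / 2) * np.stack(
        [np.cos(alpha), np.sin(alpha), 1j * np.cos(beta / 2) * np.ones_like(alpha)],
        axis=1)

gamma = 0.3
Phi = symmetric_solution(N=20, p=1, q=4)
for k in range(len(Phi) - 1):
    dP = Phi[k + 1] - Phi[k]
    assert abs(dP @ dP - 1.0) < 1e-10                                    # (51a)
    lhs = (Phi[k + 1] @ Phi[k + 1] - Phi[k] @ Phi[k] - 1j * gamma) ** 2 + gamma ** 2
    rhs = (Phi[k + 1] + Phi[k]) @ (Phi[k + 1] + Phi[k])
    assert abs(lhs - rhs) < 1e-10                                        # (51b)
assert np.allclose(Phi[-1], Phi[0])                                      # closure
print("symmetric solution verified for N = 20, beta = 2*pi/4")
```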
As we pointed out in Section 2, a reflected sequence \(\vec{\Phi}_{N-k}^{*}\) also represents a solution to the recurrent equations (51).
At first glance, this simple formula seems a valid solution for the loop equations for decaying Turbulence.
Unfortunately, it does not have any energy dissipation. The vorticity vector is finite
\[\vec{\omega}_{k}=i\vec{\Phi}_{k}\times\vec{\Phi}_{k+1}=\frac{\sigma_{k}}{2}\cot \biggl{(}\frac{\beta}{2}\biggr{)}\biggl{\{}\cos\biggl{(}\frac{\beta\sigma_{k}}{ 2}+\alpha_{k}\biggr{)},\sin\biggl{(}\frac{\beta\sigma_{k}}{2}+\alpha_{k} \biggr{)},i\biggr{\}} \tag{79}\]
However, the square of this complex 3-vector is identically zero.
\[\vec{\omega}_{k}^{2}\equiv 0 \tag{80}\]
This solution can serve as initial data for the Brownian motion over the turbulence manifold, but the energy dissipation appears only after averaging over this motion.
This square of the complex vector, in the general case, can be related to the square of the vertex \(\vec{F}_{k}\) by the recurrent equations (52):
\[\biggl{(}\vec{F}_{k}\times\vec{F}_{k+1}\biggr{)}^{2}=\frac{1}{2}\gamma\biggl{(} \gamma+i\biggl{(}-1\pm\sqrt{4\vec{F}_{k}\cdot\vec{F}_{k}-\gamma(\gamma-2i)} \biggr{)}\biggr{)} \tag{81}\]
Our symmetric solution corresponds to \(\vec{F}_{k}\cdot\vec{F}_{k}=\nicefrac{{1}}{{4}}\), and \(\pm=1\), where this expression vanishes identically at arbitrary \(\gamma\).
The random walk step \(\vec{q}_{k}=\vec{\Phi}_{k+1}-\vec{\Phi}_{k}\) is a real unit vector in this case
\[\vec{q}_{k} =\sigma_{k}\{-\sin\delta_{k},\vec{w}\cos\delta_{k},0\}; \tag{82}\] \[\delta_{k} =\alpha_{k}+\frac{\beta\sigma_{k}}{2} \tag{83}\]
The direction of this vector is not random, though; in addition to the random sign \(\sigma_{k}\) and random unit vector \(\vec{w}\) in \(d>3\) dimensions, its direction depends on the previous position \(\alpha_{k}\) on a circle.
So, this is a perfect example of a Markov chain, with the closure condition analytically solved by quantizing the angular step to a rational number \(\beta=\frac{2\pi p}{q}\).
This solution corresponds to the real value of velocity circulation on each of these two solutions; however, the reflection changes the circulation.
Thus, the arithmetic average of two Wilson loops with two reflected solutions is reflection-symmetric, but it is still a complex number.
Our covering algorithm will use a symmetric fixed point with a random choice of sign variables \(\sigma_{k}\) or its reflection as a starting element.
The imaginary parts of the steps \(\vec{q}_{k}\) are zero vectors at the start. Still, the evolution below will involve **complex** infinitesimal rotations \(\delta\vec{q}_{k}=\vec{\mu}_{k}\times\vec{q}_{k}\) so that the imaginary parts appear later in the evolution.
Due to the global \(O(d)\) symmetry, the rotated curve \(\{\hat{O}\cdot\vec{\Phi}_{0},\dots\hat{O}\cdot\vec{\Phi}_{N-1}\}\) with arbitrary orthogonal matrix \(\hat{O}\) is also a solution. In our simulations, we integrate the Wilson loop by this global rotation after finding a particular numerical solution for the momentum loop \(\vec{F}(\theta)\). The corresponding \(O(3)\) group Fourier integral is computed in Appendix A.
### Infinitesimal complex rotations in 3D
Let us assume we already know a particular solution \(\vec{F}_{0},\vec{q}_{0},\dots\vec{q}_{N-1}\) of the recurrent equations (52) in \(d=3\) and perturb it by an infinitesimal transformation of the complex vectors \(\vec{q}_{k}\), preserving their square.
We also shift the initial point \(\vec{F}_{0}\) to keep the loop closed after infinitesimal transformations of all the steps \(\vec{q}_{k}\).
\[\delta\vec{q}_{k}=\vec{\mu}_{k}\times\vec{q}_{k}; \tag{84}\] \[\delta\vec{F}_{0}=\vec{\lambda}; \tag{85}\]
Here \(\vec{\mu}_{k},\vec{\lambda}\) are infinitesimal **complex** 3D vectors.
The real part \(\mathbf{Re}\;\vec{\mu}_{k}\in\mathbb{R}^{3}\) comes from the infinitesimal group transformation \(\delta_{L}\) of rotation matrices in our canonical form (62)
\[\delta_{L}\hat{O}_{k}=\hat{\Omega}_{k}\cdot\hat{O}_{k}; \tag{86}\] \[\hat{\Omega}_{k}^{\alpha\beta}=e^{\alpha\beta\gamma}\Omega_{k}^{\gamma};\] (87) \[\vec{\Omega}_{k}=-\mathbf{Re}\;\vec{\mu}_{k};\] (88) \[\delta_{L}\vec{q}_{k}=\mathbf{Re}\;\vec{\mu}_{k}\times\vec{q}_{k}; \tag{89}\]
The imaginary part \(\mathbf{Im}\;\vec{\mu}_{k}\) leads to the infinitesimal transformation of parameters \(\alpha_{1},\alpha_{2},\beta\) in two-dimensional de Sitter space \(dS_{2}\); therefore, there are only two independent components of \(\mathbf{Im}\;\vec{\mu}_{k}\).
We do not need an explicit split of the parameters of \(\vec{\mu}_{k}\) into these two transformations; it is sufficient to know that cross product \(\vec{\mu}_{k}\times\vec{q}_{k}\) with any complex vector \(\vec{\mu}_{k}\) is orthogonal to \(\vec{q}_{k}\), as we need it in our random walk with \(\vec{q}_{k}^{2}=1\).
Below, we will parameterize \(\mathbf{Im}\;\vec{\mu}_{k}\) by two scalar parameters.
There are two contributions to the variation of each position \(\vec{F}_{k}\). One variation comes from the rotation of the step from the previous position, and another comes from the variation of the previous position.
\[\delta\vec{F}_{k}=\delta\vec{F}_{k-1}+\vec{\mu}_{k-1}\times\vec{q}_{k-1}= \lambda+\sum_{0}^{k-1}\vec{\mu}_{l}\times\vec{q}_{l}; \tag{90}\]
By variation of the second of the constraints in (52), we find the following set of relations between infinitesimal \(\vec{\mu}_{k},\vec{\lambda}\)
\[\left(G_{k}\vec{q}_{k}\times\vec{F}_{k}\right)\cdot\vec{\mu}_{k}= -\vec{V}_{k}\cdot\left(\vec{\lambda}+\sum_{0}^{k-1}\vec{\mu}_{l}\times\vec{q} _{l}\right); \tag{91a}\] \[G_{k}=(2\vec{F}_{k}\cdot\vec{q}_{k}-\imath\gamma);\] (91b) \[\vec{V}_{k}=G_{k}\vec{q}_{k}-2\vec{F}_{k}; \tag{91c}\]
Some constraints are left in the solution for the vectors \(\vec{q}_{k}\) even after the complex rotations. Three scalar constraints on the imaginary parts of the complex rotation vectors \(\vec{\mu}_{k}\) remain in three dimensions.
These constraints are needed to provide the closure condition. There are only three scalar constraints among \(N\) real vectors, which leads to a nontrivial \(3N-3\) dimensional quotient space.
### The closure equation
We treat the relation (91a) as a recurrent system of equations for \(\mathbf{Im}\;\vec{\mu}_{k}\), assuming known values of \(\mathbf{Re}\;\vec{\mu}_{l},\vec{\lambda}\).
After solving that system, the complex vector \(\vec{\lambda}\) is supposed to be determined from the closure equation
\[\sum_{l=0}^{N-1}\vec{\mu}_{l}\times\vec{q}_{l}=0 \tag{92}\]
assuming all \(\vec{\mu}_{l}\) expressed as linear combinations of \(\mathbf{Re}\;\vec{\lambda},\mathbf{Im}\;\vec{\lambda},\mathbf{Re}\;\vec{\mu}_{l}\).
As we found in [32] in three dimensions, this system of equations for \(\vec{\lambda}\) is degenerate: three parameters in \(\vec{\lambda}\) are left undetermined.
The solution for \(\vec{\lambda}\) exists only if \(N\) vectors \(\{\mathbf{Re}\;\vec{\mu}_{0},\ldots\mathbf{Re}\;\vec{\mu}_{N-1}\}\) obey three scalar constraints.
In other words, the complex vector equation (92) reduces to three constraints for \(\vec{\lambda}\) and another three constraints for \(\mathbf{Re}\ \vec{\mu}_{k}\). The complex vector \(\vec{\lambda}\) is left with three free components, and the vectors \(\{\mathbf{Re}\ \vec{\mu}_{0},\dots\mathbf{Re}\ \vec{\mu}_{N-1}\}\) are left with \(3N-3\) free components out of \(3N\).
The solution of these equations, which we find in Appendix B, has the form
\[\vec{\lambda} =\sum_{l=0}^{N-1}\hat{\Lambda}_{l}\cdot\mathbf{Re}\ \vec{\mu}_{l}; \tag{93}\] \[\mathbf{Im}\ \vec{\mu}_{k} =\sum_{l=0}^{N-1}\hat{S}_{kl}\cdot\mathbf{Re}\ \vec{\mu}_{l} \tag{94}\]
with real \(3\times 3\) matrices \(\hat{S}_{kl}\), and complex \(3\times 3\) matrices \(\hat{\Lambda}_{l}\); these matrices depend on the current values of all the vectors \(\vec{F}_{k}\).
### Linear constraints and zero modes
In addition, we have found three linear constraints on \(\mathbf{Re}\ \vec{\mu}_{l}\), related to three complex null vectors of a block matrix \(\hat{H}\) involved in the equation for \(\vec{\lambda}\).
The vector \(\vec{\lambda}\) is defined modulo superposition of elements of these three zero modes
\[\vec{\lambda}\Rightarrow\vec{\lambda}+\sum_{i=1}^{3}c_{i}\vec{\psi}_{i} \tag{95}\]
Due to the closure of the original loop \(C\in\mathbb{R}_{d}\), the translation of \(\vec{\lambda}\) by arbitrary complex vector does not change the circulation in (54). This translation of \(\vec{\lambda}\) leads to the global translation of our momentum curve \(\vec{P}(\theta)\), preserving the circulation over the closed loop in space.
We resolved this ambiguity of \(\vec{\lambda}\) by choosing the pseudo-inverse of the degenerate matrix \(\hat{H}\) when computing the coefficients \(\hat{\Lambda}_{l}\).
The three constraints on infinitesimal rotations have a form (A182). These constraints define a subspace \(\mathcal{S}\) of the whole space \(\mathbb{R}_{3}^{\otimes N}\) of our rotation vectors \(\mathbf{Re}\ \vec{\mu}_{k}\) (dual to elements of Lie algebra on each \(SO(3)\))
\[\mathcal{S}:\sum_{k}\vec{\Theta}_{ik}\cdot\mathbf{Re}\ \vec{\mu}_{k}=0;\ i=1,2,3; \tag{96}\] \[\vec{\Theta}_{ik}=\mathbf{Re}\ \big{(}\vec{\psi}_{i}^{*}\hat{W}_{k} \big{)}; \tag{97}\]
The rotation vectors \(\mathbf{Re}\ \vec{\mu}_{k}\) vary in the quotient space
\[\mathcal{F}=\Big{(}\mathbb{R}_{3}^{\otimes N}\Big{/}\mathcal{S}\Big{)} \tag{98}\]
The null-vectors \(\vec{\psi}_{i}\) and coefficients \(\hat{W}_{k}\), depending on the current positions \(\vec{F}_{k}\) are computed using recurrent equations in [32].
We get numerical results on a laptop for arbitrary \(N<100\). Larger values of \(N\) would require a supercomputer.
### Brownian motion on turbulent manifold
Now, we are ready to write down the SDE for the evolution of our complex curve using the stochastic process \(d\vec{\xi}_{l}=\mathbf{Re}\;\vec{\mu}_{l}\):
\[d\vec{q}_{k}=\sum_{l=0}^{N-1}\hat{T}_{kl}\cdot d\vec{\xi}_{l}; \tag{99a}\] \[\hat{T}_{kl}=\delta_{kl}\hat{q}_{k}+i\hat{q}_{k}\cdot\hat{S}_{kl};\] (99b) \[d\vec{F}_{0}=\sum_{l=0}^{N-1}\hat{\Lambda}_{l}\cdot d\vec{\xi}_{l};\] (99c) \[\vec{F}_{k}=\vec{F}_{0}+\sum_{l=0}^{k-1}\vec{q}_{l};\] (99d) \[\sum_{k=0}^{N-1}\vec{\Theta}_{ik}\cdot d\vec{\xi}_{k}=0;\ i=1,2,3; \tag{99e}\]
These constrained stochastic differential equations describe the evolution of the point on our fixed manifold \(\mathbb{T}_{3}(N)\) of closed complex curves subject to the loop equations (51), starting with one of the symmetric fixed points (72)
\[\left.\vec{F}_{k}\right|_{\tau=0}=\vec{\Phi}_{k}\;\text{or}\;\vec{\Phi}_{N-k}^ {*} \tag{100}\]
The constrained SDE was studied in the mathematical literature [33].
We use a standard method of the projection of the Brownian motion to a quotient space. Let us introduce new stochastic real vector variables \(d\hat{\eta}=\{d\vec{\eta}_{0},\ldots d\vec{\eta}_{N-1}\}\in\mathbb{R}_{3}^{\otimes N}\) and project out the constraints as follows (in matrix notations)
\[d\hat{\xi}=d\hat{\eta}-\mathcal{P}\cdot d\hat{\eta}; \tag{101}\] \[\mathcal{P}=\hat{\Theta}^{\dagger}\cdot\left(\hat{\Theta}\cdot \hat{\Theta}^{\dagger}\right)^{-1}\cdot\hat{\Theta}; \tag{102}\]
The variables \(d\hat{\eta}\) are assumed to be delta correlated (in proper units of stochastic time)
\[\left\langle\frac{d\hat{\eta}}{d\tau}\otimes\frac{d\hat{\eta}}{d\tau^{\prime} }\right\rangle=\hat{I}\delta(\tau-\tau^{\prime}) \tag{103}\]
Here \(\hat{I}\) is a unit matrix in \(3N\) dimensions.
It is straightforward to check that \(d\vec{\xi}\) satisfies the constraints for arbitrary \(d\hat{\eta}\)
\[\hat{\Theta}\cdot d\hat{\xi}=0 \tag{104}\]
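This projection step is easy to make concrete. Below is a minimal NumPy sketch of (101)-(102) and of the check (104); the matrix `Theta` here is a random stand-in for the \(3\times 3N\) constraint matrix, which in the actual algorithm is built from the current positions \(\vec{F}_{k}\) via the recurrent relations.

```python
import numpy as np

# Sketch of the constraint projection (101)-(102): dxi = (1 - P) deta,
# P = Theta^T (Theta Theta^T)^-1 Theta.  Theta is a random stand-in here.
rng = np.random.default_rng(1)
N = 20
Theta = rng.normal(size=(3, 3 * N))          # 3 scalar constraints
deta = rng.normal(size=3 * N)                # Wiener increment, unit variance

P = Theta.T @ np.linalg.solve(Theta @ Theta.T, Theta)
dxi = deta - P @ deta                        # projected increment, eq. (101)

print(np.linalg.norm(Theta @ dxi))           # ~ 1e-13: constraints (104) hold
```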
The variables \(d\hat{\xi}\) do not change when variables \(d\hat{\eta}\) are shifted by superposition of transposed constraints
\[\delta d\hat{\eta}=\hat{\Theta}^{\dagger}\cdot d\vec{w}; \tag{105}\] \[\delta d\hat{\xi}=0 \tag{106}\]
So, our stochastic process \(d\hat{\eta}\) has some redundant (gauge) degrees of freedom \(d\vec{w}\).
The variables \(d\hat{\xi}\) evolve in the quotient space \(\mathcal{F}\), covering it with an \(O(3)\) invariant measure. This invariance is easy to check by noticing that all the matrices \(\hat{\Theta},\hat{\Lambda},\hat{S},\hat{T}\) in our equations are made of rotation-covariant parameters in the linearized recurrent equations. These parameters are direct products of vectors times some dot products of other vectors.
In mathematical terms, \(d\hat{\eta}\) is a Wiener process in \(\mathbb{R}_{3}^{\otimes N}\) with a unit variance matrix, and \(d\hat{\xi}\) is a Brownian motion in the quotient space \(\mathcal{F}\). This quotient space evolves with stochastic time, as the constraint matrix \(\hat{\Theta}\) depends on current values of all vectors \(\vec{F}\).
The projection can be used to redefine the matrices
\[\mathcal{T} =\hat{T}-\mathcal{P}\cdot\hat{T}; \tag{107}\] \[\mathcal{L} =\hat{\Lambda}-\mathcal{P}\cdot\hat{\Lambda}; \tag{108}\]
after which our SDE takes a usual form
\[d\vec{q}_{k} =\sum_{l=0}^{N-1}\mathcal{T}_{kl}\cdot d\vec{\eta}_{l}; \tag{109}\] \[d\vec{F}_{0} =\sum_{l=0}^{N-1}\mathcal{L}_{l}\cdot d\vec{\eta}_{l};\] (110) \[\vec{F}_{k} =\vec{F}_{0}+\sum_{l=0}^{k-1}\vec{q}_{l}; \tag{111}\]
We propose this stochastic process in the limit \(N\to\infty\) as a mathematical definition of the fixed manifold of decaying Turbulence. The proof of this conjecture and the extension to higher dimensions are left for a detailed mathematical study, which is beyond the scope of this work.

We coded these SDE in [32] using _Mathematica_. This code may be useful for theoretical development, but the optimized computations should be translated into Python and C++ and run on a supercomputer or a TensorFlow cluster. Before even attempting such a computation, the random walk algorithm must be optimized. Its computational complexity grows as \(N^{4}\) per time step. These issues will be addressed in a subsequent publication, where we modify the random walk to reduce the \(N^{4}\) complexity to a linear one and optimize it for massively parallel execution on a supercomputer cluster.

Once we fix the initial value at one of the two mirror fixed points \(\vec{\Phi}_{k},\vec{\Phi}_{N-k}^{*}\), the evolution is unambiguous, unlike the global description of the manifold in Section 3, where we had to choose between four solutions of two quadratic equations for the point \(\{\alpha_{1},\alpha_{2},\beta\}\) in de Sitter space \(dS_{2}\). We are still left with a choice of one of the two mirror solutions or, in the general case, the coefficients of their linear superposition in the Wilson loop. Such a linear superposition will still solve the loop equation (7a), as this equation is **linear** in loop space. This superposition is found in the next Section.
### Mirror symmetry and inequality for the Wilson loop
There is an obvious problem with the solution we have found. The loop equation for \(\vec{P}(\theta)\) is complex, and so is the solution, particularly the vorticity in (19). Since the equation for \(\vec{P}\) is nonlinear, we cannot take a real part of \(\vec{P}\). The negative imaginary part of the circulation in momentum space may violate the inequality \(|\Psi[\gamma,C]|\leq 1\). Here is how we suggest solving this problem. In the previous Sections, we described two mirror solutions, originating in (72) and evolving by the SDE (99a). For any particular loop, we have to choose the solution with the positive imaginary part of the circulation
\[\Psi[\gamma,C]=\left\langle\exp\left(\frac{\imath\gamma\Gamma}{ \nu}\right)\Theta(\mathbf{Im}\ \Gamma)\right\rangle+\{\vec{P}(\theta)\Leftrightarrow\vec{P}^{*}(2\pi-\theta)\}; \tag{112a}\] \[\Gamma=\oint d\vec{C}(\theta)\cdot\vec{P}(\theta); \tag{112b}\]
The averaging \(\langle\dots\rangle\) corresponds to averaging over the stochastic process or, equivalently, over the stochastic time \(\tau\). On top of that, there is averaging over global rotation \(\vec{F}_{k}\Rightarrow\hat{O}\cdot\vec{F}_{k}\) over the group measure for \(\hat{O}\in SO(d)\).
At any moment of stochastic time, the inequality restricts the loop \(C\), but not the momentum loop \(\vec{P}\): for some loops \(C\), the circulation \(\Gamma\) has a positive imaginary part; for other loops, the reflected circulation \(\hat{\Gamma}\) does.
This choice is like selecting a decaying wave function for the bound state in the Schrodinger equation for a quantum potential problem.
The theta functions in this solution represent certain boundary conditions for the loop functional in the areas (if they exist) where \(\mathbf{Im}\ \Gamma=0\) or \(\mathbf{Im}\ \hat{\Gamma}=0\).
Let us study this constraint in more detail. The general solution of the loop equation involves the above-mentioned averaging over the global rotation of the whole momentum loop \(\vec{P}(\theta)\).
We can write the solution as follows (in three dimensions)
\[\Psi[\gamma,C]=\left\langle\int_{O(3)}\frac{d\Omega}{|O(3)|}\exp\Big{(}\frac{\imath\gamma}{\nu}\Gamma_{\Omega}\Big{)}\Theta(\mathbf{Im}\ \Gamma_{\Omega})+\{\vec{P}(\theta)\Leftrightarrow\vec{P}^{*}(2\pi-\theta)\}\right\rangle; \tag{113}\] \[\Gamma_{\Omega}=\oint d\theta\vec{C}^{\prime}(\theta)\cdot\hat{\Omega}\cdot\vec{P}(\theta) \tag{114}\]
We use the quaternionic representation for the group measure in Appendix A to elaborate this group integral, including the extra requirement of a positive imaginary part of the circulation.
The reflected solution is treated the same way, with
\[\bar{\Gamma}_{\Omega}=\oint d\theta\vec{C}^{\prime}(\theta)\cdot\hat{\Omega} \cdot\vec{P}^{*}(2\pi-\theta) \tag{115}\]
After the addition of the reflected solution, the Wilson loop acquires the reflection symmetry
\[\Psi[\gamma,\vec{C}(2\pi-\theta)]=\Psi^{*}[\gamma,\vec{C}(\theta)] \tag{116}\]
a symmetry that has to be obeyed by the Wilson loop for any real velocity field.
Each of the two complementary terms in \(\Psi[\gamma,C]\) is bounded by \(\nicefrac{{1}}{{2}}\), which provides the desired inequality \(|\Psi[\gamma,C]|<1\) after the addition of the reflection.
Verifying the normalization condition \(\Psi[0]=1\) for the loop reduced to a point \(C=0\) is straightforward.
### Vorticity distribution and energy dissipation
The simplest quantity to compute in our theory is the local vorticity distribution.
As we shall see, it determines the energy dissipation rate.
The local vorticity for our decaying solution of the loop equation
\[\vec{\omega}=\frac{-i\vec{F}(\theta)\times\Delta\vec{F}(\vec{\theta})}{2(t+t_ {0})}; \tag{117}\]
Here \(\theta\) is an arbitrary point at the loop, which makes this expression a random variable.
Note that viscosity is canceled here, as it should be by dimensional counting (vorticity has the dimension of \(1/t\)).
In our random walk representation, the complex vorticity operator is
\[\vec{\omega}_{k} =\frac{-i\vec{G}_{k}}{2(t+t_{0})}; \tag{118}\] \[\vec{G}_{k} =\vec{F}_{k}\times\vec{F}_{k+1}=\vec{F}_{k}\times\vec{q}_{k}; \tag{119}\]
The time derivative of energy density in our theory is
\[E^{\prime}(t)=-\frac{\nu\kappa}{4{(t_{0}+t)}^{2}}; \tag{120}\] \[\kappa=-\frac{1}{N}\sum\frac{\vec{G}_{k}^{2}+(\vec{G}_{N-k}^{*})^{2 }}{2}=-\frac{1}{N}\sum\mbox{\bf Re }\vec{G}_{k}^{2}; \tag{121}\]
Solving this equation with the boundary value \(E(t=\infty)=0\), we relate \(t_{0}\) to the mean initial energy
\[\langle E(t)\rangle=\frac{\nu\langle\kappa\rangle}{4(t_{0}+t)}; \tag{122}\] \[t_{0}=\frac{\nu\langle\kappa\rangle}{4E_{0}} \tag{123}\]
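Explicitly, integrating (120) from \(t\) to \(\infty\) and then imposing \(E(0)=E_{0}\) gives
\[E(t)=\int_{t}^{\infty}\frac{\nu\kappa}{4(t_{0}+s)^{2}}\,ds=\frac{\nu\kappa}{4(t_{0}+t)},\qquad E(0)=E_{0}\;\Rightarrow\;t_{0}=\frac{\nu\kappa}{4E_{0}},\]
which reproduces (122)-(123) after averaging \(\kappa\) over the random walk.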
The probability distribution of \(\kappa\) and its mean value \(\langle\kappa\rangle\) can be computed using our random walk. For the anomalous dissipation, we need the mean enstrophy to diverge [2; 18] so that viscosity is compensated in the extreme turbulent limit.
\[\langle\kappa\rangle=+\infty \tag{124}\]
As we shall see later, in Section 4, this happens in our numerical simulations.
The microscopic picture of this infinite enstrophy differs from the singular vortex line.
In the Euler theory, divergence came from the singularity of the classical field. However, in our dual theory, it comes from the large fluctuation of the fractal curve in momentum space.
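To make the enstrophy estimate concrete, here is a minimal sketch of evaluating \(\kappa\) of (121) from a sampled polygon \(\vec{F}_{0}\dots\vec{F}_{N-1}\); the array `F` below is a random complex stand-in, not an actual solution of the loop equations, and the code is our own NumPy illustration.

```python
import numpy as np

# Sketch: estimate the enstrophy density kappa of eq. (121) from a sampled
# closed polygon of complex 3-vectors F_0 ... F_{N-1} (stand-in data here).
rng = np.random.default_rng(2)
N = 100
F = rng.normal(size=(N, 3)) + 1j * rng.normal(size=(N, 3))

G = np.cross(F, np.roll(F, -1, axis=0))       # G_k = F_k x F_{k+1}, eq. (119)
kappa = -np.mean((G * G).sum(axis=1)).real    # kappa of eq. (121)
print(kappa)
```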
### The Group average of the Wilson loop and manifest inequality
We can now write down our result for the Wilson loop in decaying Turbulence as a functional of the contour \(C\). We limit ourselves to the three-dimensional case:
\[\Psi[\gamma,C]_{t\to\infty}=\left\langle\exp\left(\frac{-t\oint d\theta\vec{C}(\theta)\cdot\vec{F}^{\prime}(\theta)}{\sqrt{2\nu(t+t_{0})}}\right)\Theta(\mathbf{Im}\ \Gamma)\right\rangle_{F}+\mbox{\bf reflected}; \tag{125}\]
The finite-step approximation considered above reads
\[\left\langle\exp\left(-\frac{t\oint d\theta\vec{C}(\theta)\cdot\vec{F}^{\prime}(\theta)}{\sqrt{2\nu(t+t_{0})}}\right)\right\rangle=\] \[\lim_{N\to\infty}\left\langle\exp\left(-\frac{t}{2}\frac{\sum_{k=0}^{N-1}\left(\vec{C}_{k+1}+\vec{C}_{k}\right)\cdot\vec{q}_{k}}{2\sqrt{2\nu(t+t_{0})}}\right)\right\rangle; \tag{126}\] \[\vec{C}_{k}=\vec{C}\left(\frac{2\pi k}{N}\right); \tag{127}\]
For the simplest circular loop in an \(xy\) plane, we have
\[\vec{C}(\theta)=r\{\cos\theta,\sin\theta,0\}; \tag{128}\] \[\frac{\vec{C}_{k}+\vec{C}_{k+1}}{2}=r\cos\left(\frac{\pi}{N}\right)\left\{\cos\left(\frac{\pi(2k+1)}{N}\right),\sin\left(\frac{\pi(2k+1)}{N}\right),0\right\} \tag{129}\]
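A minimal numerical sketch of the discretized loop sum in (126) for this circular contour is given below; the steps \(\vec{q}_{k}\) are random complex stand-ins for sampled momentum-loop steps, and the overall time-dependent prefactor in (126) is omitted. This is our own NumPy illustration.

```python
import numpy as np

# Sketch of the finite-N circulation sum in (126) for the circular loop (128),
# with q_k a complex random stand-in for the momentum-loop steps.
rng = np.random.default_rng(3)
N, r = 64, 1.0
k = np.arange(N)
Cmid = r * np.cos(np.pi / N) * np.stack(
    [np.cos(np.pi * (2 * k + 1) / N),
     np.sin(np.pi * (2 * k + 1) / N),
     np.zeros(N)], axis=1)                     # (C_k + C_{k+1}) / 2, eq. (129)
q = rng.normal(size=(N, 3)) + 1j * rng.normal(size=(N, 3))

Gamma_N = np.sum(Cmid * q)                     # sum_k (C_k + C_{k+1})/2 . q_k
print(Gamma_N)
```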
We observe that even at the large time \(t\gg t_{0}\) when the asymptotic fractal curve is already in place, there is a region of parameters
\[r\sim\sqrt{\nu t} \tag{130}\]
where the Wilson loop is a nontrivial universal function of a single variable. We compute these group integrals in Appendix A.
\[\Psi[\gamma,C]\to W(\hat{R})+\textbf{reflected}; \tag{131}\] \[R_{\alpha\beta}=\frac{r}{4\sqrt{2\nu t}}\Big{(}T_{ij}^{\alpha\beta}+T_{ij}^{\beta\alpha}\Big{)}X_{ij};\] (132) \[T_{ij}^{\alpha\beta}=\mathbf{tr}\,\sigma_{i}\sigma_{j}\tau_{\alpha}\tau_{\beta}^{\dagger};\] (133) \[\tau_{\alpha}=\{1,i\vec{\sigma}\};\;\alpha=(0,1,2,3);\] (134) \[\hat{X}=\cos\Big{(}\frac{\pi}{N}\Big{)}\sum_{k=0}^{N-1}\vec{q}_{k}\otimes\bigg{\{}\cos\bigg{(}\frac{\pi(2k+1)}{N}\bigg{)},\sin\bigg{(}\frac{\pi(2k+1)}{N}\bigg{)},0\bigg{\}}; \tag{135}\]
The function \(W(\hat{R})\) only depends on invariants. For our complex symmetric matrix, these invariants can be chosen as four eigenvalues \(\tau_{i}\) of its imaginary part, plus ten independent components \(r_{ij}=r_{ji}\) of the real part in the basis of the imaginary part
\[\textbf{Im}\ \hat{R}\cdot\vec{n}_{i}=\tau_{i}\vec{n}_{i};\ i=1 \ldots 4; \tag{136}\] \[r_{ij}=\vec{n}_{i}\cdot\textbf{Re}\ \hat{R}\cdot\vec{n}_{j} \tag{137}\]
The reflected term involves the complex conjugate fractal curve \(\vec{q}_{N-k}^{*}\). However, this reflected term is **not** a complex conjugate of the first one.
Thus, the Wilson loop is a complex function despite its reflection invariance. In other words, the dissipative effects are present in a big way in our solution.
We can compute our prediction for this function by numerically simulating the SDE for our vectors \(\vec{F}_{k}\) and wait for the results with physical or numerical experiments in conventional three-dimensional decaying Turbulence.
### Correlation functions
The simplest observable quantities we can extract from the loop functional are the vorticity correlation functions [2], corresponding to the loop \(C\) backtracking between two points in space \(\vec{r}_{1}=0,\vec{r}_{2}=\vec{r}\), see Fig.1. The vorticity operators are inserted at these two points.
Figure 1: Backtracking wires corresponding to vorticity correlation function.
The correlation function reduces to a random walk with a complex weight
\[\left\langle\vec{\omega}(\vec{0})\otimes\vec{\omega}(\vec{r})\right\rangle=\frac{1}{4(t+t_{0})^{2}}\] \[\sum_{n,m}\left\langle\frac{\vec{F}_{m}\times\vec{F}_{m+1}\otimes\vec{F}_{n}\times\vec{F}_{n+1}}{N^{2}}\exp\left(\frac{t\vec{r}\cdot\left(\vec{S}_{n,m}-\vec{S}_{m,n}\right)}{2\sqrt{\nu(t+t_{0})}}\right)\right\rangle+\text{\bf reflected}; \tag{138}\] \[\vec{S}_{m,n}=\frac{\sum_{m}^{n}\vec{F}_{k}}{n-m\pmod{N}}; \tag{139}\]
The averaging \(\left\langle\dots\right\rangle\) in these formulas involves group integration \(\int_{O(3)}d\Omega\) with \(\vec{F}_{k}\Rightarrow\vec{\Omega}\cdot\vec{F}_{k}\).
The positivity restrictions are inserted here as a theta function of the positive imaginary part of the circulation, in our case,
\[\vec{r}\cdot\left(\mathbf{Im}\ \vec{S}_{n,m}-\mathbf{Im}\ \vec{S}_{m,n}\right)>0 \tag{140}\]
With these restrictions, the absolute value of the Wilson loop is bounded by \(1\) from above.
We present these computations in Appendix A.
Presumably, the vorticity vectors \(\vec{G}_{m}=\vec{F}_{m}\times\vec{F}_{m+1}\) as well as the vectors \(\vec{S}_{m,n}\) are distributed by some power laws in our random walk on a fixed manifold; this would lead to scaling laws with some fractal dimensions.
The numerical simulation of this correlation function would require significant computer resources.
Still, these resources are much more modest than those for full \(d\) dimensional simulations of the Navier-Stokes equation.
In our theory, the dimension of space enters as the number of components of the one-dimensional fluctuating field \(\vec{F}(\theta)\) rather than the number of variables \(\vec{r}\in\mathbb{R}_{d}\) in the fluctuating velocity field \(\vec{v}(\vec{r})\).
Also, note that our quantum problem of the complex random walk naturally fits quantum computer architecture. Thus, in the future, when large quantum computers become available to researchers, we can expect a real breakthrough in numerical simulations of the loop equation.
## 4 Open loop computations
We wrote a _Mathematica_ program [34] generating our random walk, starting with a random complex vector \(\vec{F}_{0}\) and using a random orthogonal \(SO(3)\) matrix \(O_{k}\) at every step. If there is more than one real solution for \(\beta\), we have chosen the shortest step in the Euclidean metric \(\left|\vec{q}\right|^{2}=1+2\beta^{2}\), i.e., the one with minimal \(\left|\beta\right|\).
We have chosen the simplest circular coordinate loop \(C\) in (128) and imposed the inequality \(\mathbf{Im}\ \Gamma>0\) on the last step.
For 1000 steps, it takes a few seconds on a laptop to compute the whole path. We generate a parallel table of 100000 paths, each with 1000 steps, with various random initial vectors \(\vec{F}_{0}\) with a random set of rotation matrices for each step.
The path's closure requires a numerical solution of the SDE (99a), which we plan to implement later on a supercomputer. This open path simulation only covers the big space of the direct product of rotation matrices at every step; the true turbulent fixed point corresponds to the projection of this space onto the closure condition. We cannot do it at the global level, only at the level of the SDE described in the previous Section.
Thus, this open-path simulation cannot be used for predictions of fractal dimensions in the scaling laws; this has to wait until the SDE simulation is performed at the supercomputer.
With these comments in mind, let us analyze the open curves' fractal properties, discarding the closure conditions.
The simplest quantity to compute is a fractal dimension \(d_{f}\) of this random walk, defined as
\[\frac{1}{d_{f}}=\lim_{N\rightarrow\infty}\frac{d\log\Bigl{|}\vec{F}_{N}-\vec{F}_{0 }\Bigr{|}}{d\log N} \tag{141}\]
The ordinary Brownian motion (linear random walk) has \(d_{f}=2\), but our random walk is very different, mainly because the Euclidean distance of an elementary step \(\Bigl{|}\vec{F}_{k+1}-\vec{F}_{k}\Bigr{|}\) in De Sitter space is unlimited from above (though it is limited by \(1\) from below).
Here is the plot of \(\log\Bigl{|}\vec{F}_{N}-\vec{F}_{0}\Bigr{|}\) vs \(\log N\) (Fig. 2).
The statistical data for the fit parameters:
\[\begin{array}{l|llll}&\text{Estimate}&\text{Standard Error}&\text{t-Statistic}&\text{P-Value}\\ \hline 1&-2.33876&0.0241623&-96.7941&4.3388931215906555\times 10^{-42}\\ \bar{\xi}&0.976443&0.00457433&213.461&2.1004008453460444\times 10^{-53}\end{array} \tag{142}\]
This data is compatible with \(d_{f}=1.02412\pm 0.005\).
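For reference, here is a minimal sketch of how such a log-log fit of (141) could be performed; the walks below are ordinary Gaussian stand-ins (so the fit should return \(d_{f}\approx 2\)), not the actual momentum-loop random walk, and the code is our own NumPy illustration.

```python
import numpy as np

# Sketch of the log-log fit behind (141): estimate d_f from endpoint distances
# of an ensemble of random walks (Gaussian stand-ins in 6D: Re F, Im F).
rng = np.random.default_rng(4)
Ns = np.unique(np.logspace(1, 3, 20).astype(int))
dist = []
for N in Ns:
    steps = rng.normal(size=(200, N, 6))          # 200 walks of N steps
    ends = steps.sum(axis=1)
    dist.append(np.mean(np.linalg.norm(ends, axis=1)))

b, a = np.polyfit(np.log(Ns), np.log(dist), 1)    # log|F_N - F_0| ~ a + b log N
print("d_f estimate:", 1 / b)                     # ~ 2 for Brownian motion
```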
The distribution of the Euclidean length of each step \(\Bigl{|}\vec{F}_{k+1}-\vec{F}_{k}\Bigr{|}\) is shown in Fig. 3.
The statistical table for the parameters of this fit
\[\begin{array}{l|llll}&\text{Estimate}&\text{Standard Error}&\text{t- Statistic}&\text{P-Value}\\ \hline 1&13.0661&0.00203512&6420.3&0.\\ \log(\text{step})&-2.00076&0.000737843&-2711.64&0.\end{array} \tag{143}\]
The mean is finite, \(\langle step\rangle=1.07056\), but the variance of the step is divergent.
Such a slow decay of the step distribution undermines the concept of a finite fractal dimension as defined in (141). The linear fit is inadequate for such large statistics.
With large statistics, one can reach a perfect fit by adding the next correction to the linear log-log law
\[\log\Bigl{|}\vec{F}_{N}-\vec{F}_{0}\Bigr{|}\approx a+b\log N+c\log\log N \tag{144}\]
Figure 2: Logarithm of Euclidean distance for our random walk in six-dimensional space \(\mathbf{Re}\ \vec{F},\mathbf{Im}\ \vec{F}\) as a function of a logarithm of the number of steps. The linear fit corresponds to fractal dimension \(d_{f}=1.02412\pm 0.005\)
The fit over a larger interval of \(N\) becomes perfect, with a very different coefficient \(b\) in front of \(\log N\):
\[\begin{array}{l|llll}&\text{Estimate}&\text{Standard Error}&\text{t-Statistic}&\text{P-Value}\\ \hline 1&11.4051&0.067611&168.688&1.201777144099837\times 10^{-128}\\ \xi&4.85103&0.0218164&222.357&4.3433787030416533\times 10^{-141}\\ \log(\xi)&-20.5553&0.109809&-187.192&2.4775697694919745\times 10^{-133}\end{array} \tag{145}\]
This data fit is shown in Fig. 4.
Our random walk with unbounded step size differs from an ordinary fractal curve. The fractal dimension does not properly describe this random object as the distance grows by a more complex law than a pure power of the number of steps.
Another interesting distribution is the enstrophy density \(\kappa\) defined in (121).
The CDF is shown in Fig.5. The tail is compatible with \(\kappa^{-0.936266}\) decay, corresponding to the \(\kappa^{-1.936266}\) decay of the PDF. The mean value and all higher moments diverge, leading to anomalous dissipation.
Figure 4: Fitting the random walk distance as a power of number \(N\) of steps times a power of \(\log N\).
Figure 3: The logarithmic plot of the CDF for the Euclidean length \(s\) of each step of our complex random walk. The tail of the CDF decays as \(s^{-2}\), indicating a probability distribution with a power tail \(\propto s^{-3}\) in the PDF.
The statistical table for the parameters of this fit
\[\begin{array}{l|l l l}&\text{Estimate}&\text{Standard Error}&\text{t-Statistic}& \text{P-Value}\\ \hline 1&17.1851&0.00330287&5203.09&0.\\ \log(\kappa)&-0.936266&0.000320989&-2916.81&0.\end{array} \tag{146}\]
The computation of the Wilson loop and related correlation functions of vorticity needs an ensemble of closed fractal loops with various sets of random matrices.
The closure condition for the loop would require some computational effort because the probability of a random curve with fractal dimension \(d_{f}\sim 1\) returning to its initial point goes to zero as the number of steps increases.
An alternative approach is to start with a large closed loop \(\vec{F}_{k};\vec{F}_{N}=\vec{F}_{0}\) and randomize it point by point while preserving its closure.
This approach would replace an SDE (99a) with a Monte-Carlo process in a closed polygon space. Each step would correspond to a small shift of a few subsequent vertices \(\vec{F}_{k+1}\ldots\vec{F}_{k+L}\) of the polygon preserving the sequence's first and last vertex \(\vec{F}_{k},\vec{F}_{k+L+1}\). This small shift must also preserve the recurrent equations (52) involving this sequence.
The first approximation to this shift would be our solution of the linearized equations in Appendix B. Then, a Newton iteration will finalize this shift to fulfill the quadratic relations (51).
These extra layers of computational complexity would require a supercomputer, which we plan to do later.
## 5 Discussion
### The Duality of Turbulence
We have presented an analytical solution of the Navier-Stokes loop equations for the Wilson loop in decaying Turbulence as a functional of the shape and size of the loop in arbitrary dimension \(d>2\).
The solution expresses the probability distribution and expected value for the Wilson loop at any given moment \(t\) in terms of a nonlinear SDE for the dual loop in complex momentum space as a function of auxiliary time \(\tau\). The loop is approximated as a polygon with \(N\to\infty\) sides.
Our solution also depends on the arbitrary dimensionless positive constant \(\gamma\), corresponding to the frequency of the Fourier transforms from the Wilson loop to the PDF of
Figure 5: The logarithmic plot of the CDF for the \(\kappa\) density. The tail of the CDF decays as \(\kappa^{-0.936266}\), compatible with the \(\kappa^{-1.936266}\) power tail for the PDF. The mean value of enstrophy and all higher moments diverge.
circulation. This parameter explicitly enters our reduced loop equations for a momentum-space fractal curve.
Compared to the original Navier-Stokes equation, this is a reduction of the dimension of space from \(d\) to \(1\). This SDE is straightforward to simulate by a Monte Carlo method.
The equivalence of a strong coupling phase of the fluctuating vector field to quantum geometry is a well-known phenomenon in gauge theory (the ADS/CFT duality), ringing a bell to the modern theoretical physicist.
In our case, this is a much simpler quantum geometry: a fractal curve in complex space.
An expert in the traditional approach to Turbulence may wonder why the Loop equation's solutions have any relation to the velocity field's statistics in a decaying turbulent flow.
Such questions were raised and answered in the last few decades in the gauge theories, including QCD[6; 8; 9; 10] where the loop equations were derived first [4; 5].
Extra complications in the gauge theory are the short-distance singularities related to the infinite number of fluctuating degrees of freedom in quantum field theory. The Wilson loop functionals in coordinate space are singular in the gauge field theory and cannot be multiplicatively renormalized.
Fortunately, there is no short-distance divergence in the Navier-Stokes equations nor the Navier-Stokes loop dynamics. The Euler equations represent the singular limit, which, as we argued, should be resolved using singular topological solitons regularized by the Burgers vortex.
In the present theory, we keep viscosity constant and do not encounter any short-distance singularities. The anomalous dissipation is achieved in our solution via a completely different mechanism.
The loop equation describes the gauge invariant sector of the gauge field theory. Therefore, the gauge degrees of freedom are lost in the loop functional. However, the gauge-invariant correlations of the field strength are recoverable from the solutions of the loop equation[4; 5].
### Stokes-type functionals and vorticity correlations
There is no gauge invariance regarding the velocity field in fluid dynamics (though there is such invariance in the Clebsch variables [2]). The longitudinal, i.e., a potential part of the velocity, has a physical meaning - it is responsible for pressure and energy pumping. This part is lost in the loop functional, but is recoverable from the rotational part (the vorticity) using the Biot-Savart integral.
In the Fourier space, the correlation functions of the velocity field are algebraically related to those of vorticity, \(\vec{v}_{k}=\frac{\imath\vec{k}\times\vec{\omega}_{k}}{k^{2}}\). Thus, the general solution for the Wilson loop functional \(\Psi[\gamma,C]\) allows computing both vorticity and velocity correlation functions.
The solution of the loop equation with finite area derivative, satisfying Bianchi constraint, belongs to the so-called Stokes-type functionals [4], the same as the Wilson loop for Gauge theory and fluid dynamics.
As we discussed in detail in [2; 4; 5], any Stokes-type functional \(\Psi[\gamma,C]\) satisfying boundary condition at shrunk loop \(\Psi[0]=1\), and solving the loop equation can be iterated in the nonlinear term in the Navier-Stokes equations (which would apply at large viscosity).
The resulting expansion in inverse powers of viscosity (weak Turbulence) exactly coincides with the ordinary perturbation expansion of the Navier-Stokes equations for the velocity field, averaged over the distribution of initial data or boundary conditions at infinity.
We have demonstrated in [1; 2] (and also here, in Section 1.3) how the velocity distribution for the random uniform vorticity in the fluid was reproduced by a singular momentum loop \(\vec{P}(\theta)\).
The solution for \(\vec{P}(\theta)\) in this special fixed point of the loop equation was random complex and had slowly decreasing Fourier coefficients, leading to a discontinuity \(\operatorname{sign}(\theta-\theta^{\prime})\) in a pair correlation function (38). The corresponding Wilson loop was equal to the Stokes-type functional (30).
Our general Ansatz (14) satisfies the loop equation and the boundary condition \(\Psi[C=0]=1\). It has a finite area derivative, which obeys the Bianchi constraint, making it a Stokes-type functional.
The exact solution for \(\vec{P}(\theta)\) in decaying Turbulence, which we have found in this paper, leads to the Stokes functional \(\Psi[\gamma,C]\) satisfying the boundary value \(\Psi[0]=1\) at the shrunk loop.
Therefore, it represents a statistical distribution in a turbulent Navier-Stokes flow, corresponding to the degenerate fixed point of the Hopf equation for velocity circulation. It sums up all the Wyld diagrams in the limit of vanishing random forces plus nonperturbative effects, which are missed in the Wyld functional integral. Whether this exact solution is realized in Nature remains to be seen.
### Random walk around the Turbulent manifold
The fixed point we have found is infinitely more complex than the randomly rotated fluid; our curve \(\vec{P}(\theta)\) has a discontinuity at every \(\theta\), corresponding to a distributed random vorticity.
This solution is described by a fractal curve in complex \(d\) dimensional space, a limit of a random walk with nonlinear algebraic relations between the previous position \(\vec{F}_{k}\) and the next one \(\vec{F}_{k+1}\). These relations are degenerate: each step \(\vec{q}_{k}=\vec{F}_{k+1}-\vec{F}_{k}\) is characterized by an arbitrary element \(\hat{O}_{k}\in\left(\nicefrac{{O(d)}}{{O(d-3)}}\right)\) and an arbitrary element \(\vec{w}_{k}\in\left(\nicefrac{{S^{d-3}}}{{\mathbb{Z}}^{2}}\right)\). This step also depends upon the previous position \(\vec{F}_{k}\), making this process a Markov chain.
The periodicity condition \(\vec{F}_{N}=\vec{F}_{0}\) provides a nonlinear equation for an initial position \(\vec{F}_{0}\) as a function of the above free parameters \(\hat{O}_{k},\vec{w}_{k}\).
This periodicity condition presents a hard problem, particularly in the limit \(N\to\infty\), when the probability of our random walk returning to the initial point after \(N\) steps rapidly diminishes with \(N\).
We found a way around this problem by utilizing an exact periodic solution (72) to the momentum loop equations (51). This analytical solution for arbitrary \(N\) is equivalent to a periodic Ising chain or a random walk on a circle with constant angular steps or random signs. This sequence of \(\vec{F}_{k}\) can serve as initial data for the SDE, which preserves the periodicity. In Appendix B, we constructed this SDE for \(d=3\), leaving the generalizations to mathematicians.
This SDE describes the Brownian motion of the rotation matrices \(\hat{O}_{k}\in SO(3)\) in our canonical representation (62) of the solution to the discrete loop equations (52). Each matrix moves independently, while the remaining parameters \(\{\alpha_{1},\alpha_{2},\beta\}\) move around de Sitter space \(dS_{2}\) to satisfy the loop equation (52).
The closure condition further restricts the set of \(N\) infinitesimal rotations \(\delta\vec{q}_{k}=\delta\vec{\theta}_{k}\times\vec{q}_{k}\); there are three linear relations between the \(N\) vector parameters \(\delta\vec{\theta}_{k}\) of these rotations. We found the projection matrix required to project the whole array of vector rotations \(\delta\vec{\theta}_{k}\) onto the quotient space, satisfying the closure condition.
After this projection, we obtain the motion in the quotient space, where the closure condition is satisfied at every step. We have found the tangent space to our discrete loop equations and factored out the normal directions (i.e., the null space of the linearized equations), as is done for Brownian motion on a sphere.
Presumably, this SDE uniformly covers our fixed manifold \(\mathbb{T}_{3}(N)\) for arbitrary \(N\). The limit \(N\to\infty\) presents a computational challenge, and we are planning to address this challenge in the next publication using a supercomputer.
### Simplified version of numerical simulation
We simulated the open random walk (without the closure condition) in three dimensions and studied its statistical properties. We have chosen \(\gamma=1\) at this early stage of our studies; later, we investigate the \(\gamma\) dependence.
The distribution of lengths of steps in Euclidean six-dimensional space \(\mathbf{Re}\ \vec{F},\mathbf{Im}\ \vec{F}\) has a long tail, with CDF \(\propto x^{-2}\) (i.e., PDF \(\propto x^{-3}\)).
The fractal dimension is not an adequate characteristic for a random walk with such an intermittent step size, unbounded from above. The linear log-log fit as in (141) yields \(d_{f}\approx 1.02\), but this fit is imperfect with our large statistics.
As for the distribution of an enstrophy density, it has a power tail \(x^{-1.9}\) corresponding to an infinite mean value and all higher moments. This infinity is how anomalous dissipation manifests in our solution.
These numerical simulations must be repeated on a supercomputer with better statistics and more steps. There are many things to do next with this conjectured solution to the decaying turbulence problem; the first is to look for unnoticed inconsistencies.
One important step is yet to be made: the MC simulation of the SDE (99a). Let us assume that the qualitative properties and fractal dimensions we have found for the open fractal curves will stay the same or at least close.
### Preliminary comparison with experiments
As a first test of this hypothesis, let us compare it with various experimental data and those from DNS [35].
There is no agreement between these data; they vary in Reynolds number and have other differences related to the experimental setup. No value \(n\) for the decay power \(t^{-n}\) would fit all that data. However, a consensus seems to be around \(n\approx 1.2-1.4\), which means faster decay than we have.
We are skeptical about these data. As we recently learned [36], there is a regime change at large Reynolds numbers; the numbers achievable in modern DNS may belong to such a transitional regime.
Besides, fitting powers is not a reliable method of deriving physical laws.
For example, we took the formula \(1/(t-0.5)\), added random noise between \((-0.1,0.1)\), and fitted this data to \(bt^{-n}\). The best fit produced a fake power \(n\approx 1.43\) and a fake coefficient \(b\approx 1.88\) in front (see Fig. 6).
Instead, one should compare a hypothetical theory with a null hypothesis by estimating the log-likelihood of both fits. If the new theory is more likely as an explanation of the data, one may temporarily accept it until a better theory or better data appear.
A good history lesson is the fitting of the power \(n\) in Newton's gravity law to explain the astronomical data for the Mercury perihelion before General Relativity. A small correction to \(n=1\) "explained" the data, but this was useless without a theory.
Presumably, our fixed point corresponds to a true infinite Reynolds limit, as it is completely universal and does not depend on the Reynolds scales.
If you assume no hidden scales are left, our \(E\propto\nu/t\) law follows from dimensional analysis. Observed or simulated data with \(n>1\) all have the powers of some other dimensional parameters related to the Reynolds number. They rely on (multifractal versions of) K41 spectra and other intermediate turbulent phenomena.
We have an anomalous dissipation rate: the mean value of the vorticity square diverges, compensating for the viscosity factor in the energy decay in extreme turbulent limit.
This mechanism of anomalous dissipation differs from the one we studied in the Kelvinon [2; 18]. In those fixed points, the viscosity canceled in the dissipation rate due to the singular vorticity configurations with the thin vortex line resolved as a core of a Burgers vortex.
Here, in the dual theory of fractal momentum loop, the large fluctuations of this momentum loop would lead to the divergent expectation value of the enstrophy.
### Conclusion
Our solution is universal, rotational, and translational invariant. It has the expected properties of extreme isotropic Turbulence. Is it THE solution? Time will tell.
###### Acknowledgements.
I benefited from discussions of this theory with Sasha Polyakov, K.R. Sreenivasan, Greg Eyink, Luca Moriconi, Vladimir Kazakov, and Kartik Iyer. This research was supported by a Simons Foundation award ID 686282 at NYU Abu Dhabi.
## Data Availability
We generated new data for the nonlinear complex random walk and the enstrophy distribution using the _Mathematica_ code. This code, the data, and the three-dimensional plots are publicly available in Wolfram Cloud [32; 37; 38; 39]. These notebooks also present analytical computations needed to verify our symmetric solution of the loop equation and linearized equations used in the SDE.
2306.05702
Shuntaro Tanaka, Hidetoshi Matsui
2023-06-09T06:46:07Z
http://arxiv.org/abs/2306.05702v1
# Variable screening using factor analysis for high-dimensional data with multicollinearity
###### Abstract
Screening methods are useful tools for variable selection in regression analysis when the number of predictors is much larger than the sample size. Factor analysis is used to eliminate multicollinearity among predictors, which improves the variable selection performance. We propose a new method, called Truncated Preconditioned Profiled Independence Screening (TPPIS), that better selects the number of factors to eliminate multicollinearity. The proposed method improves the variable selection performance by truncating unnecessary parts from the information obtained by factor analysis. We confirmed the superior performance of the proposed method in variable selection through analysis using simulation data and real datasets.
Variable selection is also used in clinical models that predict possible future diseases [3] and in near-infrared spectroscopy analysis to measure food compositions [4].
It is difficult to apply the classical variable selection techniques such as stepwise regression to high-dimensional data. Methods using \(L_{1}\)-type regularization also fail to select variables for ultra-high dimensional data. While more recently, Sure Independence Screening (SIS) was proposed to greatly reduce the dimension of the predictors and select important variables [5]. SIS selects predictors in the order of their Pearson's correlations with the response in linear regression models. Although this is a simple technique, the probability that the set of variables selected by SIS contains a set of truly important variables converges to 1 as the sample size increases. Several extensions of SIS have been proposed. [6] extended the idea of SIS to generalized linear models, and [7] extended it to high-dimensional additive models. In addition, there are screening methods that use non-linear correlations instead of Pearson correlations. [8] proposed a method that is robust to outliers that uses Kendall's rank correlation coefficient. [9] used distance correlation, and [10] used the Hilbert-Schmidt Independence Criterion (HSIC). With these criteria, we can apply the screening methods without assuming any distribution for the variables. [11] also proposed a method for censored data. The development of screening methods was summarized in [12].
However, most of these screening methods have the problem that their performance degrades in the presence of multicollinearity. To solve this problem, [13] proposed a method called High-dimensional Ordinary Least squares Projection (HOLP), which accommodates highly multicollinear predictors by selecting variables in the order of their relations estimated by high-dimensional ordinary least squares. Factor Profiled Sure Independence Screening (FPSIS) proposed by [14] transforms the data for predictors by applying factor analysis, which reduces multicollinearity. Then we can select appropriate variables by applying SIS to the transformed data that correspond to unique factors. Preconditioned Profiled Independence Screening (PPIS) proposed by [15] improved the FPSIS transformation process to better reduce multicollinearity. PPIS eliminates unnecessary information from the predictors by using all of the common factors obtained from applying factor analysis to the predictors, whereas FPSIS uses only a subset of common factors.
However, PPIS seems to eliminate more information about predictors than necessary, which can degrade variable selection performance. To overcome this issue, we propose a method to improve the effectiveness of removing multicollinearity by modifying PPIS to select variables more accurately. We truncate some of the common factors eliminated in the PPIS transformation process to prevent excessive loss of information for variable screening. We call our proposed method Truncated PPIS (TPPIS). The reason why TPPIS improves the variable selection performance can be explained by a model based on the distribution of eigenvalues. The truncation part is determined objectively using the BIC-type criterion proposed by [14]. SIS is then applied to the data whose multicollinearity has been removed by the transformation process. Through analysis of simulated and real data, we show that TPPIS can transform data appropriately.
The remainder of this paper is organized as follows. Section 2 describes existing screening methods, and then the proposed method is described in Section 3. In Section 4, we confirm the performance of the screening method through a simulated data analysis, and then report the results of real data analysis in Section 5. Section 6 summarizes the main points.
## 2 Screening methods utilizing factor analysis
Suppose we have \(n\) sets of observations \(\{(y_{i},x_{i}),i=1,\ldots,n\}\), where \(y_{i}\in\mathbb{R}\) is a response and \(\mathbf{x}_{i}=(x_{i1},\ldots,x_{ip})^{T}\in\mathbb{R}^{p}\) is a vector of predictors. In particular, we assume that \(n<p\) and \(\mathbf{x}_{i}\) is standardized and \(y_{i}\) is centered. The relationship between \(y_{i}\) and \(\mathbf{x}_{i}\) is assumed to be represented by the following linear model.
\[y_{i}=\mathbf{x}_{i}^{T}\mathbf{\beta}+\varepsilon_{i},\]
where \(\mathbf{\beta}=(\beta_{1},\ldots,\beta_{p})^{T}\in\mathbb{R}^{p}\) are regression coefficients and \(\varepsilon_{i}\in\mathbb{R}\) is independent and identically distributed (i.i.d.) random noise following \(N(0,\sigma^{2})\). Let \(\mathbf{y}=(y_{1},\ldots,y_{n})^{T}\in\mathbb{R}^{n}\), \(X=(\mathbf{x}_{1},\ldots,\mathbf{x}_{n})^{T}\in\mathbb{R}^{n\times p}\), and \(\mathbf{\varepsilon}=(\varepsilon_{1},\ldots,\varepsilon_{n})^{T}\in\mathbb{R}^ {n}\). Then the above linear model can be expressed as
\[\mathbf{y}=X\mathbf{\beta}+\mathbf{\varepsilon}. \tag{1}\]
Let \(\mathbf{\omega}=(\omega_{1},\ldots,\omega_{p})^{T}=X^{T}\mathbf{y}\in\mathbb{R}^{p}\) and define the importance of the \(j\)-th variable as \(|\omega_{j}|\) (\(1\leq j\leq p\)). SIS excludes predictors that are considered to be unnecessary by selecting the \(j\)-th variables in order of increasing \(|\omega_{j}|\). However, SIS does not work well in the presence of strong multicollinearity. For example, \(|\omega_{j}|\) becomes smaller even for important variables or \(|\omega_{j}|\) becomes larger even for unimportant variables.
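For concreteness, here is a minimal sketch of this ranking step (with \(X\) standardized and \(\mathbf{y}\) centered, as assumed above); the function name and signature are our own illustration, not the authors' code.

```python
import numpy as np

def sis_screen(X, y, k):
    """Plain SIS: rank predictors by |omega_j| with omega = X^T y, keep top k."""
    omega = X.T @ y
    return np.argsort(-np.abs(omega))[:k]   # indices of the k largest |omega_j|
```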
In FPSIS [14], SIS is applied after a transformation process to remove multicollinearity by applying factor analysis. Let \(Z\in\mathbb{R}^{n\times d}\) be a matrix of vectors of \(d\) (\(<n\)) common factors of \(X\), \(B\in\mathbb{R}^{p\times d}\) be factor loadings, and \(\tilde{X}\in\mathbb{R}^{n\times p}\) be a matrix composed of unique factors. Then we can express their relationships as \(X=ZB^{T}+\tilde{X}\), where the columns of \(\tilde{X}\) are independent each other. Although \(Z\) is not uniquely determined due to the rotation invariance, a solution for \(Z\) can be obtained by singular value decomposition.
Let \(\mu_{1},\ldots,\mu_{n}\) be \(n\) singular values of \(X\), where \(\mu_{1}\geq\ldots\geq\mu_{n}>0\), since we assume \(n<p\) here. The singular value decomposition of \(X\) gives
\[X=UDV^{T}, \tag{2}\]
where \(U=(\mathbf{u}_{1},\ldots,\mathbf{u}_{n})\in\mathbb{R}^{n\times n},\mathbf{u}_{l}=(u_{1l}, \ldots,u_{nl})^{T}\in\mathbb{R}^{n},D=\text{diag}(\mu_{1},\ldots,\mu_{n})\in \mathbb{R}^{n\times n},V=(\mathbf{v}_{1},\ldots,\mathbf{v}_{n})\in\mathbb{R}^{p\times n },\mathbf{v}_{l}=(v_{1l},\ldots,v_{pl})^{T}\in\mathbb{R}^{p}\) (\(l=1,\ldots,n\)), and \(U^{T}U=V^{T}V=I_{n}\). Let \(U_{1}=(\mathbf{u}_{1},\ldots,\mathbf{u}_{d})\in\mathbb{R}^{n\times d}\) denote the first \(d\) columns of the matrix \(U\) in (2). Then \(U_{1}\) can be regarded as one of the solutions of \(Z\). [14] decided the value of \(d\) by the following equation using the ratio of the singular values of \(X\):
\[d=\underset{1\leq l\leq n-1}{\text{argmax}}\frac{\mu_{l}^{2}}{\mu_{l+1}^{2}}. \tag{3}\]
The projection matrix onto the orthogonal complement of the linear subspace spanned by the column vectors of the matrix \(U_{1}\) is given by
\[Q_{F}=I_{n}-U_{1}(U_{1}^{T}U_{1})^{-1}U_{1}^{T}. \tag{4}\]
Left-multiplying both sides of (1) by \(Q_{F}\) gives
\[Q_{F}\mathbf{y}=Q_{F}X\mathbf{\beta}+Q_{F}\mathbf{\varepsilon}. \tag{5}\]
Let \(\hat{\mathbf{y}}=(\hat{y}_{1},\ldots,\hat{y}_{n})^{T}=Q_{F}\mathbf{y}\) and \(\hat{X}=(\hat{\mathbf{x}}_{1},\ldots,\hat{\mathbf{x}}_{n})=Q_{F}X\). \(\hat{X}\) is an approximation of the unique factors \(\tilde{X}\). The use of \(\hat{X}\) instead of \(X\) enables us to eliminate multicollinearity and to select appropriate variables. FPSIS calculates \(\mathbf{\omega}=(\omega_{1},\ldots,\omega_{p})^{T}=\hat{X}^{T}\hat{\mathbf{y}}\in \mathbb{R}^{p}\), and then selects variables where \(|\omega_{j}|\) is large in order.
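A compact sketch of the FPSIS steps (2)-(5) is given below; this is our own NumPy illustration (function name and return values are ours), not the authors' implementation.

```python
import numpy as np

def fpsis_screen(X, y, k):
    """Sketch of FPSIS: profile out d common factors via (2)-(4), then rank."""
    n = X.shape[0]
    U, mu, Vt = np.linalg.svd(X, full_matrices=False)   # X = U D V^T, eq. (2)
    d = int(np.argmax(mu[:-1] ** 2 / mu[1:] ** 2)) + 1  # eq. (3)
    U1 = U[:, :d]
    QF = np.eye(n) - U1 @ U1.T                          # eq. (4); U1^T U1 = I
    omega = (QF @ X).T @ (QF @ y)                       # profiled data, eq. (5)
    return np.argsort(-np.abs(omega))[:k], d
```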
PPIS [15] improved the FPSIS transformation process. First, after applying SVD to \(X\) as in (2), they divided each of the matrices \(U,D,V\) into two parts at the \(d\)-th column: \(U_{1}=(\mathbf{u}_{1},\ldots,\mathbf{u}_{d})\in\mathbb{R}^{n\times d}\), \(U_{2}=(\mathbf{u}_{d+1},\ldots\mathbf{u}_{n})\in\mathbb{R}^{n\times(n-d)}\), \(D_{1}=\mathrm{diag}(\mu_{1},\ldots,\mu_{d})\in\mathbb{R}^{d\times d}\), \(D_{2}=\mathrm{diag}(\mu_{d+1},\ldots,\mu_{n})\in\mathbb{R}^{(n-d)\times(n-d)}\), \(V_{1}=(\mathbf{v}_{1},\ldots,\mathbf{v}_{d})\in\mathbb{R}^{p\times d}\), \(V_{2}=(\mathbf{v}_{d+1},\ldots\mathbf{v}_{n})\in\mathbb{R}^{p\times(n-d)}\). Let
\[Q_{P}=U_{2}D_{2}^{-1}U_{2}^{T}\left\{I_{n}-U_{1}(U_{1}^{T}U_{1})^{-1}U_{1}^{T}\right\} \tag{6}\]
and replace \(Q_{F}\) with \(Q_{P}\) in (5). This is based on the Puffer transformation [16]. PPIS calculates \(\mathbf{\omega}=\hat{X}^{T}\hat{\mathbf{y}}\) as in FPSIS, where \(\hat{\mathbf{y}}=Q_{P}\mathbf{y}\) and \(\hat{X}=Q_{P}X\), and then selects variables in order of the size of \(|\omega_{j}|\). The number of dimensions \(d\) of \(U_{1}\) is determined by (3) using the ratio of the singular values of \(X\).
However, if the magnitudes of the singular values after the \(d\)-th are not sufficiently small compared to those before the \(d\)-th, \(\hat{X}\) is still multicollinear when we simply remove from \(X\) the effects that are related to the first \(d\) common factors of \(X\). Therefore, by removing the influence of the \(n\) common factors of \(X\) including the information after the \(d\)-th factor that is not used in FPSIS, \(\hat{X}\) becomes closer to the unique factors \(\tilde{X}\), which leads to the elimination of more multicollinearity.
## 3 Proposed method
### Tppis
We propose selecting the number of factors to eliminate more multicollinearity by modifying the transformation process in PPIS. Let \(\alpha\) be a tuning parameter that satisfies \(\alpha\in(0,1]\) and \(d<[n\alpha]\). After applying SVD to \(X\), as in (2), we divide \(U,D,V\) into three parts at the \(d\)-th column and the \([n\alpha]\)-th column: \(U=(U_{1},U_{2a},U_{2b})\), \(D=\mathrm{diag}(\mu_{1},\ldots,\mu_{n})\), \(V=(V_{1},V_{2a},V_{2b})\), \(U_{1}=(\mathbf{u}_{1},\ldots,\mathbf{u}_{d})\), \(U_{2a}=(\mathbf{u}_{d+1},\ldots,\mathbf{u}_{[n\alpha]})\), \(U_{2b}=(\mathbf{u}_{[n\alpha]+1},\ldots,\mathbf{u}_{n})\), \(D_{1}=\mathrm{diag}(\mu_{1},\ldots,\mu_{d})\), \(D_{2a}=\mathrm{diag}(\mu_{d+1},\ldots,\mu_{[n\alpha]})\), \(D_{2b}=\mathrm{diag}(\mu_{[n\alpha]+1},\ldots,\mu_{n})\), \(V_{1}=(\mathbf{v}_{1},\ldots,\mathbf{v}_{d})\), \(V_{2a}=(\mathbf{v}_{d+1},\ldots,\mathbf{v}_{[n\alpha]})\), and \(V_{2b}=(\mathbf{v}_{[n\alpha]+1},\ldots,\mathbf{v}_{n})\). Then we define the following projection matrix
\[Q_{T}=U_{2a}D_{2a}^{-1}U_{2a}^{T}\left\{I_{n}-U_{1}(U_{1}^{T}U_{1})^{-1}U_{1}^{T}\right\}. \tag{7}\]
Using \(\hat{X}=Q_{T}X\) rather than \(Q_{P}X\), we can eliminate multicollinearity more accurately since \(Q_{T}\) leaves the information that corresponds to the unique factors by truncating \(U_{2b}\) and \(D_{2b}\) from \(U_{2}\) and \(D_{2}\), respectively. TPPIS calculates \(\hat{\mathbf{y}},\hat{X}\), and \(\mathbf{\omega}\) using the equation that replaces \(Q_{F}\) with \(Q_{T}\) in (5), and then selects variables where \(|\omega_{j}|\) is large in order.
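A minimal sketch of the proposed transformation (7) and the resulting importance scores is shown below (assuming \(d<[n\alpha]\)); the function name and arguments are our own illustration.

```python
import numpy as np

def tppis_scores(X, y, d, alpha):
    """Sketch of the TPPIS transformation (7) and importance scores |omega_j|."""
    n = X.shape[0]
    U, mu, Vt = np.linalg.svd(X, full_matrices=False)
    na = int(n * alpha)
    U1, U2a = U[:, :d], U[:, d:na]
    QT = U2a @ np.diag(1.0 / mu[d:na]) @ U2a.T @ (np.eye(n) - U1 @ U1.T)  # eq. (7)
    return np.abs((QT @ X).T @ (QT @ y))       # rank variables by these scores
```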
Denote a set of \(k\) selected variables as
\[M_{k}=\{1\leq j\leq p:|\omega_{j}|\text{ is among the first $k$ largest of all }\}\]
and denote predictors whose columns are composed of \(M_{k}\) as \(X(M_{k})\in\mathbb{R}^{n\times k}\). We predict the response using \(\mathbf{y}=X(M_{k})\mathbf{\hat{\beta}}(M_{k})\), where \(\mathbf{\hat{\beta}}(M_{k})\) is the least squares estimator of the regression coefficient of \(\hat{X}(M_{k})\), that is,
\[\hat{\mathbf{\beta}}(M_{k})=\left\{\hat{X}(M_{k})^{T}\hat{X}(M_{k})\right\}^{-1} \hat{X}(M_{k})^{T}\hat{\mathbf{y}}. \tag{8}\]
### Reasons why TPPIS improves the effectiveness of removing multicollinearity
We discuss the reason why TPPIS improves the effectiveness of removing multicollinearity and the variable selection performance. [15] indicates that the transformation process using \(Q_{P}\) of (6) works well for data that follow a highly multicollinear spike model. The spike model has the property that some eigenvalues of the variance-covariance matrix are larger than others. Suppose that the eigenvalues of a variance-covariance matrix \(X\), denoted by \(\Sigma_{p}\), can be divided into three size categories: large, medium, and small. Among \(p\) eigenvalues, let \(d\) be the number of large eigenvalues, \(m\) be the number of medium eigenvalues, and \(p-d-m\) be the number of small eigenvalues. Then the spike model assumes that \(\Sigma_{p}\) is represented as
\[\Sigma_{p}=\sum_{r=1}^{d}(\lambda_{r}+\sigma_{0}^{2})\mathbf{u}_{r}^{*}\mathbf{u}_{r}^{*T}+\sum_{s=1}^{m}(\omega_{s}+\sigma_{0}^{2})\mathbf{u}_{d+s}^{*}\mathbf{u}_{d+s}^{*T}+\sum_{t=1}^{p-d-m}\sigma_{0}^{2}\mathbf{u}_{d+m+t}^{*}\mathbf{u}_{d+m+t}^{*T},\]
where \(\lambda_{1}\geq\ldots\geq\lambda_{d}>\omega_{1}\geq\ldots\geq\omega_{m}>0\), \(\sigma_{0}^{2}\) is a positive constant, and \(\{\mathbf{u}_{1}^{*},\ldots,\mathbf{u}_{p}^{*}\}\) constitute an orthonormal basis of \(\mathbb{R}^{p}\). In this case, \(X\) can be expressed as
\[X=\sum_{r=1}^{d}\sqrt{\lambda_{r}}\mathbf{z}_{r}\mathbf{u}_{r}^{*T}+\sum_{s=1}^{m} \sqrt{\omega_{s}}\mathbf{z}_{d+s}\mathbf{u}_{d+s}^{*}{}^{T}+\sigma_{0}^{2}\Lambda, \tag{9}\]
where \(\mathbf{z}_{w}\in\mathbb{R}^{n}\) (\(w=1,\ldots,d+m\)) are i.i.d. \(N(\mathbf{0},I_{n})\) vectors and \(\Lambda\in\mathbb{R}^{n\times p}\) has i.i.d. \(N(0,1)\) elements. The vectors \(\mathbf{z}_{r}\) and \(\mathbf{u}_{r}^{*}\) respectively represent a common factor and a factor loading of \(X\), and \(\sigma_{0}^{2}\Lambda\) represents a unique factor of \(X\). Let \(X_{1},X_{2},X_{3}\) be the first, second, and third terms of (9), respectively; that is, we can express (9) as \(X=X_{1}+X_{2}+X_{3}\).
Since \(Q_{F}\) in (4) is the projection matrix onto the orthogonal complement of the linear subspace spanned by the column vector \(U_{1}\in\mathbb{R}^{n\times d}\), \(Q_{F}\) can remove the effect of \(d\) common factors. That is,
\[Q_{F}X =Q_{F}(X_{1}+X_{2}+X_{3})\] \[\approx X_{2}+X_{3}.\]
The PPIS transformation process using \(Q_{P}\) in (6) can remove the effects of \(X_{1}\) and \(X_{2}\). However, since \(U_{2}\) and \(D_{2}\) in \(Q_{P}\) use all column vectors after the \(d\)-th column, some information from the unique factor \(X_{3}\) that should have been retained appears to be removed as well. \(Q_{T}\) in (7), which truncates \(U_{2b}\) from \(U_{2}\) and \(D_{2b}\) from \(D_{2}\), can improve variable selection performance by preserving the unique factors more accurately.
### Selection of tuning parameter
The performance of the proposed method strongly depends on the dimension \(d\) of \(U_{1}\), the tuning parameter \(\alpha\), and the number \(k\) of selected variables. We have to decide appropriate values for them. To do this, we use the BIC-type criterion adapted to high-dimensional data proposed by [14]. Using \(\hat{\boldsymbol{\beta}}(M_{k})\) in (8), the BIC-type criterion is given by
\[\text{BIC}(M_{k})=\log\left\{\left\|\boldsymbol{y}-X(M_{k})\hat{\boldsymbol{\beta}}(M_{k})\right\|^{2}\right\}+(n^{-1}\log p)\,|M_{k}|\log n. \tag{10}\]
We use a grid search to find the optimal \(d\), \(\alpha\), and \(k\), selecting the values that make the BIC smallest as the optimal parameters.
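A minimal sketch of this grid search, assuming the `tppis_transform` helper sketched above, is given below; it fits (8) on the transformed data and evaluates (10) on the original data. The helper names are ours.

```python
import numpy as np
from itertools import product

def bic(y, X_sel, beta_hat, n, p):
    """BIC-type criterion of (10)."""
    rss = np.sum((y - X_sel @ beta_hat) ** 2)
    return np.log(rss) + (np.log(p) / n) * X_sel.shape[1] * np.log(n)

def tppis_select(X, y, d_grid, alpha_grid, k_max):
    """Grid search over (d, alpha, k), returning the combination with the smallest BIC."""
    n, p = X.shape
    best_score, best_model = np.inf, None
    for d, a in product(d_grid, alpha_grid):
        if d >= int(n * a):                      # TPPIS requires d < [n * alpha]
            continue
        omega, X_hat, y_hat = tppis_transform(X, y, d, a)
        order = np.argsort(-np.abs(omega))
        for k in range(1, k_max + 1):
            M = order[:k]
            beta, *_ = np.linalg.lstsq(X_hat[:, M], y_hat, rcond=None)   # Eq. (8)
            score = bic(y, X[:, M], beta, n, p)                          # Eq. (10)
            if score < best_score:
                best_score, best_model = score, (d, a, list(M))
    return best_score, best_model
```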
## 4 Simulation examples
To investigate the effectiveness of the proposed TPPIS method, we compare TPPIS with the existing methods. After calculating the importance of each predictor on the response for each method, the number of variables is determined using the BIC-type criterion (10), and then the variable selection performance is verified.
### Settings for simulated data
We consider four examples. The sample size is set to \(n=100\) or \(300\) and the number of predictors to \(p=1000\) in every example. For the TPPIS parameter \(d\), we examined six patterns: \(0.2n,0.4n,0.6n,0.8n,1.0n\), and the value given by (3). In addition, we examined five values of \(\alpha\), ranging from \(0.2\) to \(1.0\) in increments of \(0.2\). For the number of selected variables \(k\), we examined all values from \(1\) to \(p\). We then select the \(d\), \(\alpha\), and \(k\) giving the smallest BIC as the optimal parameters.
* Example 1 For each \(i\) in \(1\leq i\leq n\), \[y_{i}=5x_{i1}+5x_{i2}+5x_{i3}-15x_{i4}+\varepsilon_{i},\] where \(\varepsilon_{i}\) are i.i.d. errors following \(N(0,1)\), \(\boldsymbol{x}_{i}=(x_{i1},\ldots,x_{ip})^{T}\) are i.i.d. predictors following \(N(\mathbf{0},\Sigma)\) and the variance-covariance matrix \(\Sigma=(\Sigma_{jk})_{j,k=1}^{p}\) satisfies \[\Sigma_{jj} =1,\] \[\Sigma_{jk} =\varphi\ (j\neq k,j\neq 4,k\neq 4),\] \[\Sigma_{4,k} =\Sigma_{j,4}=\sqrt{\varphi}\ (j,k\neq 4).\] We investigated three values for the parameter \(\varphi\): \(0.5\), \(0.7\), and \(0.9\).
* Example 2 For each \(i\) in \(1\leq i\leq n\), \[y_{i}=5x_{i1}+5x_{i2}+5x_{i3}-15x_{i4}+5x_{i5}+\varepsilon_{i}.\]
The setting is similar to that in Example 1, but the fifth variable is added. In addition, the variance-covariance matrix \(\Sigma\) of the predictor satisfies \(\Sigma_{5,j}=\Sigma_{j,5}=0\) (\(j\neq 5\)).
* Example 3 For each \(i\) in \(1\leq i\leq n\), \[y_{i}=5x_{i1}+5x_{i2}+5x_{i3}-15x_{i4}+5x_{i5}+\varepsilon_{i}.\] The regression model is the same as in Example 2, except that the sixth variable, which is not included in the regression model, satisfies \(x_{i6}=0.8x_{i5}+\delta_{i}\), where \(\delta_{i}\) follows i.i.d. \(N(0,0.01)\). Compared to Example 2, the data for the predictors are more multicollinear.
* Example 4 We consider the case where \(X\) follows the spike model (9), given by \[X=\sum_{r=1}^{d}\mathbf{z}_{r}\mathbf{b}_{r}^{T}+\sum_{s=1}^{m}n^{\frac{-(s+9)}{m+10}} \mathbf{z}_{d+s}\mathbf{b}_{d+s}^{T}+\tilde{X},\] where \(\mathbf{z}_{k}\in\mathbb{R}^{n}\) (\(k=1,\ldots,d+m\)) are i.i.d. vectors following \(N(\mathbf{0},I_{n})\), \(\mathbf{b}_{k}\in\mathbb{R}^{p}\) is a vector of i.i.d. \(N(0,1)\) elements, and \(\tilde{X}=(\tilde{\mathbf{x}}_{1},\ldots,\tilde{\mathbf{x}}_{n})^{T}\in\mathbb{R}^{n \times p}\) with \(\tilde{\mathbf{x}}_{i}=(\tilde{x}_{i1},\ldots,\tilde{x}_{ip})^{T}\in\mathbb{R}^{p}\), \(E(\tilde{\mathbf{x}}_{i})=\mathbf{0}\), and \(\text{cov}(\tilde{\mathbf{x}}_{i})=I_{p}\). This case corresponds to equation (9) with \(\sqrt{\lambda_{r}}=1\) (\(1\leq r\leq d\)), \(\sqrt{\omega_{s}}=n^{\frac{-(s+9)}{m+10}}\) (\(1\leq s\leq m\)), and \(\sigma_{0}^{2}=1\). This model is the same as that used in the simulation by [15]; a sketch of this data-generating process is given after this list. In this example, \(d\) is set to \(3\) and \(m\) is set according to \(4\) patterns: \(0.2n\), \(0.4n\), \(0.6n\), and \(0.8n\). The regression model is given by \[y_{i}=5x_{i1}+4x_{i2}+3x_{i3}+2x_{i4}+\varepsilon_{i},\] where \(\varepsilon_{i}\) are i.i.d. errors following \(N(0,\sigma^{2})\) with \(\sigma^{2}=\text{var}(X\mathbf{\beta})/5\) and \(\mathbf{\beta}=(5,4,3,2,0,\ldots,0)^{T}\in\mathbb{R}^{p}\).
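As a concrete illustration, the following sketch generates data from the Example 4 spike model; the random seed and helper name are arbitrary choices of ours.

```python
import numpy as np

def spike_model_data(n, p, d=3, m=20, rng=None):
    """Generate (X, y) following the Example 4 spike model."""
    rng = np.random.default_rng(0) if rng is None else rng
    Z = rng.standard_normal((n, d + m))                      # common factors z_k
    B = rng.standard_normal((d + m, p))                      # loadings b_k
    scales = np.concatenate([np.ones(d),
                             n ** (-(np.arange(1, m + 1) + 9) / (m + 10))])
    X = (Z * scales) @ B + rng.standard_normal((n, p))       # plus unique factors
    beta = np.zeros(p)
    beta[:4] = [5, 4, 3, 2]
    signal = X @ beta
    sigma2 = signal.var() / 5                                # sigma^2 = var(X beta)/5
    y = signal + rng.standard_normal(n) * np.sqrt(sigma2)
    return X, y
```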
In each example, we generate datasets \(100\) times for each combination of parameters. For each dataset, the number of selected predictors and the least squares estimator (8) are calculated. The number of variables is determined using the BIC in (10).
### Comparison methods
The proposed TPPIS method is compared with the existing SIS, FPSIS, and PPIS methods. In addition to the original FPSIS which selects the value of \(d\) using the ratio of eigenvalues (3), we also compared a modified FPSIS where \(d\) is selected by the BIC in (10) rather than (3). We denote this method as FPSIS\({}_{BIC}\). We test the values of \(d\) in FPSIS\({}_{BIC}\) with six patterns, as in the case of TPPIS.
### Score metric for screening
We evaluate the variable selection performance of the screening methods using the score based on the number of correctly and incorrectly selected variables. We refer
to necessary predictors as Positive (P) and unnecessary variables as Negative (N) in the regression model. Since the true regression coefficients of the simulated data are known, we can calculate True Positive (TP), False Positive (FP), True Negative (TN), False Negative (FN), Recall (TP/(TP+FN)), and Precision (TP/(TP+FP)).
The weighted F-score places a weight \(\theta\) on the Recall side as follows:
\[F\theta\text{-score} =\frac{1+\theta^{2}}{\frac{1}{\text{Precision}}+\frac{\theta^{2} }{\text{Recall}}}\] \[=\frac{(1+\theta^{2})(\text{Precision}\times\text{Recall})}{ \text{Recall}+\theta^{2}\times\text{Precision}},\]
where Precision = TP/(TP+FP) and Recall = TP/(TP+FN). Since the screening needs to select as many important variables with non-zero regression coefficients as possible, we use the F2-score, which treats Recall as important.
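For reference, a one-line implementation of this score (with \(\theta=2\) for the F2-score) is sketched below; it assumes at least one variable is selected so that Precision is defined.

```python
def f_theta_score(tp, fp, fn, theta=2.0):
    """Weighted F-score; theta = 2 gives the F2-score used in the experiments."""
    precision, recall = tp / (tp + fp), tp / (tp + fn)
    return (1 + theta**2) * precision * recall / (recall + theta**2 * precision)
```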
### Simulation results
The results of the variable selection for Example 1 are shown in Table 1. The numbers in the \(x_{(j)}\) column represent the total number of times that the \(j\)-th predictor variable is selected. For all settings, SIS never selected \(x_{(4)}\). This is because \(\mathbf{y}=(y_{1},\ldots,y_{n})^{T}\) and \((x_{14},\ldots,x_{n4})^{T}\) are uncorrelated due to the generation mechanism of the data, which gives a smaller \(|\omega_{4}|\). For the methods other than SIS, the value of \(|\omega_{4}|\) is larger than that for SIS due to the transformation process by factor analysis. In particular, the proposed TPPIS obtains the largest count for \(x_{(4)}\). F2-scores for TPPIS are the highest under all settings. Although the best \(\alpha\) of TPPIS is 1 for the case \(\varphi=0.5\), the F2-scores for TPPIS are better than those for PPIS because TPPIS selects \(d\) by BIC. We confirmed that the performance of TPPIS in variable selection is improved compared to the existing methods. Figure 1 shows the values of BIC and the F2-scores for fixed \(d\) and different \(\alpha\) in TPPIS. This figure demonstrates that \(\alpha\) is selected appropriately by BIC.
The results for Example 2 are shown in Table 2. The table shows that in many cases the numbers in \(x_{(5)}\) are close to 100 because the fifth variable is uncorrelated with the other predictors. F2-scores for TPPIS are the highest in all cases.
Table 3 summarizes the result for Example 3. This shows that the numbers in \(x_{(5)}\) and the value of F2-score are smaller than those of Example 2 due to the addition of the sixth variable, which is highly correlated with the fifth variable. For the cases with \(\varphi=0.9\), FPSIS\({}_{BIC}\) and TPPIS, which determine \(d\) by BIC, give lower \(x_{(6)}\) values. It seems to be useful to use BIC to select \(d\) for data with multicollinearity. TPPIS gives the highest F2-score among all methods.
Table 4 shows the results for Example 4. In this example, the variables with large regression coefficients tend to be more important, resulting in \(x_{(1)}\geq x_{(2)}\geq x_{(3)}\geq x_{(4)}\) under many settings. F2-scores for PPIS and TPPIS are high because these methods are effective for the spike model. In particular, TPPIS gives the highest F2-scores for all settings.
## 5 Real data analysis
We apply the proposed screening methods to the analysis of two real data sets. For both datasets, we investigated TPPIS parameters \(d\) and \(\alpha\), as in Section 4.1, and then the \(d\) and \(\alpha\) values giving the lowest BIC are selected as the optimal parameters.
### Condition monitoring of hydraulic systems
We applied the screening methods to data on condition monitoring of a hydraulic system [17]. This dataset was obtained experimentally using a hydraulic test rig to measure values such as pressure, volumetric flow, and temperature while varying the settings of four different hydraulic components (coolers, valves, pumps, and accumulators). We use the data with sample size 1449, taken under stable system settings. The response is a value that expresses the degree of accumulator failure as a continuous value: a higher value is closer to the normal condition, with 130 being the optimal pressure, 115 a slightly reduced pressure, 100 a severely reduced pressure, and 90 close to total failure. The predictors are the values measured by 17 sensors, 43,680 in total. We apply the five screening methods to this dataset as in the simulation examples. The number of variables is determined using BIC.
Table 5 shows the results of the analysis of this dataset. From this result we find that TPPIS selects variables from the largest number of sensors. TPPIS selects variables 'volume flow sensors (FS)' and 'efficiency factor (SE)', which are not selected by the other methods. In addition, TPPIS gives the best BIC score among all methods. These results indicate that these sensors may relate to the condition of accumulators.
### S&p500
The S&P 500, one of the U.S. stock market indices, is obtained by weighting the market capitalization of 500 companies selected as representative of publicly traded companies. This analysis uses the data for the year 2020. The sample size is 253, which is the number of trading days. The response is the value of the S&P500, and the predictors are the stock price of each of the 500 companies that make up the S&P500. Note that the number of columns of predictors may be greater than 500 because some companies have multiple stocks, differentiated based on whether they include voting rights. Since the S&P500 is weighted by market capitalization, it is assumed that the stock price of the company with the highest market capitalization is selected as an important variable. The values of the S&P500 are taken from FRED [18], and the stock prices of the 500 companies that make up the S&P500 are taken from [19].
We applied five screening methods to this dataset and compared BIC and selected variables. The results for the S&P500 are shown in Table 6. TPPIS gives the best BIC score among all the methods. The 12 variables selected by TPPIS include companies with particularly large market capitalizations such as 'AAPL' (Apple), 'MSFT' (Microsoft) and 'AMZN' (Amazon).
## 6 Conclusion
We have proposed TPPIS, a variable screening method for high-dimensional data with strong multicollinearity. TPPIS improves the variable selection performance by using a BIC-type criterion to determine the number of common factors that have a role in removing multicollinearity. In the analysis of simulated data, TPPIS outperformed existing methods using factor analysis for variable selection. This suggests that TPPIS may be able to correctly select variables that are not considered important by existing methods.
The transformation process of TPPIS to remove multicollinearity from the data uses only information from the data corresponding to the predictors and we do not consider the relation to the response. Developing a transformation processing method that incorporates information from both types of data could further improve the variable selection performance. Although numerical examples confirmed that the performance of TPPIS is better than that of existing methods, no mathematical proof is provided. In the process of devising a proof, we may be able to identify the characteristics of the data for which TPPIS is most effective.
## Acknowledgment
This work was supported by JSPS KAKENHI Grant Numbers 19K11858 and 23K11005.
|
2307.15638
|
TriadNet: Sampling-free predictive intervals for lesional volume in 3D
brain MR images
|
The volume of a brain lesion (e.g. infarct or tumor) is a powerful indicator
of patient prognosis and can be used to guide the therapeutic strategy.
Lesional volume estimation is usually performed by segmentation with deep
convolutional neural networks (CNN), currently the state-of-the-art approach.
However, to date, little work has been done to equip volume segmentation tools
with adequate quantitative predictive intervals, which can hinder their
usefulness and acceptance in clinical practice. In this work, we propose
TriadNet, a segmentation approach relying on a multi-head CNN architecture,
which provides both the lesion volumes and the associated predictive intervals
simultaneously, in less than a second. We demonstrate its superiority over
other solutions on BraTS 2021, a large-scale MRI glioblastoma image database.
|
Benjamin Lambert, Florence Forbes, Senan Doyle, Michel Dojat
|
2023-07-28T15:56:04Z
|
http://arxiv.org/abs/2307.15638v1
|
# TriadNet: Sampling-free predictive intervals for
###### Abstract
The volume of a brain lesion (e.g. infarct or tumor) is a powerful indicator of patient prognosis and can be used to guide the therapeutic strategy. Lesional volume estimation is usually performed by segmentation with deep convolutional neural networks (CNN), currently the state-of-the-art approach. However, to date, little work has been done to equip volume segmentation tools with adequate quantitative predictive intervals, which can hinder their usefulness and acceptance in clinical practice. In this work, we propose **TriadNet**, a segmentation approach relying on a multi-head CNN architecture, which provides both the lesion volumes and the associated predictive intervals simultaneously, in less than a second. We demonstrate its superiority over other solutions on BraTS 2021, a large-scale MRI glioblastoma image database.
Keywords:Brain MRI Prediction Intervals Uncertainty Segmentation Deep Learning
## 1 Introduction
The lesional volume is a powerful and commonly used biomarker in brain MRI analysis and interpretation. Such an imaging biomarker guides the prediction of the patient's neurological outcome in stroke [8] or the assessment of the grade of a glioblastoma [3]. For Multiple Sclerosis (MS), the evolution of the lesional load between two of a patient's visits helps to assess the progression of the disease, to personalize the treatment [14], and even to predict disability [19]. For neurodegenerative diseases such as Alzheimer's disease, brain atrophy is quantified by estimating the volume of different anatomical regions (e.g. hippocampus or amygdala) compared to normative values [5].
Volume estimation is usually carried out through image segmentation, relying on Deep Convolutional Neural Networks (CNNs) trained on an annotated database comprising both images and their corresponding manual delineations [10]. CNNs provide a mask, which is generally correct for easily detectable regions or lesions, but whose accuracy may be more uncertain when the zone to segment is disputable even for an expert. To help clinicians focus on the more subtle regions, we propose to associate quantitative Predictive Intervals (PIs) with volume estimation. Such PIs can straightforwardly be interpreted as uncertainty markers and facilitate the acceptance of advanced computerized tools by practitioners.
PI construction has been mainly studied in the context of 1D regression tasks [12, 17, 22] and applications in the context of medical image processing are very scarce. To compute PIs for lesion counting in 2D medical images, reference work proposes either a sampling approach or a regression model [6]. In the former, several plausible and diverse segmentation masks are generated for the same input image, forming a distribution over the quantity of interest (e.g. lesion volume or number), from which the mean and the standard deviation can be extracted to define a PI. This Uncertainty Quantification (UQ) methodology offers several variants to generate the diverse set of predictions. Popular UQ methods include Monte Carlo Dropout (MC) [7], Deep Ensembles [13], and Test Time Augmentation (TTA) [23]. Being based on sampling, these UQ methods incur a substantial computational burden to obtain the predictions. With the regression approach, a network is trained to directly predict the PI's components: the mean value as well as the lower and upper bounds from the data themselves. As no assumptions are made regarding the distribution of the regressed variable, this approach is referred to as Distribution-Free Uncertainty Quantification (DFUQ) [17]. In this direction, we introduce a sampling-free approach based on an original CNN architecture called TriadNet which exhibits the following assets:
* It enhances the 3D volume estimation with associated reliable PIs.
* It allows a fast and distribution-free estimation of PIs.
* The methodology is simple to implement and can be applied to any encoder-decoder segmentation architecture.
## 2 Problem Definition
We consider a 3D segmentation problem with \(N\) classes. Excluding the background class, we aim at estimating the true unknown volumes \(Y\in\mathbb{R}^{N-1}\) of each foreground class based on the predicted segmentation. In this context, for an estimation \(X\) of the volume, seen as a random variable, we define a predictive interval \(\Gamma_{\alpha}(X)\) as a range of values constructed to contain \(Y\), the actual volume, with a certain degree of confidence \(1-\alpha\) (e.g. 90% or 95%). That is, given a series of estimated volumes \(X_{1}\ldots X_{n}\) and their associated ground truth volumes \(Y_{1}\ldots Y_{n}\), \(\Gamma_{\alpha}(\,\cdot\,)\) should be learned so as to satisfy:
\[P(Y_{\text{test}}\in\Gamma_{\alpha}(X_{\text{test}}))\geq 1-\alpha \tag{1}\]
for any \((Y_{\text{test}},X_{\text{test}})\) following the same distribution as the \((Y_{i},X_{i})\)'s. This property is called the _marginal coverage_, as the probability is marginal over the entire test dataset [1].
Sampling-based PI estimation methods rely on the hypothesis that \(X\) follows a normal distribution for each predicted class. Under this assumption, the mean value \(\mu_{X}\) and standard deviation \(\sigma_{X}\) of the distribution are estimated by
sampling several distinct predictions for the same input, and the PI is constructed as \(\Gamma_{\alpha}(X)=[\mu_{X}-z\sigma_{X},\mu_{X}+z\sigma_{X}]\), where \(z\) is the number of standard deviations, stipulating the degree of confidence of the interval. For instance, for a 90% confidence interval, \(z\) corresponds to 1.65. In contrast, direct PI estimation techniques (including regression approaches and our proposed TriadNet) directly output the mean value \(\mu\), the lower bound \(l_{b}\), and the upper bound \(u_{b}\) (\(l_{b}\leq\mu\leq u_{b}\)), without sampling.
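The two routes can be summarized by the short sketch below; the helper names are ours, and the voxel-to-millilitre conversion factor is an assumed input.

```python
import numpy as np

def sampling_pi(volume_samples, z=1.65):
    """Sampling-based PI (e.g. MC dropout or TTA): mu +/- z * sigma over T samples."""
    mu, sigma = np.mean(volume_samples), np.std(volume_samples)
    return mu - z * sigma, mu, mu + z * sigma

def direct_pi(lower_mask, mean_mask, upper_mask, voxel_volume_ml):
    """Direct PI (as in TriadNet): volumes obtained by summing binary class masks."""
    to_ml = lambda mask: mask.sum() * voxel_volume_ml
    return to_ml(lower_mask), to_ml(mean_mask), to_ml(upper_mask)
```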
## 3 Our solution: TriadNet
_Overview:_ TriadNet corresponds to a CNN model modified in order to produce three outputs for each segmented class: the mean, lower bound and upper bound masks (see Figure 1). To obtain these distinct masks, we propose a multi-head architecture as well as a novel learning objective, the TriadLoss. The masks are then used to directly estimate the class-wise mean volume as well as the lower and upper bounds, by summing the segmented voxels.
Figure 1: The TriadNet architecture. Each head yields a distinct mask for each class: lower bound, mean, and upper bound masks. For ease of visualization, we represent, for a glioblastoma application, only the masks for the _edematous_ class.
TriadNet: the architecture relies on the Attention Unet 3d (AttUNet3d) [16] as backbone. We modified it by duplicating the output convolutional block in order to obtain a total of 3 separate and identical heads. Each head generates a specific mask: respectively, one corresponding to the lower bound, one to the upper bound, and one to the mean value, by predicting a probabilistic distribution \(p_{n,i}\) over the \(N\) classes for each voxel \(i\). This modification only slightly increases the complexity of the segmentation model, raising the number of parameters from 5 million to 5.3 million.
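A minimal PyTorch sketch of the duplicated output stage is given below; the backbone feature extractor is omitted, and the use of 1×1×1 convolutions for the three heads is our assumption about the final block.

```python
import torch
import torch.nn as nn

class TriadHeads(nn.Module):
    """Three identical output heads (lower / mean / upper) on top of shared decoder features."""
    def __init__(self, in_channels, n_classes):
        super().__init__()
        self.lower = nn.Conv3d(in_channels, n_classes, kernel_size=1)
        self.mean = nn.Conv3d(in_channels, n_classes, kernel_size=1)
        self.upper = nn.Conv3d(in_channels, n_classes, kernel_size=1)

    def forward(self, features):
        # each head predicts a per-voxel distribution p_{n,i} over the N classes
        return (torch.softmax(self.lower(features), dim=1),
                torch.softmax(self.mean(features), dim=1),
                torch.softmax(self.upper(features), dim=1))
```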
TriadLoss: the objective function is built on the observation that the lower bound mask should be more restrictive (_i.e._ higher precision and lower recall) than the mean mask. Similarly, the upper bound mask should be more permissive (_i.e._ higher recall and lower precision). To achieve this, we propose to rely on the Tversky loss [20], which provides a direct control on the trade-off between recall and precision. The Tversky loss \(T_{\alpha,\beta}\) is an extension of the popular Dice loss [15], with 2 extra hyperparameters \(\alpha\) and \(\beta\) which respectively control the weighting of False Positives (FP) and False Negatives (FN). With \(\alpha=\beta=0.5\), the Tversky loss is strictly equivalent to the standard Dice loss.
Writing \(p_{lower}\), \(p_{mean}\) and \(p_{upper}\) the outputs of each head and \(y\) the ground-truth segmentation, we defined the Triad loss as:
\[\text{TriadLoss}=T_{1-\gamma,\gamma}(p_{lower,y})+T_{0.5,0.5}(p_{mean,y})+T _{\gamma,1-\gamma}(p_{upper,y}) \tag{2}\]
with \(\gamma\) a hyperparameter in the range \(]0,0.5[\) controlling the penalties applied to FP and FN during the training of the lower and upper bound heads. In other words, the mean decoder was trained with a standard Dice loss. To obtain more restrictive masks (and lower volumes), the lower bound decoder was trained to minimize FP at the expense of a higher FN rate. Similarly, to obtain more permissive masks (and larger volumes), the upper bound decoder was trained to minimize FN at the expense of a higher number of FP.
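A sketch of the TriadLoss for a single (binary) class is given below, assuming soft probability maps; the multi-class case would sum this quantity over the foreground classes.

```python
import torch

def tversky_loss(p, y, alpha, beta, eps=1e-6):
    """Tversky loss T_{alpha,beta}: alpha weights false positives, beta false negatives."""
    tp = (p * y).sum()
    fp = (p * (1 - y)).sum()
    fn = ((1 - p) * y).sum()
    return 1.0 - (tp + eps) / (tp + alpha * fp + beta * fn + eps)

def triad_loss(p_lower, p_mean, p_upper, y, gamma=0.2):
    """TriadLoss of Eq. (2): restrictive lower head, Dice-like mean head, permissive upper head."""
    return (tversky_loss(p_lower, y, 1 - gamma, gamma)
            + tversky_loss(p_mean, y, 0.5, 0.5)
            + tversky_loss(p_upper, y, gamma, 1 - gamma))
```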
## 4 Material and Methods
### Datasets
We illustrate our framework on a brain tumor segmentation task, using the open-source part of the BraTS 2021 dataset [2] containing 1251 patients. Four MRI sequences are available for each patient: FLAIR, T1, T2, and T1ce (T1w with contrast agent). The ground truth segmentation masks contain 4 classes: the background, the necrotic tumor core, the edematous, and the GD-enhancing (GDE) tumor. We randomly split the data into a training fold (651), a calibration fold (200), and a testing fold (400).
### Comparison with known approaches
We compared TriadNet with 3 sampling-based approaches: Confidence Thresholding (CT), Monte Carlo dropout (MC), and Test Time Augmentation (TTA), as well as a sampling-free PI estimation framework based on the training of a regression CNN (RegCNN).
**Confidence Thresholding (CT)** is a simple approach to obtain PIs from the output probability estimates produced by a trained segmentation model. For each class, the probability map is binarized with progressively increasing thresholds. As the threshold increases, fewer voxels are segmented, and thus the volume decreases. As this method relies on the calibration of the output probabilities, we perform Temperature Scaling [9] on the trained segmentation model before performing CT.
**Monte Carlo Dropout (MC)** is based on the Dropout technique [21] which consists of turning a subset of the model parameters off, to prevent overfitting. The MC dropout technique proposes to keep dropout activated during inference, meaning that \(T\) forward steps of the same image through the MC dropout model will lead to \(T\) different segmentations (and thus volume estimates), as the dropout mask is randomly sampled at each step.
**Test Time Augmentation (TTA)** consists in using data augmentation to generate alternative versions of the input images. Each augmented image is processed by the segmentation model, yielding a distinct estimate of the volumes. By repeating this process, a distribution over volumes can be obtained, from which the PI is derived.
**Regression CNN** (RegCNN) proposes to train a regression neural network to directly predict the lower, mean and upper bounds of the target quantity from the data itself [1, 6]. To achieve this, the Pinball loss \(P_{t}\) can be used to train the model to predict a desired quantile \(t\). In our study, the regressor took as input the MRI sequences and automated segmentation produced by a segmentation model and was trained to predict three scores for each segmentation class, namely the \(q_{\alpha/2}\), \(q_{0.5}\) and \(q_{1-\alpha/2}\) quantiles, allowing the construction of \((1-\alpha)\%\) confidence intervals. To do so, the regressor was trained with a compound loss \(L=P_{\alpha/2}+P_{0.5}+P_{1-\alpha/2}\) to learn each quantile.
### Post-hoc PI calibration
In practice, the predicted PIs may be inaccurate and not respect the desired _marginal coverage_ property. To alleviate this, PI post-hoc calibration is usually performed using a set-aside calibration dataset [1]. This calibration step aims at finding the optimal corrective value \(q\) such that the calibrated PIs achieve the desired \((1-\alpha)\%\) coverage on the calibration dataset.
In the case of sampling-based PI, the corrective value takes the form of a multiplicative factor applied to the standard deviation (Equation 3). Alternatively, if the PI estimation is direct, \(q\) corresponds to an additive factor applied
to the lower and upper bounds (Equation 4) :
\[\Gamma_{\alpha,\text{cal}}(X) =[\mu_{X}-q\sigma_{X},\mu_{X}+q\sigma_{X}] \tag{3}\] \[\Gamma_{\alpha,\text{cal}}(X) =[l_{b}-q,u_{b}+q] \tag{4}\]
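The paper does not spell out how \(q\) is searched; one standard (conformal-style) choice, sketched below under that assumption, takes \(q\) as an empirical quantile of calibration residuals so that the corrected intervals reach the target coverage.

```python
import numpy as np

def calibrate_additive(lower, upper, y_true, alpha=0.1):
    """Additive correction q of Eq. (4): widen [l_b, u_b] until (1 - alpha) coverage."""
    scores = np.maximum(lower - y_true, y_true - upper)     # negative if already covered
    k = int(np.ceil((1 - alpha) * (len(scores) + 1)))
    return np.sort(scores)[min(k, len(scores)) - 1]

def calibrate_multiplicative(mu, sigma, y_true, alpha=0.1):
    """Multiplicative correction q of Eq. (3) for sampling-based PIs."""
    scores = np.abs(y_true - mu) / sigma
    k = int(np.ceil((1 - alpha) * (len(scores) + 1)))
    return np.sort(scores)[min(k, len(scores)) - 1]
```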
### Evaluation
We performed all our experiments with \(\alpha=0.1\), meaning that we focussed on \(90\%\) PIs. Segmentation performance was assessed using the Dice score (DSC) between the predicted segmentation and the ground truth delineations (for TriadNet, the Dice was computed using the _mean_ predicted mask). We also used the Mean Average Error (MAE) between the estimated mean volumes and the true volumes to assess the reliability of the volume prediction.
Useful PIs should have two properties. They should i) achieve the desired _marginal coverage_ and ii) be as narrow as possible in order to be informative. To verify this, we computed two scores for PIs: the coverage error (\(\Delta f\)) and the interval width (\(W\)). \(\Delta f\) is defined as the distance between the empirical coverage and the target coverage (\(90\%\)). \(W\) is the average distance between the lower and upper bounds. Note that a successful PI calibration should ensure \(\Delta f\geq 0\). However, as the width of the intervals tends to increase with \(\Delta f\), a value close to \(0\) is preferred. To estimate computational efficiency, we also reported the average time to produce a segmentation and PI for one input MRI volume.
To assess the impact of the choice of the \(\gamma\) hyper-parameter in the TriadLoss on PI quality, we trained TriadNet models with varying \(\gamma\) values, ranging from \(0.1\) to \(0.4\). To obtain robust statistics, each model was trained \(5\) times, and we report the average and standard deviation for each metric.
### Implementation Details
Three types of segmentation models are used in this study. First, _Baseline_ AttUnet3d was trained to serve as a common basis for the implementation of CT, TTA and RegCNN approaches. For MC, we trained a dedicated _Dropout_ AttUnet3ds by adding a dropout rate of \(20\%\) in each layer of the encoder and decoder. The last type of segmentation model was our proposed TriadNet. All models were trained with the ADAM optimizer [11], with a learning rate of \(2e-4\), using the Dice loss for _Baseline_ and _Dropout_ models and the TriadLoss for TriadNet. For CT-based PIs, we used \(20\) different thresholds uniformly distributed in the range \([0.01,0.99]\) to binarize the probability maps. For MC dropout, we performed \(T=20\) forward passes of the same input image with dropout activated to obtain the PIs. To implement the TTA baseline, we generated \(20\) random augmentations for each input MRI using flipping, rotation, translation and contrast augmentation with randomized parameters, implemented using the TorchIO Data Augmentation library [18]. Finally for RegCNN, we used an open-source regressor CNN implementation 4[4].
Footnote 4: [https://docs.monai.io/en/stable/_modules/monai/networks/nets/regressor.html](https://docs.monai.io/en/stable/_modules/monai/networks/nets/regressor.html)
## 5 Results and Discussion
Table 1 presents the performance of segmentation (DSC) and PIs for each approach and for all 3 segmented tumor tissues; and Table 2, the average computation time for each method. Finally, Figure 2 provides an illustration of PI computed by our proposed TriadNet on the test dataset.
Most methods provide PIs that, after calibration, achieve the target _marginal coverage_ property (\(\Delta f\geq 0\)). In terms of interval width (W), the narrowest intervals are provided by our proposed TriadNet parameterized by \(\gamma=0.2\), while MC dropout ranks as the second best approach. To estimate the significance of this result, a two-sided paired t-test between the two methods was performed, showing that TriadNet's PIs are significantly narrower than MC dropout's (\(p<0.05\) for each tumor class). The best volume estimation, computed using the MAE, is also obtained by TriadNet, while RegCNN estimation is systematically the worst. In terms of segmentation quality (DSC scores), all models achieve very similar performances. Finally, regarding computational efficiency
\begin{table}
\begin{tabular}{l l c c c c} \hline \hline & Method & \(\Delta f\) & W \(\downarrow\) & MAE \(\downarrow\) & DSC \(\uparrow\) \\ & & (\%\(\pm\)SD) & (\(mL\pm\)SD) & (\(mL\pm\)SD) & (\(\pm\)SD) \\ \hline \multirow{6}{*}{\begin{tabular}{l} \end{tabular} } & CT & 5.6 \(\pm\) 1.5 & 32.3 \(\pm\) 5.3 & 3.4 \(\pm\) 0.1 & **0.76 \(\pm\) 0.00** \\ & TTA & 6.3 \(\pm\) 1.1 & 25.0 \(\pm\) 3.7 & 3.5 \(\pm\) 0.1 & **0.76 \(\pm\) 0.00** \\ & RegCNN & 6.1 \(\pm\) 1.7 & 25.8 \(\pm\) 5.7 & 6.3 \(\pm\) 3.6 & **0.76 \(\pm\) 0.00** \\ & MC dropout & 5.6 \(\pm\) 0.6 & 20.6 \(\pm\) 2.1 & 3.3 \(\pm\) 0.1 & **0.76 \(\pm\) 0.00** \\ & TriadNet (\(\gamma=0.1\)) & 4.5 \(\pm\) 0.8 & 14.5 \(\pm\) 1.4 & 3.4 \(\pm\) 0.1 & 0.75 \(\pm\) 0.00 \\ & TriadNet (\(\gamma=0.2\)) & **3.4 \(\pm\) 0.6** & **13.7 \(\pm\) 1.1** & **3.2 \(\pm\) 0.1** & **0.76 \(\pm\) 0.00** \\ & TriadNet (\(\gamma=0.3\)) & 4.1 \(\pm\) 0.9 & 15.0 \(\pm\) 0.4 & 3.3 \(\pm\) 0.1 & **0.76 \(\pm\) 0.01** \\ & TriadNet (\(\gamma=0.4\)) & 4.4 \(\pm\) 0.4 & 16.6 \(\pm\) 0.8 & 3.4 \(\pm\) 0.1 & 0.75 \(\pm\) 0.00 \\ \hline \multirow{6}{*}{\begin{tabular}{l} \end{tabular} } & CT & 1.4 \(\pm\) 1.2 & 54.4 \(\pm\) 12.3 & 8.2 \(\pm\) 0.9 & **0.85 \(\pm\) 0.01** \\ & TTA & -1.3 \(\pm\) 2.3 & 34.9 \(\pm\) 2.5 & 7.7 \(\pm\) 0.2 & **0.85 \(\pm\) 0.01** \\ & RegCNN & 1.7 \(\pm\) 0.9 & 41.9 \(\pm\) 1.7 & 9.2 \(\pm\) 0.4 & **0.85 \(\pm\) 0.01** \\ & MC dropout & **-0.01 \(\pm\) 1.4** & 32.0 \(\pm\) 2.0 & 7.5 \(\pm\) 0.2 & 0.84 \(\pm\) 0.01 \\ & TriadNet (\(\gamma=0.1\)) & 0.9 \(\pm\) 0.7 & 31.8 \(\pm\) 0.8 & 7.4 \(\pm\) 0.5 & **0.85 \(\pm\) 0.00** \\ & TriadNet (\(\gamma=0.2\)) & 1.6 \(\pm\) 1.2 & **30.2 \(\pm\) 1.7** & 7.2 \(\pm\) 0.2 & **0.85 \(\pm\) 0.00** \\ & TriadNet (\(\gamma=0.3\)) & 3.2 \(\pm\) 1.6 & 35.7 \(\pm\) 4.3 & **7.1 \(\pm\) 0.2** & **0.85 \(\pm\) 0.00** \\ & TriadNet (\(\gamma=0.4\)) & 1.4 \(\pm\) 0.7 & 31.1 \(\pm\) 2.2 & 7.5 \(\pm\) 0.3 & 0.84 \(\pm\) 0.01 \\ \hline \multirow{6}{*}{
\begin{tabular}{l} \end{tabular} } & CT & 3.6 \(\pm\) 1.4 & 17.8 \(\pm\) 3.5 & 2.0 \(\pm\) 0.0 & **0.85 \(\pm\) 0.01** \\ & TTA & 3.0 \(\pm\) 1.8 & 10.6 \(\pm\) 1.0 & 2.0 \(\pm\) 0.1 & **0.85 \(\pm\) 0.01** \\ & RegCNN & **0.7 \(\pm\) 0.5** & 22.2 \(\pm\) 5.9 & 3.2 \(\pm\) 0.3 & **0.85 \(\pm\) 0.01** \\ & MC dropout & 3.5 \(\pm\) 1.3 & 10.0 \(\pm\) 0.5 & 1.9 \(\pm\) 0.1 & **0.85 \(\pm\) 0.01** \\ & TriadNet (\(\gamma=0.1\)) & 3.7 \(\pm\) 1.1 & 11.0 \(\pm\) 0.6 & **1.8 \(\pm\) 0.1** & **0.85 \(\pm\) 0.00** \\ & TriadNet (\(\gamma=0.2\)) & 4.2 \(\pm\) 1.4 & **8.9 \(\pm\) 0.5** & **1.8 \(\pm\) 0.1** & **0.85 \(\pm\) 0.00** \\ & TriadNet (\(\gamma=0.3\)) & 4.1 \(\pm\) 0.4 & 9.3 \(\pm\) 0.6 & **1.8 \(\pm\) 0.1** & **0.85 \(\pm\) 0.00** \\ & TriadNet (\(\gamma=0.4\)) & 4.0 \(\pm\) 0.5 & 11.0 \(\pm\) 0.4 & 1.9 \(\pm\) 0.1 & **0.85 \(\pm\) 0.00** \\ \hline \end{tabular}
\end{table}
Table 1: Performances for each tumor tissue for each method. \(\Delta f\): coverage error, \(W\): average interval width. Mean scores obtained over 5 runs. SD: standard deviation.
(Table 2), RegCNN appears as the fastest approach, followed by TriadNet, both producing segmentation and associated PIs in less than one second for an input MRI volume. As expected, sampling approaches are much more time-consuming, with MC and TTA being respectively 10 and 24 times slower than our proposed TriadNet.
The choice of the \(\gamma\) parameter in the TriadLoss has proved to be important, with an optimal PI quality reached for \(\gamma=0.2\), equivalent to a weighting of 0.8 for FP and 0.2 for FN in the lower bound head; and 0.2 for FP and 0.8 for FN in the upper bound head. This setting allows the different masks (lower, mean and upper) to be different enough to allow a reliable PI estimation, which is not the case with higher \(\gamma\) values (\(\gamma=0.3\) and \(\gamma=0.4\)). However, when \(\gamma\) is lower (\(\gamma=0.1\)), the penalty on FP and FN is too small, which yields a larger number of erroneous predictions, lowering PI quality.
## 6 Conclusion
In this work, we addressed the problem of constructing PIs associated to 3D brain MR segmented volumes. Our proposed TriadNet provides narrower and thus more informative intervals in practice compared to competing methods, while preserving the desired _marginal coverage_ property. Interestingly, it is also 10 times faster than the second best baseline, MC dropout, making it suitable for clinical routine. Finally, it only requires a minor modification of the segmentation architecture, which has no negative impact on segmentation quality. Future work
\begin{table}
\begin{tabular}{l c c c c c} & CT & TTA & RegCNN & MC & TriadNet \\ \hline Time (s\(\pm\)SD) \(\downarrow\) & 1.1 \(\pm\) 0.1 & 14.3 \(\pm\) 1.2 & **0.3 \(\pm\) 0.1** & 6.4 \(\pm\) 0.1 & 0.6 \(\pm\) 0.1 \\ \end{tabular}
\end{table}
Table 2: Average prediction time to obtain a segmentation of a 3D MRI volume associated to predictive intervals on the volumes. SD=standard deviation
Figure 2: Predictive intervals generated by TriadNet (\(\gamma=0.2\)) on the test dataset.
will investigate the robustness of TriadNet's predictive intervals in the presence of domain shift, and evaluate how our approach behaves with respect to the size of the target region, ranging from very small targets (e.g. the hippocampus region or MS lesions) to very large ones (e.g. the overall grey matter volume).
|
2305.03623
|
Optimizing Hyperparameters with Conformal Quantile Regression
|
Many state-of-the-art hyperparameter optimization (HPO) algorithms rely on
model-based optimizers that learn surrogate models of the target function to
guide the search. Gaussian processes are the de facto surrogate model due to
their ability to capture uncertainty but they make strong assumptions about the
observation noise, which might not be warranted in practice. In this work, we
propose to leverage conformalized quantile regression which makes minimal
assumptions about the observation noise and, as a result, models the target
function in a more realistic and robust fashion which translates to quicker HPO
convergence on empirical benchmarks. To apply our method in a multi-fidelity
setting, we propose a simple, yet effective, technique that aggregates observed
results across different resource levels and outperforms conventional methods
across many empirical tasks.
|
David Salinas, Jacek Golebiowski, Aaron Klein, Matthias Seeger, Cedric Archambeau
|
2023-05-05T15:33:39Z
|
http://arxiv.org/abs/2305.03623v1
|
# Optimizing Hyperparameters with Conformal Quantile Regression
###### Abstract
Many state-of-the-art hyperparameter optimization (HPO) algorithms rely on model-based optimizers that learn surrogate models of the target function to guide the search. Gaussian processes are the de facto surrogate model due to their ability to capture uncertainty but they make strong assumptions about the observation noise, which might not be warranted in practice. In this work, we propose to leverage conformalized quantile regression which makes minimal assumptions about the observation noise and, as a result, models the target function in a more realistic and robust fashion which translates to quicker HPO convergence on empirical benchmarks. To apply our method in a multi-fidelity setting, we propose a simple, yet effective, technique that aggregates observed results across different resource levels and outperforms conventional methods across many empirical tasks.
Machine Learning, ICML
## 1 Introduction
Hyperparameters play a vital role in the machine learning (ML) workflow. They control the speed of the optimization process (e.g., learning rate), the capacity of the underlying statistical model (e.g., number of units) or the generalization quality through regularization (e.g., weight decay). While virtually all ML pipelines benefit from having their hyperparameters tuned, this can be tedious and expensive to do, and practitioners tend to leave many of them at their default values.
Hyperparameter optimization (Feurer and Hutter, 2018) is a powerful framework to tune hyperparameters automatically with a large body of work to tackle this optimization problem, spanning from simple heuristics to more complex model-based methods. Random search-based approaches (Bergstra and Bengio, 2012) define a uniform distribution over the configuration space and repeatedly sample new configurations until a total budget is exhausted. Evolutionary algorithms modify a population of hyperparameter configurations by hand-designed mutations. Finally, model-based methods, such as Bayesian optimization (Snoek et al., 2012), use the collected data points to fit a surrogate model of the target objective, which informs the sampling distribution so that more promising configurations are selected over time.
Unfortunately, solving the HPO problem can become computationally expensive, as each function evaluation incurs the cost of fully training and validating the underlying machine learning model. Multi-fidelity HPO (Karnin et al., 2013; Li et al., 2017) accelerates this process by terminating the evaluation of under-performing configurations early and only training promising configurations to the end. Early stopping allows algorithms to explore more configurations within the total search budget. Baseline multi-fidelity methods can be made more efficient by relying on model-based sampling, exploiting past observations to select more promising configurations over time.
Model-based multi-fidelity methods constitute the state-of-the-art in HPO today, but still face some limitations. Concretely, we identify two major gaps with the current approaches. First, Bayesian optimization usually assumes that the observation noise is homoskedastic and Gaussian distributed. This allows for an analytic expression of the likelihood for Gaussian processes, the most popular probabilistic model for Bayesian optimization (Snoek et al., 2012). However, most HPO problems exhibit heteroskedastic noise that can span several orders of magnitude (Salinas et al., 2020; Cowen-Rivers et al., 2020). For instance, the validation error of a model trained with SGD can behave very erratically for large learning rates, and not taking this heteroskedasticity into account can severely hinder the performance of methods with Gaussian assumptions.
Second, it is difficult to model probabilistically the target metric both across configurations and resources (e.g., number of epochs trained), while at the same time retaining the simplicity of Gaussian processes. Previous work maintains separate models of the target at different resource levels (Falkner et al., 2018; Li et al., 2022), or use a single
joint model (Klein et al., 2020; Swersky et al., 2014). The former does not take into account dependencies across resource levels, even though these clearly exist. The latter has to account for the non-stationarity of the target function, and requires strong modeling assumptions in order to remain tractable.
With this paper, we propose a conformalized quantile regression surrogate model that can be used in the presence of any noise, possibly non-Gaussian or heteroskedastic. We further propose a new way to extend any model-based single-fidelity HPO method to the multi-fidelity setting, by aggregating observed results across different resource levels. This strategy can be used with most single-fidelity methods and greatly simplifies the model-based multi-fidelity setup. We show that, in spite of its simplicity, our framework offers competitive results across most common single-fidelity methods and significant improvements over baselines when paired with conformalized quantile regression. Our main contributions are the following:
* We introduce a novel surrogate method for HPO based on conformalized quantile regression which can handle heteroskedastic and non-Gaussian distributed noise.
* We propose a simple way to extend any single-fidelity method into a multi-fidelity method, by using only the last observed datapoint for each hyperparameter configuration to train the function surrogate.
* We run empirical evaluations on a large set of benchmarks, demonstrating that quantile regression surrogates achieve a more robust performance compared to state-of-the-art methods in the single-fidelity case
* We show that our new multi-fidelity framework outperforms state-of-the-art methods across multiple single-fidelity surrogates.
The paper first reviews related work and then discusses our proposed method for single-fidelity optimization leveraging conformal quantile prediction. We then describe an extension to the multi-fidelity case, before evaluating the method on an extensive set of benchmarks.
## 2 Related work
Bayesian optimization is one of the most successful strategies for hyperparameter optimization (HPO) (Shahriari et al., 2016). Based on a probabilistic model of the objective function, it iteratively samples new candidates by optimizing an acquisition function, that balances between exploration and exploitation when searching the space. Typically, Gaussian processes are used as the probabilistic surrogate model (Snoek et al., 2012), but other methods, such as random forests (Hutter et al., 2011) or Bayesian neural networks (Springenberg et al., 2016; Snoek et al., 2015) are possible. Alternatively, instead of modeling the objective function, previous work (Bergstra et al., 2011; Tiao et al., 2021) estimate the acquisition function directly by the density ratio of well and poorly performing configurations.
Despite its sample efficiency, Bayesian optimization still requires tens to hundreds of function evaluations to converge to well-performing configurations. To accelerate the optimization process, multi-fidelity optimization exploits cheap-to-evaluate fidelities of the objective function such as training epochs (Swersky et al., 2014). Jamieson and Talwalkar (2016) proposed to use successive halving (Karnin et al., 2013) for multi-fidelity hyperparameter optimization, which trains a set of configurations for a small budget and then only lets the top half of the configurations continue for twice as many resources. Hyperband (Li et al., 2017) calls successive halving as a sub-routine with varying minimum resource levels, to avoid configurations being terminated too early. Falkner et al. (2018) combined Hyperband with Bayesian optimization to replace the inefficient random sampling of configurations by Bayesian optimization with kernel density estimators (Bergstra et al., 2011). ASHA (Li et al., 2019) proposed to extend Hyperband to the asynchronous case when using multiple workers, which led to significant improvements, and Klein et al. (2020) and Li et al. (2022) later combined this method with Gaussian process based Bayesian optimization. Instead of relying on a model-based approach, Awad et al. (2021) proposed to combine Hyperband with evolutionary algorithms.
An orthogonal line of work models the learning curves of machine learning algorithms directly; see Mohr and van Rijn (2022) for an overview. Previous work by Domhan et al. (2015) fits an ensemble of parametric basis functions to the learning curve of a neural network. This method can be plugged into any HPO approach such that evaluation of an network is stopped if it is unlikely to outperform previous configurations and the prediction of the model is returned to the optimizer. Klein et al. (2017) used a Bayesian neural networks to predict the parameters of these basis functions which is able to model the correlation across hyperparameter configurations. Wistuba and Pedapati (2020) proposed neural networks architectures that ranks learning curves across different tasks.
To avoid requiring Gaussian homoskedastic noise, several papers considered the use of quantile regression for HPO (Picheny et al., 2013; Salinas et al., 2020; Moriconi et al., 2020) but those approaches do not ensure that the provided uncertainties are well calibrated. Conformal prediction has been gaining traction recently in ML applications due to its ability to provide well calibrated uncertainty with widely applicable assumptions, in particular not requiring the presence of a Gaussian homoskedastic distribution (Shafer and Vovk, 2007). To the best of our knowledge, conformal prediction has only been considered for single-fidelity HPO by Stanton et al. (2022) and Doyle (2022). The former applies conformal correction to standard GP posteriors in order to improve model calibration on non-Gaussian noise, whereas we build our method on quantile regression which is already robust to non-Gaussian and heteroskedastic noise. The latter conducted a preliminary study showing that conformal predictors can outperform random search on four datasets. The key difference to our method is that we utilize the framework of conformal quantile prediction from Romano et al. (2019), which leverages quantile regression, allowing us to bypass the need to fit an additional model for the variance. In both cases, our work differs as we consider the asynchronous multi-fidelity case, which allows the method to perform much better in the presence of hundreds of observations.
## 3 Single-fidelity Hyperparameter Optimization
In the single-fidelity hyperparameter optimization setting, we are interested in finding the hyperparameter minimizing of a blackbox function \(f\):
\[x^{*}=\operatorname*{arg\,min}_{x\in\mathcal{X}}f(x)\]
where \(f(x)\) denotes the validation error obtained for a hyperparameter configuration \(x\in\mathcal{X}\). Hyperparameters may include the learning rate, the number of layers, and the number of hidden dimensions of a transformer or a convolutional neural network. Given that evaluating \(f\) is typically expensive and gradients are not readily available, we look for gradient-free and sample-efficient optimization methods.
Bayesian Optimization is arguably one of the most popular approaches owing to its ability to efficiently trade-off exploration and exploitation when searching the configuration space. In each iteration \(n\), a probabilistic model of the objective function \(f\) is fitted on the \(n\) previous observations \(\mathcal{D}=\{(x_{1},y_{1}),\ldots,(x_{n},y_{n})\}\); the first initial configurations are typically drawn at random. To select the next point to evaluate, an acquisition function is then optimized to select the most promising candidate based on the probabilistic surrogate, for instance by picking the configuration that maximizes the expected improvement (Jones et al., 1998).
The standard approach to model the objective function uses Gaussian processes due to their computational tractability and theoretical guarantees (Srinivas et al., 2012). However, this approach assumes that the observation noise is Gaussian and homoskedastic. We next describe an approach to lift this restriction by leveraging quantile regression with conformal prediction as a probabilistic model for \(f\).
## 4 Conformal Quantile Regression
Preliminaries.For a real-valued random variable \(Y\), we denote by \(g(y)\) its probability density function and by \(F_{Y}(y)=\int_{-\infty}^{y}g(t)dt\) its cumulative distribution function (CDF). The associated quantile function of \(Y\) is then defined as follows:
\[F_{Y}^{-1}(\alpha)=\inf_{y\in\mathbb{R}}\{F_{Y}(y)\geq\alpha\}.\]
The quantile function allows to easily obtain confidence intervals. One can also easily sample from the distribution by first sampling a random quantile uniformly \(\alpha\sim\mathcal{U}([0,1])\) and then computing \(y=F_{Y}^{-1}(\alpha)\) which provides one sample \(y\) from the distribution \(Y\).
Quantile regression.Given data drawn from a joint distribution \((x,y)\sim F_{(X,Y)}\), quantile regression aims to estimate a given quantile \(\alpha\) of the conditional distribution of \(Y\) given \(X=x\), e.g. to learn the function
\[q_{\alpha}(x)=F_{Y|X=x}^{-1}(\alpha)\]
which predicts the quantile \(\alpha\) conditioned on \(x\). This problem can be solved by minimizing the quantile loss function (Bassett and Koenker, 1982) for a given quantile \(\alpha\) and some predicted value \(\hat{y}\):
\[\mathcal{L}_{\alpha}(y,\hat{y}):=\begin{cases}\alpha(y-\hat{y})&\text{if }y- \hat{y}>0,\\ (1-\alpha)(\hat{y}-y)&\text{otherwise.}\end{cases} \tag{1}\]
A critical property is that minimizing the quantile loss allows to retrieve the desired quantile in the sense that
\[\operatorname*{arg\,min}_{\hat{y}}\mathbb{E}_{y\sim F_{Y}}[\mathcal{L}_{ \alpha}(y,\hat{y})]=F_{Y}^{-1}(\alpha).\]
Given a set of \(n\) observations \(\mathcal{D}=\{(x_{i},y_{i})\}_{i=1}^{n}\), one can thus estimate the quantile function by training a model \(\hat{q}_{\alpha}\) with parameters \(\theta\) that minimizes the quantile loss given by Eq. 1:
\[\theta^{*}=\operatorname*{arg\,min}_{\theta}\frac{1}{n}\sum_{i=1}^{n}\mathcal{ L}_{\alpha}(y_{i},\hat{q}_{\alpha}(x_{i})). \tag{2}\]
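As an illustration, the quantile loss and the fitting of the \(m\) quantile models can be sketched as follows; the choice of gradient-boosted trees as the base learner is ours, not the paper's.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def pinball_loss(y, y_hat, alpha):
    """Quantile (pinball) loss of Eq. (1), averaged over observations."""
    diff = y - y_hat
    return np.mean(np.where(diff > 0, alpha * diff, (alpha - 1) * diff))

def fit_quantile_models(X, y, m=8):
    """Fit one regressor per quantile alpha_j = j / (m + 1), j = 1, ..., m."""
    alphas = [j / (m + 1) for j in range(1, m + 1)]
    models = [GradientBoostingRegressor(loss="quantile", alpha=a).fit(X, y)
              for a in alphas]
    return alphas, models
```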
Quantile Regression Surrogate.We now explain how we can leverage quantile regression to build a probabilistic surrogate for Bayesian Optimization.
To this end, we estimate \(m\) models by minimizing Eq. 2 for equally spread-out quantiles \(\{\alpha_{1},\ldots,\alpha_{m}\}\) where
\(\alpha_{j}=j/(m+1)\) and where \(m>1\) is an even number. This allows to provide probabilistic predictions conditioned on any hyperparameter \(x\). We then use independent Thompson sampling as the acquisition function by taking advantage that samples can easily be obtained through predicted quantiles. Indeed, one can then draw a sample \(\tilde{y}(x)\) from the estimated conditional distribution of \(F_{Y|X}\) by simply sampling a quantile at random among the \(m\) quantiles predicted by the model \(\{\hat{q}_{\alpha_{1}}(x),\ldots,\hat{q}_{\alpha_{m}}(x)\}\).1
Footnote 1: One downside typically associated with fitting \(m\) models is that the quantiles predicted may not be monotonic (Gasthaus et al., 2019), however this problem does not occur in our case since we simply use samples from the predicted distribution.
To select the next configuration to evaluate, we then use independent Thompson sampling as the acquisition function. We first sample a set of \(N\) candidates \(\tilde{\mathcal{X}}=\{\tilde{x}_{1},\ldots,\tilde{x}_{N}\}\) uniformly at random and then sample a validation performance for each of those candidates to obtain \(\{\tilde{y}(\tilde{x}_{1}),\ldots,\tilde{y}(\tilde{x}_{N})\}\). We then return the configuration that has the lowest sampled value
\[x^{*}=\operatorname*{arg\,min}_{x\in\{\tilde{x}_{1},\ldots,\tilde{x}_{N}\}} \tilde{y}(x).\]
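The acquisition step can be sketched as below, assuming the quantile models above and a user-supplied `sample_config` function that draws one configuration uniformly from the search space.

```python
import numpy as np

def thompson_next(models, sample_config, n_candidates=500, rng=None):
    """Independent Thompson sampling over the m predicted quantiles."""
    rng = np.random.default_rng() if rng is None else rng
    X_cand = np.array([sample_config() for _ in range(n_candidates)])
    preds = np.stack([mdl.predict(X_cand) for mdl in models], axis=1)  # shape (N, m)
    idx = rng.integers(0, preds.shape[1], size=n_candidates)           # one random quantile each
    sampled = preds[np.arange(n_candidates), idx]
    return X_cand[np.argmin(sampled)]                                  # lowest sampled value
```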
We illustrate this procedure in Figure 1 which shows how different quantiles are sampled in a toy example and how the next point is selected by picking the configuration with the lowest sampled predicted value. In this example, we consider \(f(x)\sim\mathcal{N}(0,\rho(x)^{2})\) as the function to minimize with \(\rho(x)=\sin(x)^{2}+\kappa\) where \(\kappa\) is set to \(\kappa=0.3\) to ensure positive variance. Samples of this function are shown in Figure 1, left. For this function, a standard GP is not able to represent the heteroskedastic variance and as such cannot favor any part of the space as shown by its uniformly distributed acquisition. However, while the mean of the function is always zero, the optimal points to sample are both situated at \(\pi/2\) and \(3\pi/2\) where the uncertainty is the highest. While, the GP cannot model this information given its homoskedastic noise, quantile regression is able to regress the conditional quantiles and therefore correctly identify the best regions to sample as evidenced by the acquisition function which peaks at the two best values for the next candidate.
Conformalizing predictions.While quantile regression can learn the shape of any continuous distribution given enough data, the model predictions are not guaranteed to be well calibrated given insufficient data.
More precisely, the quantiles estimated allows us to construct \(m/2\) confidence intervals
\[\mathcal{C}_{j}(x)=\left[\hat{q}_{\alpha_{j}}(x),\hat{q}_{1-\alpha_{j}}(x)\right]\]
for \(j\leq m/2\).2 For each confidence interval, we would like to have a miscoverage rate \(2\alpha_{j}\), i.e. the predictions should have probability at least \(1-2\alpha_{j}\) of being in the confidence interval \(\mathcal{C}_{j}(x)\),
Footnote 2: Note that the values chosen for the quantiles \(\alpha_{j}=j/(m+1)\) ensures that the quantile \(1-\alpha_{j}\) belongs to the \(m\) quantiles computed since \(1-\alpha_{j}=\alpha_{m-j}\).
\[\mathbb{P}[Y\in\mathcal{C}_{j}(x)]=1-2\alpha_{j}. \tag{3}\]
In the presence of heteroskedasticity, this requires to have the length of \(\mathcal{C}_{j}(x)\) to depend on \(x\) which is possible with the use of quantile regression as illustrated in Figure 1. However, the coverage statement of Eq. 3 cannot be guaranteed when fitting models on finite sample size. Miscalibrated intervals can be problematic for HPO, as it may lead to a model converging early to a suboptimal solution. To
Figure 1: Samples from the synthetic function \(f(x)\sim\mathcal{N}(0,\rho(x)^{2})\) to be minimized (left), illustration of the Thompson sampling procedure based on \(m=8\) predicted quantiles (middle) and acquisition function obtained for our method and a GP (right). When sampling, we sample one random quantile for each candidate and pick the best configuration obtained.
To address this problem, we propose using the split conformal method from Romano et al. (2019), which allows us to obtain robust coverage for each of the \(m/2\) predicted confidence intervals, as we now describe.
The method consists of applying an offset to each confidence interval, estimated on a validation set. We divide the dataset of available observations \(\mathcal{D}=\{(x_{i},y_{i})\}_{i=1}^{n}\) into a training set \(\mathcal{D}_{\text{train}}\) and a validation set \(\mathcal{D}_{\text{val}}\). After fitting each of the \(m\) quantile regression models \(\hat{q}_{\alpha_{j}}\) on the training set \(\mathcal{D}_{\text{train}}\), we compute conformity scores that measure how well the predicted confidence intervals fit the data for each sample in the validation set:
\[E_{j}=\left\{\text{max}(\hat{q}_{\alpha_{j}}(x_{i})-y_{i},y_{i}-\hat{q}_{1- \alpha_{j}}(x_{i}))\right\}_{i=1}^{|\mathcal{D}_{\text{val}}|}. \tag{4}\]
The intuition of the conformity scores is the following. First note that the sign of the score is positive when the target \(y_{i}\) falls outside of the predicted interval and negative when it falls inside. This allows the score to account for both overcoverage and undercoverage, as we want to reduce the interval in case of overcoverage and increase it in case of undercoverage. In addition, the score amplitude always measures the distance to the closest quantile of the confidence interval, i.e., the score amplitude of each sample is \(|y_{i}-q_{i}|\), where \(q_{i}\) is the quantile closest to \(y_{i}\) among \(\hat{q}_{\alpha_{j}}(x_{i})\) and \(\hat{q}_{1-\alpha_{j}}(x_{i})\).
Given this score we compute a correction \(\gamma_{j}\) which is set to
\[\gamma_{j}=(1-2\alpha_{j})\left(1+\frac{1}{|\mathcal{D}_{\text{val}}|}\right) \text{-th empirical quantile of }E_{j}. \tag{5}\]
The conformalized prediction interval for a new data point \((x,y)\) is then given by
\[\hat{\mathcal{C}_{j}}(x)=\left[\hat{q}_{\alpha_{j}}(x)-\gamma_{j},\hat{q}_{1- \alpha_{j}}(x)+\gamma_{j}\right]. \tag{6}\]
An important property of this procedure is that the corrected confidence intervals are guaranteed to have valid coverage, i.e., the probability that \(y\) belongs to the prediction interval \(\hat{\mathcal{C}_{j}}\) can be shown to be arbitrarily close to \(1-2\alpha_{j}\) (Romano et al., 2019)
\[1-2\alpha_{j}\leq\mathbb{P}[Y\in\hat{\mathcal{C}_{j}}(x)]\leq 1-2\alpha_{j}+ \frac{1}{|\mathcal{D}_{\text{val}}|+1}.\]
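A minimal sketch of the split-conformal correction of Eqs. (4)-(6) for a single interval; here `q_lo` and `q_hi` stand for the already fitted models \(\hat{q}_{\alpha_j}\) and \(\hat{q}_{1-\alpha_j}\), and clipping the empirical quantile level to 1 is an implementation detail assumed for small validation sets:

```python
import numpy as np

def conformalize_interval(q_lo, q_hi, X_val, y_val, alpha_j):
    """Split-conformal correction of one interval [q_lo, q_hi] at miscoverage 2*alpha_j."""
    lo = q_lo.predict(X_val)
    hi = q_hi.predict(X_val)
    # conformity scores of Eq. (4): positive when y falls outside the interval
    scores = np.maximum(lo - y_val, y_val - hi)
    n_val = len(y_val)
    # empirical quantile level of Eq. (5), clipped to 1 when the validation set is small
    level = min(1.0, (1 - 2 * alpha_j) * (1 + 1.0 / n_val))
    gamma = np.quantile(scores, level)

    def predict_interval(X):
        # conformalized interval of Eq. (6)
        return q_lo.predict(X) - gamma, q_hi.predict(X) + gamma

    return gamma, predict_interval
```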
Once the confidence intervals are readjusted by offsetting the quantiles, we can sample new candidates to evaluate using the Thompson sampling protocol discussed in the previous section, while being able to guarantee coverage properties of our predicted quantiles. The pseudo-code of the proposed algorithm to select the next configuration to evaluate is given in Algo. 1, where the three key steps are 1) fitting quantile regression models, 2) computing quantile adjustments and 3) sampling the best candidate with independent Thompson sampling.
## 5 Multi-fidelity and Successive Halving
Single-fidelity methods steer the search towards the most promising part of the configuration space based on the observed validation performance of hyperparameter configurations; however, this does not leverage other available signals, such as the loss emitted at each epoch for a neural network trained with SGD. Multi-fidelity methods consider this additional information to further accelerate the search by cutting poor configurations preemptively.
Formally, multi-fidelity optimization considers the following optimization problem:
\[x^{*}=\operatorname*{arg\,min}_{x\in\mathcal{X}}f(x,r_{\text{max}})\]
where \(f(x,r_{\text{max}})\) denotes the blackbox error obtained for a hyperparameter \(x\) at the maximum budget \(r_{\text{max}}\) (for instance the maximum number of epochs) and we assume that \(r\in[r_{\text{min}},r_{\text{max}}]\). Typically, early values of \(f(x,r)\) for \(r<r_{\text{max}}\) are informative of \(f(x,r_{\text{max}})\) while being computationally cheaper to collect, and can thus help us cut poorly performing configurations.
Asynchronous Successive Halving.Asynchronous Successive Halving (ASHA) (Li et al., 2019) is a multi-fidelity method that can leverage several workers to evaluate multiple configurations asynchronously while stopping poor configurations early. The method starts by evaluating a list of random configurations in parallel for a small initial budget. When a result is collected from a worker, the corresponding configuration is continued or stopped based on its result: the evaluation continues if it is among the top results seen so far for the given fidelity and is interrupted otherwise. Stopped configurations are replaced with new candidates sampled randomly and the process is iterated until the tuning budget is exhausted.
ASHA avoids synchronization points by evaluating each configuration based on the data available at the time, which can lead to false positives in the continuation decision. Indeed, some configurations may be continued due to poor competition rather than good performance and would have been stopped if more data had been available. However, the avoidance of synchronization points efficiently deals with straggler evaluations and is one of the key components of the method's excellent performance in practice. The pseudo-code of the method is given in the appendix.
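For reference, a schematic sketch of the asynchronous continuation decision (the bookkeeping and the reduction factor `eta` below are illustrative assumptions; the exact pseudo-code is the one given in the appendix): a configuration reporting a result at a rung is continued only if it ranks among the top \(1/\eta\) fraction of all results recorded so far at that rung.

```python
import numpy as np

class AshaDecider:
    """Keeps per-rung results and decides whether a configuration continues."""

    def __init__(self, eta=3):
        self.eta = eta
        self.results = {}  # rung -> list of observed errors

    def report_and_decide(self, rung, error):
        # record the new result at this rung; no synchronization point is needed
        self.results.setdefault(rung, []).append(error)
        errors = np.asarray(self.results[rung])
        k = max(1, int(len(errors) / self.eta))
        # continue only if the error is among the k best seen so far at this rung
        return error <= np.sort(errors)[k - 1]
```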
Model-based ASHA.One pitfall of ASHA is that candidates are sampled randomly, both at initialization and when workers become free after configurations are interrupted. However, after spending some time in the tuning process, we have gathered results which we would clearly like to use to bias the search towards the most promising parts of the configuration space.
One challenge is that most single-fidelity model-based approaches regress a surrogate model of \(f(x,r_{\text{max}})\) given observations at the final fidelity \(r_{\text{max}}\). It then becomes difficult to combine model-based and multi-fidelity approaches, given that when we stop a poor configuration at a resource \(r<r_{\text{max}}\), we are unsure about what the value of \(f(x,r_{\text{max}})\) would have been.
Bridging single and multi-fidelity methods.We propose a simple data transformation that allows us to use any single-fidelity method in the multi-fidelity setting. We denote the configurations and evaluations obtained at a given time as
\[\big{\{}(x_{i},\{f(x_{i},r_{1}),\ldots,f(x_{i},r_{n_{x_{i}}})\})\big{\}}_{i=1}^ {n}\]
where \(n\) denotes the number of configurations evaluated and \(n_{x}\) denotes the number of fidelities evaluated for a configuration \(x\).
We propose the transformation that takes the last value of the time-observations of a given configuration, namely \(z=f(x,r_{n_{x}})\), and then use a single-fidelity method rather than random search to determine the next configuration to evaluate given the observations \(\mathcal{D}=\{(x_{i},z_{i})\}_{i=1}^{n}\), while using ASHA for the stopping decisions.
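A sketch of this transformation (the learning-curve container used below is a hypothetical format, not an interface from the paper or from Syne Tune):

```python
def last_fidelity_dataset(curves):
    """curves: dict mapping a hashable config x to the list [f(x, r_1), ..., f(x, r_{n_x})].
    Returns (configs, targets) with z = f(x, r_{n_x}) for each configuration."""
    configs, targets = [], []
    for x, observations in curves.items():
        if observations:  # skip configurations with no result yet
            configs.append(x)
            targets.append(observations[-1])
    return configs, targets

# example: three configurations stopped at different rungs
curves = {(0.1, 32): [0.9, 0.7, 0.6], (0.01, 64): [0.8], (0.3, 16): [0.95, 0.9]}
X, z = last_fidelity_dataset(curves)
```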
Relying on the last observed value \(f(x,r_{n_{x}})\) rather than on all fidelities \(f(x,r)\) for \(r\leq r_{n_{x}}\) significantly simplifies the multi-fidelity setup but obscures a portion of the available signal. However, as evaluating configuration candidates longer is expected to improve their result, the data transformation effectively pushes the poorly and well performing configurations further apart. Assuming configurations are stopped with a probability inversely correlated with their performance, and that their results would not cross if the training were to continue, the result-based ordering of configurations remains the same whether we use the last observed or the final fidelity value. This means that it remains possible, under those assumptions, to discriminate between promising and unpromising configurations. 3
Footnote 3: The accentuated spread between bad and good configurations can be mitigated by using quantile normalization as the transformation is invariant to monotonic changes. We do not report results for this approach as it adds a layer of complexity and performed on-par with just taking the last observations in our experiments.
In addition to working well with the Conformal Quantile Regression method that we introduced, we will also show in our experiments that this simple transformation allows us to combine single-fidelity methods with ASHA while reaching the performance of state-of-the-art dedicated model-based multi-fidelity methods.
## 6 Experiments
We evaluate our method against state-of-the-art HPO algorithms on a large collection of real-world datasets on both single and multi-fidelity tasks. The code to reproduce our results is available at [https://github.com/geoalgo/sync-tune/tree/icml_conformal](https://github.com/geoalgo/sync-tune/tree/icml_conformal).
Benchmarks.Our experiments rely on 13 tasks coming from FCNet (Klein and Hutter, 2019), NAS201 (Dong and Yang, 2020) and LCBench (Zimmer et al., 2021) benchmarks as well as NAS301 (Siems et al., 2020) using the implementation provided in (Pfisterer et al., 2022). All methods are evaluated asynchronously with 4 workers. Details on these benchmarks and their configuration spaces distributions are given in Appendix B.
Baselines.For single-fidelity benchmarks, we compare our proposed method (CQR) with random-search (RS) (Bergstra et al., 2011), Gaussian Process (GP) (Snoek et al., 2012), regularized-evolution
(REA) (Real et al., 2019) and BORE (Tiao et al., 2021). For multi-fidelity benchmarks, we compare against ASHA (Li et al., 2019), BOHB (Falkner et al., 2018), Hyper-Tune (HT) (Li et al., 2022) and Mobster (MOB) (Klein et al., 2020).
Experiment Setup.All tuning experiments run asynchronously with 4 workers and are stopped when \(200*r_{\text{max}}\) results have been observed, which corresponds to seeing 200 different configurations for single-fidelity methods, or when the wallclock time exceeds a fixed budget. All runs are repeated with 30 different random seeds, and we report means and standard errors. We use gradient boosted trees (Friedman, 2001) for the quantile-regression models with the same hyperparameters used for BORE. We use the simulation backend provided by Syne Tune (Salinas et al., 2022) on an AWS m5.4xlarge machine to simulate methods, which allows us to account for both optimizer and blackbox runtimes.
Metrics.We benchmark the performance of methods using normalized regret, ranks averaged over tasks and critical diagrams. The normalized regret, also called average distance to the minimum, is defined as \((y_{t}-y_{\text{min}})/(y_{\text{max}}-y_{\text{min}})\), where \(y_{\text{min}}\) and \(y_{\text{max}}\) denote respectively the best and worst possible values of the blackbox \(f\) and \(y_{t}\) denotes the best value found at time-step \(t\) by a given method. To aggregate across tasks, we report scores at 50 fractions of the total budget for each task. We then average the normalized regret over each budget fraction across tasks and seeds. We compute ranks for each method at all time-steps and also average those values over tasks and seeds. Critical diagrams show groups of methods that are statistically tied together with a horizontal line, using the statistical test proposed by (Demsar, 2006). They are computed over the averaged ranks of the methods obtained for a fixed budget. The performance of all methods per task is also given in the appendix in Fig. 3, 4, 5.
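For completeness, a direct transcription of the normalized regret (it assumes the blackbox is minimized and that the task-specific \(y_{\text{min}}\) and \(y_{\text{max}}\) are known):

```python
import numpy as np

def normalized_regret(observed_errors, y_min, y_max):
    """Best-so-far error at each time step, rescaled to [0, 1] by the task's extrema."""
    best_so_far = np.minimum.accumulate(np.asarray(observed_errors, dtype=float))
    return (best_so_far - y_min) / (y_max - y_min)
```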
Results discussion for single-fidelity.Aggregated metrics of single-fidelity methods are shown in Figure 2, top. Our proposed method (CQR) outperforms all baselines in terms of both rank and regret. In particular, the critical diagrams show that our method outperforms all baselines at 50% of the tuning budget and is only statistically tied to BORE given 100% of the budget. Our results also show that GP-based tuning performs very well in the low-sample regime, where it can rely on its informative prior. After enough observations, our data-driven approach starts to outperform its Bayesian competitor as it can model irregular noise better.
Analyzing surrogate performance.To better understand performance gains, we now analyze the properties of different surrogate models in more detail. In Tab. 1, we compare the surrogate accuracy (RMSE), the quality of their uncertainty estimates (calibration error) and their runtime. In particular, we measure those metrics for different numbers of samples \(n\). In each case, we draw a random subset of size \(n\) to train the surrogate model and then evaluate the three metrics on the remaining unseen examples. Results are averaged across seeds and benchmarks and we normalize the target with quantile normalization. Results per task are also given in the appendix, as well as the definition of the calibration error metric.
We compare three surrogates: the baseline GP, conformalized quantile regression (CQR) as well as quantile regression (QR). Compared to the GP, the RMSE of the boosted-trees surrogates is always better, which is expected as boosted trees are known for their good performance on tabular data. However, a critical aspect in HPO is to provide good uncertainty estimates in addition to good mean predictors, as uncertainty plays a critical role in balancing exploration versus exploitation.
To measure the quality of uncertainty estimates, we analyze the calibration error of the different surrogates, which measures how over- or under-confident each predicted quantile of the surrogate predictive distribution is. When few observations are available (e.g. when \(n\leq 64\)), the quality of the uncertainty estimates of the GP is better compared to both boosted-tree methods, which is expected as the GP can rely on its prior in this regime, whereas the data-driven QR and CQR lack the amount of data needed to estimate quantile levels well enough. The lack of data also means that CQR cannot adjust confidence intervals accurately, given that its validation set is too small, and its calibration performance just matches that of QR. However, as the number of samples increases, the calibration of tree-based methods quickly becomes better, which underlines that quantile regression better fits the noise function observed in the benchmarks. As expected given the theoretical coverage guarantees, the calibration of CQR exceeds the calibration of QR given sufficient data, making it a better suited surrogate for HPO.
Results discussion for multi-fidelity.Next, we analyze the performance in the multi-fidelity setting in the middle of Fig. 2 where we show the performance of (CQR+MF)
\begin{table}
\begin{tabular}{l|ccc|ccc|ccc} \hline \hline & \multicolumn{3}{c|}{RMSE \(\downarrow\)} & \multicolumn{3}{c|}{Calibration error \(\downarrow\)} & \multicolumn{3}{c}{Runtime \(\downarrow\)} \\ model & GP & QR & CQR & GP & QR & CQR & GP & QR & CQR \\ \(n\) & & & & & & & & & \\ \hline
16 & 1.01 & 0.78 & 0.81 & 0.06 & 0.13 & 0.13 & 1.11 & 1.13 & 1.06 \\
64 & 0.92 & 0.57 & 0.58 & 0.04 & 0.10 & 0.08 & 1.71 & 1.48 & 1.43 \\
256 & 0.85 & 0.43 & 0.44 & 0.06 & 0.05 & 0.04 & 2.27 & 1.75 & 1.71 \\
1024 & 0.58 & 0.37 & 0.37 & 0.11 & 0.04 & 0.03 & 2.03 & 2.23 & 2.16 \\ \hline \hline \end{tabular}
\end{table}
Table 1: RMSE, Calibration error and runtime for different surrogates when increasing the number of samples.
which combines the single-fidelity method with the simple transformation described in section 5.
In contrast to single-fidelity, multi-fidelity optimization quickly yields many hundreds of observations and the majority of the tuning process lies in the high-data regime. In this setup, (CQR+MF) shows a significant improvement in terms of HPO performance. In particular, while most multi-fidelity approaches are not statistically distinguishable from ASHA, our proposed method (CQR+MF) offers statistically significant improvements over ASHA at all times and over all other model-based multi-fidelity methods after spending 50% and 100% of the total budget. We understand that the improvement mainly comes from the multi-fidelity setting, which offers more observations to the HPO tuning methods and thus plays into the strengths of the CQR surrogate in terms of accuracy and calibration, as illustrated in the previous paragraph.
Ablation study.The surrogate analysis showed that conformal prediction improves the calibration of quantile regression but has little effect on the surrogate RMSE. To examine the benefit of this contribution in the HPO setting, we next evaluate quantile regression with and without the conformal correction (QR+MF) in the bottom of Fig. 2. The performance of QR+MF is much worse than that of CQR+MF, which highlights the benefit of the better uncertainty estimates provided by conformalizing predictions.
Next, we investigate in Fig. 2 the performance of the best single-fidelity methods REA, GP and BORE extended to the multi-fidelity setting with our simple extension. As for QR+MF and CQR+MF, all those methods' surrogates are trained using the last fidelity observed for each hyperparameter and the worst configurations are stopped with asynchronous successive halving. While those methods perform worse than CQR+MF, they all outperform ASHA in terms of average rank and regret, except for REA, which we believe is due to the lower performance of the method.
Figure 2: Performance for single fidelity (top) multi-fidelity (middle) and multi-fidelity variants (bottom) for average rank over all tasks (left), normalized regret (middle) and critical diagrams obtained at 50% and 100% of the total budget (right).
Those simple extensions also match or improve over the performance of dedicated model-based multi-fidelity methods.
This illustrates the robustness of the proposed extension with respect to the choice of the single-fidelity method and also shows the potential of future work to extend other advanced single-fidelity methods (for instance, multi-objective or constrained ones) to the multi-fidelity case.
## 7 Conclusion
We presented a new HPO approach that makes it possible to use highly accurate tabular predictors, such as gradient boosted trees, while obtaining calibrated uncertainty estimates through conformal predictions. In addition, we showed that most single-fidelity methods can be extended to the multi-fidelity case by just using the last fidelity available, while achieving good performance.
The method we proposed has a few limitations. For instance, the use of Thompson sampling may be less efficient in the presence of many hyperparameters; as such, further work could consider extending the method with other acquisition functions, such as UCB. Further work could also investigate providing regret bounds or extensions to multi-objective or transfer learning scenarios.
|
2304.14076
|
Relativistic Moduli Space and critical velocity in kink collisions
|
We analyze the perturbative Relativistic Moduli Space approach, where the
amplitudes of the Derrick modes are promoted to collective coordinates. In
particular, we analyse the possibility to calculate the critical velocity,
i.e., the initial velocity of kinks at which single bounce scattering changes
into a multi-bounce or annihilation collision, in the resulting Collective
Coordinate Model (CCM). We find that for a growing number of modes the critical
velocity of the CCM approaches the full field theory value. This is verified in
the case of the $\phi^4$ model, where we reach a $99\%$ accuracy. We also see
such a convergence for a wide range of models belonging to the family of the
double sine-Gordon and Christ-Lee theories, especially in those cases where the
kinks do not reveal a too well pronounced half-kink inner structure.
|
C. Adam, D. Ciurla, K. Oles, T. Romanczukiewicz, A. Wereszczynski
|
2023-04-27T10:30:22Z
|
http://arxiv.org/abs/2304.14076v2
|
# Relativistic Moduli Space and critical velocity in kink collisions
###### Abstract
We analyze the perturbative Relativistic Moduli Space approach, where the amplitudes of the Derrick modes are promoted to collective coordinates. In particular, we analyse the possibility to calculate the critical velocity, i.e., the initial velocity of kinks at which single bounce scattering changes into a multi-bounce or annihilation collision, in the resulting Collective Coordinate Model (CCM). We find that for a growing number of modes the critical velocity of the CCM approaches the full field theory value. This is verified in the case of the \(\phi^{4}\) model, where we reach a 99% accuracy. We also see such a convergence for a wide range of models belonging to the family of the double sine-Gordon and Christ-Lee theories, especially in those cases where the kinks do not reveal a too well pronounced half-kink inner structure.
## I Introduction
When topological kinks [1; 2] collide, they show very complicated behavior. They can (quasi) elastically scatter and reappear in the final state with smaller or bigger energy loss. They can also annihilate, that is, completely decay into small waves, i.e., radiation, often via the formation of an intermediate long-living quasi-periodic state called oscillon (or bion). These two scenarios are supplemented by the appearance of few-bounce events, where the colliding kink and antikink meet a few times before they manage to escape to infinity [3; 4; 5]. The resulting behavior in a particular process strongly depends on initial parameters like the velocities of the kinks [3; 4; 5] or the amplitudes of additional excitations [6]. This complex pattern of possible outcomes is encapsulated in the famous chaotic or even fractal structure in the final state formation [3; 4].
The explanation of this involved, chaotic pattern, even for the simplest, prototypical kink-antikink collision in \(\phi^{4}\) theory, was a long-standing challenge which only very recently found a satisfactory resolution. Indeed, for more than 40 years it was expected that the fractal structure is related to the _resonant transfer mechanism_, which is nothing but a flow of energy between kinetic and internal degrees of freedom (DoF) [3; 4]. The kinetic DoF is just the kinetic motion of the soliton, connected with the zero mode arising from the translational invariance of the theory, while the internal DoF are typically normal or quasi-normal modes hosted by the solitons. In the simplest case, initially the colliding solitons have only kinetic energy, which during the first collision is not only lost via the emission of radiation but also excites the internal modes. This may result in a situation where the kink and antikink do not have enough kinetic energy to overcome their mutual attractive force and, therefore, have to collide again. Often, more collisions mean more radiation and further energy loss, but sometimes the energy stored in the internal DoF can return to the kinetic DoF, allowing the solitons to break free and escape to infinity.
This simple and beautiful mechanism explains, e.g., the frequency of the oscillations in two and even three bounce windows in the \(\phi^{4}\) model. However, serious problems in the construction of a Collective Coordinate Model (CCM) based on these DoF [7] cast some doubts on the resonant transfer mechanism and/or the CCM approach itself [8]. Fortunately, it was recently shown that there exists a two-dimensional CCM based on only two collective coordinates, the position of the solitons and the amplitude of their internal mode (in this case called shape mode), which _qualitatively_ reproduces the full field theory dynamics [9].
This breakthrough led to another important question, namely whether there may exist a CCM-type approach which not only qualitatively agrees with the full theory, but which could be treated as a precision tool, allowing for a more detailed, _quantitative_ agreement. In particular, whether there is a CCM scheme in which (at least some) observables converge to the values computed in the full field theory.
The first step in this attempt has already been made, by the discovery of the so-called perturbative Relativistic Moduli Space (pRMS) approach, where instead of a very restricted number of normal bound modes one uses an, in principle, _arbitrary_ number of _Derrick modes_[10]. It has been shown that using two Derrick modes significantly improves the results of the CCM, indicating that such a converging framework may indeed exist.
It is the main objective of the current work to further explore this possibility. In particular, we will show that one of the most important observables, i.e., the _critical velocity_ which separates the region of the simplest one bounce scattering from multi-bounce/annihilation processes, can be very accurately captured by a CCM based on the pRMS, where the agreement gets better when we increase the number of Derrick modes. This will be established for a wide set of models, especially those for which kinks are localized around one center rather
than two or more. Thus, we find a first converging CCM-like framework, at least for the critical velocity.
## II Perturbative relativistic moduli space
Let us begin with a general real scalar field theory in (1+1) dimensions
\[L=\int_{-\infty}^{\infty}\left(\frac{1}{2}\left(\partial_{\mu}\phi\right)^{2}-U( \phi)\right)dx, \tag{1}\]
where the field theoretical potential \(U\) defines a particular theory and has at least two vacua. This allows for the existence of topologically nontrivial solitons called (anti)kinks \(\Phi_{K(\bar{K})}(x;a)\) which obey the well-known static Bogomolny equation
\[\frac{d\phi}{dx}=\pm\sqrt{U(\phi)}. \tag{2}\]
Here \(a\in\mathbb{R}\) is the position of the soliton and its arbitrariness reflects the translational invariance of the field theory. Obviously, a static (anti)kink can be boosted in a Lorentz covariant way. Unfortunately, there are typically no analytical solutions describing a dynamical process involving both a kink and an antikink and, apart from the numerical treatment of the full equations of motion, not many methods are available. One important exception is the collective coordinate model (CCM) method, which allows one to reduce the infinitely many DoF of the original field theory to a discrete set of parameters, i.e., _moduli_, whose time evolution approximates the full PDE dynamics, see e.g., [11; 12].
Concretely, the infinite dimensional space of field configurations is reduced to an \(N\)-dimensional subspace spanned by a restricted set of configurations \(\mathcal{M}(X^{i})=\{\Phi(x;X^{i}),i=1..N\}\). The identification of such a set is usually a very nontrivial task. In the next step, the continuous parameters \(X^{i}\) are promoted to time dependent coordinates providing a mechanical-like system
\[L[\mathbf{X}]=\int_{-\infty}^{\infty}\mathcal{L}[\Phi(x;X^{i}(t))]\,dx=\frac{1 }{2}g_{ij}(\mathbf{X})\dot{X}^{i}\dot{X}^{j}-V(\mathbf{X})\,, \tag{3}\]
where
\[g_{ij}(\mathbf{X})=\int_{-\infty}^{\infty}\frac{\partial\Phi}{\partial X^{i}} \frac{\partial\Phi}{\partial X^{j}}\,dx \tag{4}\]
is the metric on the moduli space \(\mathcal{M}\) and
\[V(\mathbf{X})=\int_{-\infty}^{\infty}\left(\frac{1}{2}\left(\frac{\partial \Phi}{\partial x}\right)^{2}+U(\Phi)\right)\,dx \tag{5}\]
is an effective potential.
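In practice, both ingredients of the CCM are often evaluated by numerical quadrature. A minimal sketch of Eqs. (4) and (5) is given below; the integration window, the grid, and the illustrative choice of a \(\phi^{4}\)-like configuration with a single Derrick-type amplitude are assumptions made here only for demonstration:

```python
import numpy as np

def _integrate(f, x):
    """Composite trapezoidal rule on a fixed grid."""
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(x)))

def metric_and_potential(phi, U, X, x=np.linspace(-30.0, 30.0, 6001), eps=1e-6):
    """Numerical moduli-space metric g_ij(X) and effective potential V(X), cf. Eqs. (4)-(5)."""
    X = np.asarray(X, dtype=float)
    base = phi(x, X)
    # partial derivatives d(Phi)/d(X^i) by central finite differences
    dphi = []
    for i in range(len(X)):
        dX = np.zeros_like(X)
        dX[i] = eps
        dphi.append((phi(x, X + dX) - phi(x, X - dX)) / (2.0 * eps))
    g = np.array([[_integrate(di * dj, x) for dj in dphi] for di in dphi])
    V = _integrate(0.5 * np.gradient(base, x) ** 2 + U(base), x)
    return g, V

# illustrative use: a phi^4 kink plus one Derrick-like amplitude C
kink = lambda x, X: np.tanh(x) + X[0] * x / np.cosh(x) ** 2
U_phi4 = lambda f: 0.5 * (1.0 - f ** 2) ** 2
g, V = metric_and_potential(kink, U_phi4, [0.0])
```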
This approach works especially well for BPS models, where there is no static force between solitons. E.g., it explains the famous \(\pi/2\) scattering of the Abelian Higgs vortices at critical coupling, or of BPS monopoles [11; 12]. Moreover, it is also very useful in non-BPS theories, where it gives strong evidence that certain effects arising in soliton dynamics result from a coupling between kinetic and internal DoF. See, for example, the fractal structure in the final state formation in kink-antikink collisions in the \(\phi^{4}\) model [9]. Finally, the identification of the correct collective coordinates is of high importance for the semiclassical quantization of solitons [13; 14; 15; 16].
Typically, the construction of a moduli space for multi-kink processes begins with single kink states, \(\Phi_{K(\bar{K})}(x;a)\), which are then trivially superposed. For a realistic description of non-BPS processes, the inclusion only of the single (anti)kink solutions at different positions \(a\) is not enough and one needs to add single soliton modes \(\eta_{i}^{K(\bar{K})}(x;a)\). For example, for symmetric kink-antikink (KAK) collisions we choose
\[\Phi_{K\bar{K}}(x;a,\mathbf{X}) =\Phi_{K}(x;-a)+\Phi_{\bar{K}}(x;a)+\Phi_{vac}\] \[+\sum_{i=1}^{N}X^{i}\left(\eta_{i}^{K}(x;-a)+\eta_{i}^{\bar{K}}(x ;a)\right), \tag{6}\]
where \(\Phi_{vac}\) is a vacuum value added to provide the correct boundary conditions. Sometimes, even multi-soliton (delocalized) modes have to be included [17].
It is important to notice that in this construction the Lorentz covariance of the original field theory is lost, and we arrive at a non-relativistic CCM [1]. This is a valid approximation if the velocity of the solitons is small; otherwise a more refined approach is needed.
Such a construction exists and requires taking into account at least one additional, scale (Derrick) deformation, \(x\to bx\), where the scale factor \(b\in\mathbb{R}_{+}\)[18; 19]. In the single soliton sector it gives the following moduli space
\[\mathcal{M}_{K(\bar{K})}(a,b)=\{\Phi_{K(\bar{K})}(b(x-a))\}. \tag{7}\]
The resulting CCM possesses a stationary solution
\[\dot{a}=v,\ \ b=\frac{1}{\sqrt{1-v^{2}}}, \tag{8}\]
which describes the Lorentz contraction of a boosted (anti)kink.
This nice relativistic generalization meets, unfortunately, serious difficulties if applied to KAK collisions in theories where the kink and antikink are related by a simple change of sign
\[\Phi_{\bar{K}}(x)=-\Phi_{K}(x), \tag{9}\]
which happens, for example, in the \(\phi^{4}\) and sine-Gordon models. Indeed, then the KAK moduli space
\[\mathcal{M}_{K\bar{K}}(a,b)=\{\Phi_{K}(b(x+a))-\Phi_{K}(b(x-a))\} \tag{10}\]
reveals a singularity at \(a=0\), where two cone-like surfaces (for \(a>0\) and \(a<0\)) are joined. This is
a genuine singularity which cannot be removed by any change of coordinates. As a consequence, the resulting CCM collapses at this point [10].
To overcome this obstacle, a perturbative scheme called the _perturbative Relativistic Moduli Space_ (pRMS) approach has been introduced recently [10]. As its name suggests, the Derrick deformation is treated perturbatively, i.e., \(b=1+\epsilon\), where \(\epsilon\) is formally a small parameter. Then, we insert it into the single kink solution and expand in powers of \(\epsilon\) to an arbitrary order \(N\)
\[\Phi_{K}(b(x-a))=\sum_{k=0}^{N}\frac{\epsilon^{k}}{k!}(x-a)^{k}\Phi_{K}^{(k)}( x-a)+o(\epsilon^{N}), \tag{11}\]
where \(\Phi^{(k)}\) is the \(k\)th derivative of the kink. Then we take the crucial step and treat each term in the expansion as an independent deformation, the \(k\)th Derrick mode. This means that we replace \(\epsilon^{k}\) by a new, independent collective coordinate \(C_{k}\), obtaining the following restricted set of configurations
\[\Phi_{K}(x;a,\mathbf{C})=\Phi_{K}(x-a)+\sum_{k=1}^{N}\frac{C_{k}}{k!}(x-a)^{k} \Phi_{K}^{(k)}(x-a). \tag{12}\]
Importantly, this framework introduces an _arbitrary_ number of collective coordinates. The first few may be assigned to normal modes (shape modes) of the kink, while higher ones have frequencies which are above the mass threshold and, therefore, can be associated with a sort of radiation, at least on short time scales. Of course, they do not represent true radiation, since all Derrick modes are bound to the kink and cannot escape to infinity.
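Since Eq. (12) only involves derivatives of the static kink, the Derrick modes are easy to generate symbolically. A minimal sketch, using (purely for illustration) the \(\phi^{4}\) kink \(\tanh x\) discussed in the next section and centered at \(a=0\):

```python
import sympy as sp

x = sp.symbols("x", real=True)
Phi_K = sp.tanh(x)  # illustrative kink profile

def derrick_mode(k):
    """k-th Derrick mode  x^k / k! * d^k Phi_K / dx^k  (cf. Eq. (12) with a = 0)."""
    return sp.simplify(x ** k / sp.factorial(k) * sp.diff(Phi_K, x, k))

modes = [derrick_mode(k) for k in range(1, 6)]  # the first five modes of Fig. 2
# e.g. the first mode is x*sech(x)**2
```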
Similarly as in the non-perturbative relativistic set-up, there is a solution of the single soliton sector which describes an approximation of a boosted kink. Indeed, a CCM based on (12) has a stationary solution
\[\dot{a}=v,\ \ C^{k}=\tilde{C}^{k}, \tag{13}\]
where the \(\tilde{C}^{k}\) solve the following algebraic equations
\[\frac{v^{2}}{2}\frac{\partial g_{aa}(\mathbf{C})}{\partial C^{k}}=\frac{ \partial V(\mathbf{C})}{\partial C^{k}}. \tag{14}\]
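Given numerical callables for \(g_{aa}(\mathbf{C})\) and \(V(\mathbf{C})\) (e.g. obtained by quadrature as sketched above), the stationary amplitudes \(\tilde{C}^{k}(v)\) defined by Eq. (14) can be found with a standard root solver; the following is a schematic sketch with finite-difference gradients, not the code used in [10]:

```python
import numpy as np
from scipy.optimize import fsolve

def stationary_amplitudes(g_aa, V, v, n_modes, eps=1e-6):
    """Solve  (v^2/2) dg_aa/dC^k = dV/dC^k  for the boosted-kink amplitudes C~^k."""

    def grad(f, C):
        # central finite-difference gradient of a scalar function of C
        C = np.asarray(C, dtype=float)
        out = np.zeros_like(C)
        for k in range(len(C)):
            dC = np.zeros_like(C)
            dC[k] = eps
            out[k] = (f(C + dC) - f(C - dC)) / (2.0 * eps)
        return out

    def residual(C):
        return 0.5 * v ** 2 * grad(g_aa, C) - grad(V, C)

    return fsolve(residual, x0=np.zeros(n_modes))
```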
It is straightforward to construct the KAK moduli space. Namely, it reads
\[\Phi_{K\bar{K}}(x;a,\mathbf{C}) = \Phi_{K}(x+a)-\Phi_{K}(x-a)+\Phi_{vac} \tag{15}\] \[+ \sum_{k=1}^{N}\frac{C_{k}}{k!}\left((x+a)^{k}\Phi_{K}^{(k)}(x+a)\right.\] \[- \left.(x-a)^{k}\Phi_{K}^{(k)}(x-a)\right).\]
It, again, leads to a singularity on the moduli space at \(a=0\), but now this is an apparent singularity which can be removed by a suitable change of coordinates [20]. Concretely, we apply the following redefinition \(C_{k}\to C_{k}/\tanh(a)\) and arrive at the following expression for the restricted set of configurations,
\[\Phi_{K\bar{K}}(x;a,\mathbf{C}) = \Phi_{K}(x+a)-\Phi_{K}(x-a)+\Phi_{vac} \tag{16}\] \[+ \sum_{k=1}^{N}\frac{C_{k}}{k!\tanh(a)}\left((x+a)^{k}\Phi_{K}^{(k)}(x+a)\right.\] \[- \left.(x-a)^{k}\Phi_{K}^{(k)}(x-a)\right).\]
which finally leads to a well-defined CCM
\[L[a,\mathbf{C}]=\int_{-\infty}^{\infty}\mathcal{L}[\Phi_{K\bar{K}}(x;a, \mathbf{C})]dx. \tag{17}\]
This must be further equipped with suitable initial conditions. In all cases, we scatter initially unexcited solitons which are boosted towards each other with an initial velocity \(v_{in}\). Thus, we inherit the initial conditions from the stationary solution of the single soliton CCM
\[a(0)=a_{0},\ \ \dot{a}(0)=v_{in},\ \ C^{k}(0)=\tilde{C}^{k}(v_{in}),\ \ \dot{C}(0)=0, \tag{18}\]
where \(a_{0}\) is half of the initial separation of the kink and the antikink.
In the next sections, we will test this construction for three types of theories, the \(\phi^{4}\), Christ-Lee and double sine-Gordon models.
## III \(\phi^{4}\) model
In our first example, we continue the previous study of KAK collisions in the \(\phi^{4}\) model
\[U_{\phi^{4}}=\frac{1}{2}(1-\phi^{2})^{2}. \tag{19}\]
This is the prototypical kink process in a non-integrable theory whose explanation has been a challenge for many years.
The kink and antikink at the origin are given as
\[\Phi_{K(\bar{K})}(x)=\pm\tanh(x) \tag{20}\]
and they host only one bound mode, called the shape mode, known in exact form,
\[\eta_{sh}(x)=\sqrt{\frac{3}{2}}\frac{\sinh(x)}{\cosh^{2}(x)}, \tag{21}\]
with frequency \(\omega_{sh}=\sqrt{3}\). At \(\omega^{2}=4\) the continuum spectrum of scattering modes begins.
As is well known, the KAK scattering in the \(\phi^{4}\) model exhibits a chaotic structure in the final state formation [3; 4; 5; 8], where between the single-bounce scattering and the region of complete annihilation, one finds a very complicated, fractal pattern of few-bounce windows surrounded by other annihilation regions, see Fig. 1, upper panel. In bounce windows, the solitons for certain
collisions gain a sufficient amount of kinetic energy to overcome the attractive kink-antikink force and escape to infinities as free final states. In the annihilation regions, called bion chimneys, they form an imperfect version of the famous breather, i.e., oscillon, which here slowly decays to the vacuum by the emission of radiation. One very important observable is the _critical velocity_ which divides the one-bounce collisions from the chaotic regime. In this case, \(v_{crit}=0.2598\). We will model this highly complicated dynamics using the pRMS construction.
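Operationally, \(v_{crit}\) can be located by bisection on the initial velocity, given a routine that integrates the CCM (or the PDE) and decides whether a given collision is one-bounce. The sketch below assumes such a hypothetical predicate `is_one_bounce(v)`, and that velocities just above the critical one indeed give one-bounce scattering:

```python
def critical_velocity(is_one_bounce, v_lo=0.05, v_hi=0.5, tol=1e-4):
    """Bisection for the boundary velocity between multi-bounce/annihilation and one-bounce.
    Assumes one-bounce behaviour at v_hi and multi-bounce/annihilation at v_lo."""
    assert is_one_bounce(v_hi) and not is_one_bounce(v_lo)
    while v_hi - v_lo > tol:
        v_mid = 0.5 * (v_lo + v_hi)
        if is_one_bounce(v_mid):
            v_hi = v_mid
        else:
            v_lo = v_mid
    return 0.5 * (v_lo + v_hi)
```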
In Fig. 1 we show the KAK dynamics obtained in the CCM based on the pRMS (15). We vary the number of Derrick modes from one to four. The results for \(L[a,C^{1}]\) and \(L[a,C^{1},C^{2}]\) were originally presented in [10]. We see that even the simplest CCM, which contains only the first Derrick mode, qualitatively reproduces the full PDE dynamics. This qualitative agreement improves significantly if we include the second Derrick mode, see Fig. 1. Indeed, the previously observed unwanted three and four-bounce windows, which existed for \(v_{in}\in(0.11,0.2)\), disappear. Also an overall shift of the bounce structure to larger velocities is not present any longer. In fact, the results come into _quantitative_ agreement with the full field theory.
Figure 1: Kink-antikink collision in the \(\phi^{4}\) model: full theory (upper) and in CCM based on pRMS with one (central left), two (central right), three (lower left) and four (lower right) Derrick modes.
This is solid evidence that the resonant energy transfer involving the kinetic and internal DoF is responsible for the fractal structure observed in the formation of the final state.
We also note that the second Derrick mode may to some extent play the role of radiation since its frequency lies above the mass threshold, \(\omega_{2}^{2}=6.9283\). Also the frequency of the first Derrick mode is closer to the frequency of the shape mode if we include the second Derrick mode. Namely, it shrinks from \(\omega_{1}^{2}=3.1011\) (only the first Derrick mode included) to \(\omega_{1}^{2}=3.0221\).
If we include the third Derrick mode, even more bounce windows disappear and the results seem to be a little bit worse than in the case with two Derrick modes, Fig. 1, lower left panel. The picture improves if we consider four Derrick modes, Fig. 1, lower right panel. The overall tendency is that higher-rank bounce windows are suppressed and the CCM scattering is dominated by one, two or three bounce windows and bion chimneys. Thus, from the point of view of higher-bounce windows, the inclusion of too many Derrick modes does not seem to be an optimal strategy.
There is, however, an amazing improvement in the prediction of the critical velocity if we increase the number of Derrick modes, see Tab. 1. The agreement between the CCM models and the full field theory grows from \(90\%\) for one Derrick mode to a striking \(99\%\) if four Derrick modes are taken into account.
## IV Time dynamics of Derrick modes
In fact, the problem in the description of higher-rank bounces as well as the significant improvement of the prediction of the critical velocity may have the same origin. Namely, both seem to be related to the fact that the higher Derrick modes can be used to approximate radiation only on a short time scale. Initially, they effectively transfer energy from the kinetic motion and the first Derrick mode (which acts as the shape mode). However, in contrast to radiation, the higher Derrick modes are also confined to the (anti)kink. Thus, the fraction of energy stored in those modes is never released to infinity but, after a rather short time, can re-enter the center of the soliton, leading to an additional excitation of the zero and/or the first Derrick mode. Thus, medium- and large-time effects may be affected by this "bounded radiation" effectively provided by the higher-order Derrick modes.
On the other hand, the critical velocity divides the regime with only one bounce from a more complicated behaviour. Thus, it is controlled by the behaviour of the kinks at the first bounce, that is, in the short time interval where the Derrick modes are still trustworthy.
To verify this heuristic argumentation, we analyze the time dynamics of the Derrick modes in the single soliton sector. First of all, let us observe that higher-rank Derrick modes are more spread out, as shown in Fig. 2.
Figure 4: Power spectrum of the evolution of the shape mode.
Figure 3: Shape mode evolution in the \(\phi^{4}\) model (blue curve) and in the CCM based on five Derrick modes (orange).
Figure 2: The first five Derrick modes of the \(\phi^{4}\) kink.
\begin{table}
\begin{tabular}{c c c} \hline \hline \(N\) & \(v_{crit}\) & agreement [\%] \\ \hline
1 & 0.2654 & 90.15 \\
2 & 0.2491 & 96.80 \\
3 & 0.2639 & 98.42 \\
4 & 0.2618 & 99.23 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Comparison of the critical velocities for the KAK collision in \(\phi^{4}\) model obtained in the CCM based on \(N=1,2,3\) and 4 Derrick modes. The true critical velocity is \(v_{cr}=0.2598\).
This is the reason why these modes can serve as a surrogate of radiation. Namely, they can transfer energy away from the center of the kink.
However, as shown in Fig. 3, such an energy transfer occurs only for not too long time scales. Here we show the decay of the shape mode in the full field theory and the one obtained in the single kink CCM with the first five Derrick modes included. The initial shape mode amplitude is \(A(0)=0.5\). As proved in [21], the shape mode decays due to nonlinearities of the \(\phi^{4}\) model, which couple the normal mode to radiative modes. This leads to a \(t^{-1/2}\) decay of the shape mode amplitude at large time. In the CCM approach, we initially see a decay of the shape mode amplitude due to the flow of energy to higher Derrick modes. This happens for a few shape mode oscillations. At later times, we observe the reverse energy transfer, and the amplitude of the shape mode increases. This is then repeated many times, leading to an apparent double oscillation structure. This may result in a too high value of the shape mode amplitude at the second and higher bounces, which in the end can be a source of the growing disagreement in their description.
The emergence of long-time-scale oscillations in the CCM approximation of the shape mode is also clearly visible in Fig. 4, where we plot the power spectrum for the solutions of Fig. 3. Indeed, besides a very good agreement in the frequency of the shape mode (and its higher harmonics), we see a peak in the CCM dynamics at a small frequency.
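The power spectrum shown in Fig. 4 can be computed directly from the time series of the shape mode amplitude; a minimal sketch (assuming a uniform sampling step `dt`):

```python
import numpy as np

def power_spectrum(amplitude, dt):
    """One-sided power spectrum of a uniformly sampled amplitude A(t)."""
    A = np.asarray(amplitude, dtype=float)
    A = A - A.mean()  # remove the constant offset
    spec = np.abs(np.fft.rfft(A)) ** 2
    freq = 2 * np.pi * np.fft.rfftfreq(len(A), d=dt)  # angular frequency omega
    return freq, spec
```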
## V Christ-Lee model
Now, we turn to a family of theories known as the Christ-Lee model [22],
\[U_{CL}=\frac{1}{2(1+\epsilon^{2})}(\epsilon^{2}+\phi^{2})(1-\phi^{2})^{2}. \tag{22}\]
This is a version of a sixth order potential which was extensively analyzed as a (1+1) dimensional analog of the bag model, with kinks playing the role of confined quarks in a solitonic baryon.
Here, \(\epsilon\in[0,\infty)\) is a parameter which allows for the interpolation between the standard \(\phi^{6}\) model (\(\epsilon\to 0\)) and the \(\phi^{4}\) model (\(\epsilon\to\infty\)). For any non-zero \(\epsilon\), this theory has two vacua at \(\phi=\pm 1\) and, therefore, supports a kink
\[\Phi_{K}=\frac{\epsilon\sinh(x)}{\sqrt{1+\epsilon^{2}\cosh^{2}(x)}} \tag{23}\]
and a symmetric antikink \(\Phi_{\bar{K}}(x)=-\Phi_{K}(x)\). As \(\epsilon\) decreases to \(0\), the (anti)kink reveals a composite structure with two clearly visible centers, which can be interpreted as half-kinks separated by a plateau of the false vacuum with \(\phi\approx 0\). The distance between the half-kinks increases as \(\epsilon\to 0\). In this limit, we obtain two infinitely separated \(\phi^{6}\) kinks, or, more precisely, the mirror kink \((-1,0)\) and the kink \((0,1)\). Note that the half-kink and half-antikink are not symmetric in our sense, exactly as for the \(\phi^{6}\) kink and antikink. Of course, for any finite \(\epsilon\), the half-kinks are confined and cannot be arbitrarily separated to form free states.
Due to the emerging half-kink composite structure arising for small \(\epsilon\), kink-antikink collisions in this regime may potentially reveal some additional complexity if compared with the usual multi-kink scattering in \(\phi^{4}\) or \(\phi^{6}\) theories. Indeed, such a collision looks rather like a process of four half-kinks, where each half-kink collision not only excites bound modes but also leads to the emission of radiation, which can affect subsequent collisions. As is well known, collisions between excited solitons are more complicated than in the standard unexcited case [6; 23]. Therefore, it is a very nontrivial task to model them within any CCM.
Another factor which may increase the complexity of the collisions is the mode structure.
Figure 5: The lowest frequency bound mode in the Christ-Lee model (blue) and the first Derrick mode (orange).
Figure 6: The critical velocity \(v_{cr}\) in the Christ-Lee model (green) and result obtained from the CCM based on one (blue) and two (orange) Derrick mode(s).
For \(\epsilon=\infty\), where we recover the \(\phi^{4}\) model, there is only one bound mode, the shape mode. This mode exists for all finite \(\epsilon\), but its frequency decreases as \(\epsilon\to 0\). However, already for arbitrarily large but finite \(\epsilon\), another mode shows up from the mass threshold. When \(\epsilon\) decreases, more and more bound modes show up. They are of the same nature as the delocalized or trapped two-soliton bound modes observed in antikink-kink collisions in \(\phi^{6}\) theory, see [24] and [17]. In fact, the half-kinks are connected by the \(\phi=0\) false vacuum, whose mesonic excitations have a smaller mass than the small waves excited in the true vacua. Thus, an effective two-(half)soliton potential well appears. Its width grows with the distance between the half-kinks and, therefore, the number of hosted modes grows without limit as \(\epsilon\) decreases to \(0\). In Fig. 5 we plot the lowest energy mode only. Its frequency changes from \(\omega^{2}=3\) for \(\epsilon\rightarrow\infty\), where we find just the \(\phi^{4}\) shape mode, to \(\omega=0\) for \(\epsilon=0\), where we find two \(\phi^{6}\) kinks, which, if treated separately, do not possess any bound modes. In this limit, this mode tends to another zero mode, as the two half-kinks become independent kinks at \(\epsilon=0\). This behaviour is quite well approximated by the first Derrick mode. As we see in Fig. 5, its frequency agrees very well with the frequency of the shape mode down to small \(\epsilon\), where a growing discrepancy becomes visible.
Following these observations, one can expect that KAK scatterings in the Christ-Lee model strongly depend on the value of \(\epsilon\), see [25], [26]. Again, an important observable is the critical velocity, which separates one-bounce collisions, where the kink and antikink backscatter to infinity, from more involved dynamics. Obviously, the critical velocity also varies with the parameter of the model. In fact, the observed relation is very nontrivial, as shown in Fig. 6, green curve.
Figure 8: KAK collision in the Christ-Lee model. Left: annihilation to two oscillons (\(\epsilon=0.55,v_{in}=0.24\)). Right: annihilation to three oscillons (\(\epsilon=0.80,v_{in}=0.14\)).
Figure 7: Kink-antikink collision in the Christ-Lee model. _Upper:_ full field theory computation for \(\epsilon=3\) (left) and \(\epsilon=0.5\) (right). _Lower:_ the CCM result based on pRMS with the first Derrick mode.
In the limit \(\epsilon\to\infty\) it approaches the \(\phi^{4}\) model value, \(v_{cr}=0.2598\). Then, for decreasing \(\epsilon\), \(v_{cr}\) also decreases until \(\epsilon=1.4\), where the critical velocity takes the minimal value, \(v_{cr}=0.07\). For even smaller \(\epsilon\), \(v_{cr}\) begins to grow, reaching a local maximum, and then decreases again. Such a non-monotonous behaviour repeats as \(\epsilon\) tends to \(0\).
Now, we apply the pRMS construction to model KAK collisions. In particular, we will focus on the critical velocity. We find that the highly nontrivial \(v_{cr}(\epsilon)\) relation is already reproduced quite well by a CCM based on the pRMS with just one Derrick mode. See Fig. 6, where the blue curve representing the CCM result rises and decreases in good agreement with the PDE computations. An especially nontrivial fact is that we can quite well approximate the critical velocity even in the regime of a relatively small \(\epsilon\), where the kinks begin to exhibit a well pronounced double half-kink structure.
In general, the inclusion of the second Derrick mode improves the agreement between the CCM and PDE computations, see the orange curve in Fig. 6, which now lies much closer to the line obtained in the full theory. Thus, once again we find evidence for the convergence of the presented framework. This is the case for models with \(\epsilon>0.4\). For smaller \(\epsilon\), the results get worse. This can probably be explained by the fact that in this regime the half-kink structure dominates the scatterings. Therefore, the addition of the next Derrick mode of the full kink, which treats the two half-kinks forming a kink as one rigid structure, is not related to any actual situation in the full theory. As we already mentioned, in the small-\(\epsilon\) limit the kink-antikink scattering is, in reality, a four-soliton collision, where some of the final states can be explained only in terms of half-kink processes. These are, e.g., annihilations of the kink-antikink pair into two and three oscillons, Fig. 8.
We underline that our results are obtained within our general pRMS scheme without the necessity to include any parameter-dependent fitted function [25]. The fact that we reproduced the critical velocity curve quite well using only the lowest Derrick normal mode suggests that the higher frequency modes, which appear as \(\epsilon\) decreases, may not be the main factors in scattering processes of the Christ-Lee kinks. It seems that, rather, the inner structure of the kinks plays the most significant role [26].
In Fig. 7 we also show scans of the KAK collisions for two values of \(\epsilon\). It is clearly seen that there is a good qualitative agreement in the single kink regime (\(\epsilon=3.0\)), where the results resemble the KAK scattering in \(\phi^{4}\) model, as well as in the half-kink regime (\(\epsilon=0.5\)). Of course, due to the lack of radiation, the CCM has a tendency to provide a larger number of bounce windows instead of the expected bion chimneys. We saw this feature already in the case of the \(\phi^{4}\) model.
## VI Double sine-Gordon model
The last family of models we want to analyze is the well-known double sine-Gordon model,
\[U_{2sG}=\tanh^{2}R\left(1-\cos\phi\right)+\frac{4}{\cosh^{2}R}\left(1+\cos \frac{\phi}{2}\right), \tag{24}\]
where the parameter \(R\in[0,\infty)\). This model interpolates between the ordinary sine-Gordon model, \(R\to\infty\), and another (rescaled) sine-Gordon model at \(R\to 0\). The corresponding kink
\[\Phi_{K}=4\arctan\left(\frac{\sinh x}{\cosh R}\right) \tag{25}\]
is in fact a superposition of two sine-Gordon kinks located at \(x=\pm R\)
\[\Phi_{K}=4\arctan e^{x+R}-4\arctan e^{R-x}. \tag{26}\]
Thus, similarly to the Christ-Lee model, the kink develops an inner, composite structure. Here, it occurs for growing \(R\).
Figure 10: The critical velocity \(v_{cr}\) in the double sine-Gordon model (green) and result obtained from the CCM based on one Derrick mode (blue) and two Derrick modes (orange).
Figure 9: The bound mode in the double sine-Gordon model (blue) and the first Derrick mode (orange).
If compared with the Christ-Lee model, the main difference is the fact that the double sine-Gordon kink may host only one massive bound mode for all \(R\in(0,\infty)\), see Fig. 9, blue curve. Since in the limiting cases we obtain the sine-Gordon models, this shape mode disappears as \(R\to 0\) and \(R\to\infty\). Specifically, for \(R\to 0\), the shape mode becomes a non-normalizable threshold mode whose frequency tends to \(1\), which is the value at which the continuum spectrum begins. For \(R\to\infty\), the shape mode approaches the second zero mode, since we tend to a model which is a double copy of the sine-Gordon model.
The bound mode is reasonably well approximated by the Derrick mode, see Fig. 9, orange curve. This works especially well for \(R\approx 1\). For too small \(R\), the frequency of the Derrick mode is bigger than \(1\). For \(R>2\), where the shape mode can be treated as a quasi-zero mode, the frequency of the Derrick mode is significantly above \(0\). This behaviour corresponds to the \(\epsilon\to 0\) limit of the Christ-Lee model.
Kink scattering processes in the double sine-Gordon model have been extensively studied in the literature [27; 28; 29; 30; 31]. The main message is that the chaotic structure in the final state formation, which reveals multi-bounce windows immersed between bion chimneys, ceases to exist if \(R\) is too small or too large, i.e., if we are too close to the ordinary sine-Gordon model. Concretely, bounce windows are observed for \(R\gtrsim 0.5\) and \(R\lesssim 2\). For smaller or bigger values of \(R\) the critical velocity rapidly decreases and no chaotic structures are found. Once again, the critical velocity shows a very nontrivial dependence on the parameter of the model, \(R\), see Fig. 10. Note that here the elastic scattering is a process where the solitons pass through each other and reappear as free final states.
The pRMS once again leads to CCMs which capture the critical velocity quite well. As expected from the previous analysis, this especially concerns the regime where the kink is not divided into two half-kinks, that is, \(R\lesssim 2\); see Fig. 10, where we compare the \(v_{cr}(R)\) curve obtained in the full field theory with those obtained in the CCM based on one and two Derrick modes. Even the first Derrick mode qualitatively captures the dependence of \(v_{cr}\) on \(R\). However, the inclusion of the second Derrick mode significantly improves the agreement. This provides further evidence for the convergence of our perturbative scheme.
In Fig. 11 we compare the KAK dynamics of the double sine-Gordon model with the CCM based on the first Derrick mode for \(R=0.5\) and \(R=1.5\).
## VII Summary
In the current work, we show that the perturbative Relativistic Moduli Space (pRMS) framework, based on collective coordinates provided by scaling Derrick modes, can be very successfully applied to model kink-antikink collisions in (1+1)-dimensional field theories. In particular, we found that the corresponding CCMs can reproduce the critical velocity \(v_{cr}\) amazingly well. \(v_{cr}\) is one of the most important quantities characterizing multi-kink collisions.
Figure 11: Kink-antikink collision in the double sine-Gordon model. _Upper:_ full field theory computation for \(R=0.5\) (left) and \(R=1.5\) (right). _Lower:_ the CCM result based on pRMS with the first Derrick mode.
It separates the one-bounce regime (where solitons meet only once and then are backscattered or pass through each other) from the multi-bounce regime (where kinks meet several times and, eventually, either reappear in the final state as free particles or annihilate to the vacuum).
Importantly, increasing the set of collective coordinates by including a larger number of Derrick modes, we see that the critical velocity derived in the CCM shows a tendency to converge to the value obtained in the original field theory. Thus, the pRMS approach seems to provide a convergent approximation for the critical velocity. To the best of our knowledge, this is the _first example_ of a _convergent_ perturbative expansion based on collective coordinates.
This result applies to theories with two qualitative features:
* the solitons do not reveal too well pronounced inner structure, manifested, e.g., as the existence of half-kink substructures;
* the solitons host a well pronounced (not necessarily unique) bound mode.
As the first example, we considered the \(\phi^{4}\) theory, where the critical velocity obtained in the CCM with the first four Derrick modes agrees to better than 99% precision with the full field theory result. Next, we computed the critical velocity in the pRMS framework for the Christ-Lee and double sine-Gordon families of models and found that the CCM computations capture the non-trivial dependence of \(v_{cr}\) on the parameter of the model already if only one Derrick mode is included. The addition of the second mode increases the agreement as long as the kinks do not exhibit a too well visible inner half-kink structure, that is, \(\epsilon\gtrsim 0.4\) for the Christ-Lee model and \(R\lesssim 2\) for the double sine-Gordon model.
It should be stressed that, within this range of parameters, the critical velocity is still given by a rather nontrivial, non-monotonous curve, which is very well reproduced in the CCM approach. In fact, at the boundaries of this range one can even recognize the appearance of the inner half-kink structure. Thus, the fact that our approach still keeps its predictive power is even more striking.
Moreover, our result may indicate that the non-monotonous dependence of the critical velocity on the parameter \(\epsilon\) in the Christ-Lee model is probably related to the appearance of the inner half-kink structure rather than to the growing number of normal modes. This is based on two observations. Firstly, a similar pattern occurs in the double sine-Gordon model where the number of internal modes remains constant for all values of the parameter \(R\) which also controls the half-kink structure. Secondly, our CCM model captures this non-monotonous dependence even if it is based only on the first Derrick mode. One can, therefore, conjecture that, in general, for a given field theory, the appearance of an inner substructure in the solitonic solution has probably a bigger impact on the kink dynamics than the existence of a larger number of normal modes.
It is also straightforward to see how one may further improve the CCM description in the composite kink limit. Namely, instead of treating the kink as one particle with its Derrick modes, we should rather introduce collective coordinates associated with each half-kink independently. This approach should also apply to composite kinks in other field theories, e.g., [32; 33]. We remark that such a construction may still be applicable in the case where the half-kinks are (almost) on top of each other. Indeed, the motion of the half-kink may mimic the shape mode. Interestingly, such a decomposition of a kink into smaller structures resembles the ideas recently proposed in the so-called mechanization program [34].
There are several interesting directions in which our work may be developed. First of all, one should include more Derrick modes and test the convergence of the critical velocity at higher order. Secondly, one should improve the approach in the half-kink regime along the lines described above. Thirdly, one should use the CCMs obtained here and explain, hopefully in an analytical way, the shape of the \(v_{cr}\) curve. Indeed, it would be great to know which features of a kink field theory decide the value of the critical velocity.
Given their relevance for the understanding of the classical soliton dynamics, one may ask about the role of Derrick modes in quantum kink models. A natural path for this study can be the recently developed manifestly finite approach by Evslin [35], in which various shape mode-meson processes have already been analyzed [36; 37]. Of course, a more direct approach, i.e., canonical quantization of the CCMs analyzed here, is also an interesting option. It would be very interesting to compare it with recent semiclassical results, see [38].
Looking from a wider perspective, our findings provide further solid arguments for the importance of the resonant energy transfer mechanism in solitonic collisions. They also contribute to the recent spectacular improvements in the application of the collective coordinate approach to kink collisions. In fact, until quite recently, not a single CCM was known which could genuinely model KAK scattering processes, see e.g., [8]. Nowadays, the number of theories where CCMs give good or even excellent predictions grows rapidly, see [9; 10; 17; 39]. All of this shows that the moduli space description is a powerful technique, even for the analysis of such extremely complicated processes as kink collisions.
## Acknowledgements
The authors acknowledge financial support from the Ministry of Education, Culture, and Sports, Spain (Grant No. PID2020-119632GB-I00), the Xunta de Galicia (Grant No. INCITE09.296.035PR and Centro singular de investigacion de Galicia accreditation 2019-2022), the Spanish Consolider-Ingenio 2010 Programme CPAN (CSD2007-00042), and the European Union ERDF. DC and KO were supported by the Polish National Science Centre (Grant NCN 2021/43/D/ST2/01122).
## Appendix A Numerical approach
Integrating the equations of motion of a CCM is a challenging task from a numerical point of view. Even if both the moduli space metric and the effective potential are known in analytical form, floating point errors can be the source of many problems [9]. In the case of the \(\phi^{4}\) model, e.g., subtracting higher order derivatives led to a so-called catastrophic cancellation problem, especially near \(a=0\). It appeared to be more stable to evaluate the metric and potential numerically [10]. Such a procedure reduced the number of numerical artefacts and, surprisingly, did not lead to a large computational time overhead. Unfortunately, some numerical problems remained, especially when the Derrick modes are divided by \(\tanh(a)\), which is required to resolve the null vector problem.
One remedy was to expand the profile functions as a Taylor series for \(|a|<a_{cut}\ll 1\). This, however, complicates the procedure and, what is more important, introduces small discontinuities at \(a=\pm a_{cut}\), which can lead to more numerical artefacts (due to the violation of the assumptions of existence and uniqueness theorems).
For cases where the dimension of the moduli space is not too large, the best approach is to store the equations of motion in the form of cubic splines. This proved to be very effective even for very complicated profiles such as instantons [39]. Unfortunately, for moduli spaces with more than two coordinates, such an approach would require storing a large amount of data, and the direct integration of the equations of motion at each time step was more efficient. Furthermore, the discontinuity issue still remains.
In order to avoid discontinuities at \(a=\pm a_{cut}\) we adopted a different approach. Note that the divisor \(\tanh(a)\) in (15) is somewhat arbitrary. Indeed, it can be any smooth function of \(a\) which obeys two properties. Namely, it tends to \(\pm 1\) as the kink-antikink distance goes to infinity, \(\tanh(a)\to\pm 1\) as \(a\to\pm\infty\), and it has a linear zero at \(a\to 0\). This can be achieved, e.g., by replacing the Derrick modes
\[D_{k}(x,a)= \tag{16}\] \[\frac{1}{k!}\left((x+a)^{k}\Phi_{K}^{(k)}(x+a)-(x-a)^{k}\Phi_{K}^ {(k)}(x-a)\right)\]
with the following smooth approximation
\[D_{k}(x,a)\to\frac{D_{k}(x)}{\cosh(\alpha a)}+D_{k}(x,a)\tanh(\alpha a), \tag{17}\]
where \(D_{k}(x)\) is the limit
\[D_{k}(x) =\lim_{a\to 0}\frac{D_{k}(x,a)}{a}\] \[=-\frac{2x^{k-1}}{k!}\left(k\Phi^{(k)}(x)+x\Phi^{(k+1)}(x)\right). \tag{18}\]
We fit the scaling constant \(\alpha\) in such a way that (17) best approximates (16). Such an approximation is always regular and smooth and avoids the division by \(\tanh(a)\) near \(a=0\). This procedure is more robust and almost completely eradicates numerical artefacts.
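To make the regularisation concrete, the following minimal Python sketch (not the authors' code) implements Eqs. (16)-(17) assuming the standard \(\phi^{4}\) kink profile \(\Phi_{K}(x)=\tanh(x)\); normalisation conventions may differ from the paper's, the \(a\to 0\) limit is evaluated numerically instead of using the closed form (18), and the value of \(\alpha\) is illustrative.

```python
# A minimal sketch (not the authors' code) of the regularised Derrick modes,
# Eqs. (16)-(17), for the phi^4 kink Phi_K(x) = tanh(x) (illustrative assumption).
import numpy as np
import sympy as sp
from math import factorial

xs = sp.symbols('x')
PhiK = sp.tanh(xs)                         # phi^4 kink profile (assumption)

def phiK_deriv(k):
    """k-th derivative of the kink profile as a vectorised numerical function."""
    return sp.lambdify(xs, sp.diff(PhiK, xs, k), 'numpy')

def derrick_mode(k, x, a):
    """Antisymmetrised Derrick mode D_k(x, a) of Eq. (16)."""
    f = phiK_deriv(k)
    return ((x + a)**k * f(x + a) - (x - a)**k * f(x - a)) / factorial(k)

def derrick_mode_limit(k, x, eps=1e-6):
    """Numerical evaluation of D_k(x) = lim_{a -> 0} D_k(x, a)/a, cf. Eq. (18)."""
    return derrick_mode(k, x, eps) / eps

def derrick_mode_smooth(k, x, a, alpha):
    """Smooth replacement following Eq. (17); regular at a = 0."""
    return (derrick_mode_limit(k, x) / np.cosh(alpha * a)
            + derrick_mode(k, x, a) * np.tanh(alpha * a))

# quick check: the smooth version stays finite and continuous across a = 0
x = np.linspace(-8.0, 8.0, 801)
for a in (-1e-3, 0.0, 1e-3):
    assert np.all(np.isfinite(derrick_mode_smooth(1, x, a, alpha=1.0)))
```

In practice, \(\alpha\) is then determined, e.g., by a least-squares comparison of the smooth expression against (16) over a grid of \((x,a)\) values.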
|
2308.09494
|
Numerical analysis of the Maxwell-Cattaneo-Vernotte nonlinear model
|
In the literature, one can find numerous modifications of Fourier's law from
which the first one is called Maxwell-Cattaneo-Vernotte heat equation. Although
this model has been known for decades and successfully used to model
low-temperature damped heat wave propagation, its nonlinear properties are
rarely investigated. In this paper, we aim to present the functional
relationship between the transport coefficients and the consequences of their
temperature dependence. Furthermore, we introduce a particular implicit
numerical scheme in order to solve such nonlinear heat equations reliably. We
investigate the scheme's stability, dissipation, and dispersion attributes as
well. We demonstrate the effect of temperature-dependent thermal conductivity
on two different initial-boundary value problems, including time-dependent
boundaries and heterogeneous initial conditions.
|
A. J. A. Ramos, A. D. S. Campelo, M. M. Freitas, R. Kovács
|
2023-08-18T12:09:42Z
|
http://arxiv.org/abs/2308.09494v1
|
# Numerical analysis of the Maxwell-Cattaneo-Vernotte nonlinear model
###### Abstract
In the literature, one can find numerous modifications of Fourier's law, of which the first one is called the Maxwell-Cattaneo-Vernotte heat equation. Although this model has been known for decades and successfully used to model low-temperature damped heat wave propagation, its nonlinear properties are rarely investigated. In this paper, we aim to present the functional relationship between the transport coefficients and the consequences of their temperature dependence. Furthermore, we introduce a particular implicit numerical scheme in order to solve such nonlinear heat equations reliably. We investigate the scheme's stability, dissipation, and dispersion attributes as well. We demonstrate the effect of temperature-dependent thermal conductivity on two different initial-boundary value problems, including time-dependent boundaries and heterogeneous initial conditions.
keywords: non-Fourier heat conduction, thermodynamic compatibility, irreversible thermodynamics Msc: [2010] 35E15, 65M06, 93D20 +
Footnote †: journal: Elsevier
## 1 Introduction
In recent years, numerous heat conduction models have been developed to provide a more efficient modeling tool for complex problems related to wave propagation under low-temperature conditions [1; 2], in rarefied media [3; 4; 5], in nanosystems [6; 7], or to over-diffusion in complex heterogeneous material structures [8; 9]. The basic properties of the heat equations depend on the particular thermodynamic background. For instance, the approach of Rational Extended Thermodynamics (RET) [10; 11] exploits kinetic theory rigorously; thus it requires particular assumptions about the microscopic mechanisms and results in a model with given transport coefficients. A continuum theory, on the contrary, does not need any prior assumption, and it remains arbitrary whether the continuum equations inherit the particular coefficients from RET. Such approaches are called Extended Irreversible Thermodynamics (EIT) [12; 13] and Non-Equilibrium Thermodynamics with Internal Variables (NET-IV) [14; 15; 16]. Furthermore, while RET derives the balances through a momentum series expansion of the Boltzmann transport equation, EIT and NET-IV start with the balances and derive the constitutive equation from the second law of thermodynamics, using Onsagerian relations. These procedures are discussed in detail in [17]. Here, we want to focus on the simplest heat equation beyond Fourier, called the Maxwell-Cattaneo-Vernotte (MCV) or Cattaneo equation [18, 19], following from a continuum theory. Hence, it reads
\[\tau q_{t}+q=-\lambda T_{x}, \tag{1}\]
where \(\tau\) and \(\lambda\) are the relaxation time and thermal conductivity coefficients, which are not related to any microscopic mechanism. Furthermore, for our purpose, a one-dimensional rigid, isotropic conductor is satisfactory, in which \(q\) and \(T\) are the heat flux and temperature. The subscripts \(t\) and \(x\) stand for the corresponding partial derivatives. Although the validity of the MCV equation is restricted to low-temperature situations [20], e.g., it can model damped wave propagation (second sound) well, and thus its role remains marginal in standard engineering practice, there are crucial properties that need to be understood.
We place the emphasis on nonlinearities, particularly, on the temperature-dependent coefficients. In the previous work of Rogolino and Kovacs [21], based on the Onsagerian form of the MCV equation,
\[\left(\frac{1}{T}\right)_{x}-\rho(T)m\,q_{t}-l(T)q=0,\quad m>0,\quad l>0, \tag{2}\]
it has been underlined that the coefficients are not independent of each other. For instance, assuming a linear \(T\)-dependent thermal conductivity,
\[\hat{\lambda}=\lambda_{0}+a(T-T_{0}),\quad\lambda_{0}=\hat{\lambda}(T_{0})>0,\quad a\in\mathbb{R}, \tag{3}\]
influences the relaxation time as well. Since \(m>0\) is a constant (otherwise further terms would enter the constitutive equation (2)), the mass density \(\rho\) must depend on the temperature as well in order to achieve the desired \(\tau=\tau(T)\) dependence due to \(l(T)\)
\[\frac{\rho(T)m}{l(T)}q_{t}+q=-\frac{1}{l(T)}\frac{1}{T^{2}}T_{x},\quad\tau(T) =\frac{\rho(T)m}{l(T)},\quad\lambda(T)=\frac{1}{l(T)}\frac{1}{T^{2}}. \tag{4}\]
Consequently,
\[l(T)=\frac{1}{\big[\lambda_{0}+a(T-T_{0})\big]T^{2}}\qquad\text{and}\qquad\rho(T)=\frac {\tau}{m}l(T), \tag{5}\]
and due to \(\rho(T)\), mechanics should be involved in the modeling. In other words, even the simplest extension of Fourier's law, with a straightforward temperature dependence added to the thermal conductivity, leads to a complicated thermo-mechanical model. Furthermore, in [21], an explicit finite difference technique is utilized, for which one needs to determine the stability criteria, and the numerical solutions suffer from artificial dispersion errors.
Since such a model is nonlinear, the stability properties are not straightforward to determine. In the present paper, we want to provide further insights into the numerical solution of the nonlinear MCV equation; however, for our purposes, we need to simplify Eq. (4). Although any simplification inevitably truncates the physical content of this nonlinear model, it is satisfactory to keep the relaxation time \(\tau\) constant, and we highlight that this limits the physical validity of the model. This yields
\[\tau q_{t}+q+\big{[}\lambda_{0}+a(T-T_{0})\big{]}T_{x}=0. \tag{6}\]
After some manipulations we have
\[\tau q_{t}+q+aT\,T_{x}+\lambda T_{x}=0, \tag{7}\]
where \(\lambda=\lambda_{0}-aT_{0}\). In the present paper, we limit ourselves to Eq. (7), and introduce an implicit numerical approach to efficiently handle the nonlinear term \(aTT_{x}\). We prove that the implicit discretization is unconditionally stable, thus it is free from stability issues. Additionally, we also prove that the numerical solution has minimal distortion by dissipation and is free from dispersion errors.
## 2 Initial and boundary conditions
In the following, we consider the MCV model in the form
\[\rho cT_{t}+q_{x}=0\quad\text{in}\quad(0,\ell)\times(0,\infty), \tag{8}\] \[\tau q_{t}+q+aT\,T_{x}+\lambda T_{x}=0\quad\text{in}\quad(0,\ell) \times(0,\infty), \tag{9}\]
where Eq. (8) supplements the constitutive equation; it is called the balance equation of the internal energy \(e\), for which we assumed that \(e=cT\) with \(c\) being the specific heat, and all volumetric heat sources are omitted. The length of the conducting medium is \(\ell\). We consider two types of boundary conditions:
\[\textbf{Boundary type I:}\quad\begin{cases}q(0,t)=0,\quad\text{for all}\quad t \geq 0,\\ q(\ell,t)=0,\quad\text{for all}\quad t\geq 0,\end{cases} \tag{10}\]
\[\textbf{Boundary type II:}\quad\begin{cases}q(0,t)=\begin{cases}1-\cos\big{(}2 \pi t/t_{p}\big{)},\quad\text{if}\quad 0<t\leq t_{p},\quad t_{p}>0\\ 0,\quad\text{if}\quad t>t_{p},\\ \end{cases}\\ q(\ell,t)=0,\quad\text{for all}\quad t\geq 0,\end{cases} \tag{11}\]
for which we assign also two types of initial conditions:
\[\textbf{Initial condition I:}\quad T(x,0)=T_{0}(x),\quad q(x,0)=q_{0}(x), \quad x\in(0,\ell), \tag{12}\]
\[\textbf{Initial condition II:}\quad T(x,0)=T_{0},\quad q(x,0)=q_{0}\equiv 0, \quad x\in(0,\ell). \tag{13}\]
The type I. initial and boundary conditions represent a heterogeneous initial state: the spatially dependent temperature distribution induces a non-homogeneous heat flux field. We observe the time evolution of such a system with adiabatic boundary conditions; therefore, the resulting temperature distribution is not influenced by environmental conditions such as heat convection.
The type II. setting, however, reproduces the usual conditions of a heat pulse experiment: the initial steady state is excited by a heat pulse of duration \(t_{p}\). The particular form of Eq. (11) is advantageous from a numerical point of view. The heat flux boundary is initiated with zero time derivative; therefore, artificial oscillations of such a source can be avoided, as illustrated by the sketch below.
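As a small illustration of this point, the following Python snippet (not part of the paper) evaluates the pulse of Eq. (11) and checks that its one-sided slopes at switch-on and switch-off are of order \(h\), i.e., the excitation is smooth, in contrast to a step-like boundary; the pulse duration below is an arbitrary example value.

```python
# A minimal sketch (not from the paper) of the type II heat-pulse boundary, Eq. (11).
import numpy as np

def q_left_boundary(t, t_p):
    """Heat flux prescribed at x = 0 for a smooth pulse of duration t_p."""
    t = np.asarray(t, dtype=float)
    return np.where((t > 0) & (t <= t_p), 1.0 - np.cos(2.0 * np.pi * t / t_p), 0.0)

t_p, h = 0.01, 1e-7                          # illustrative pulse duration and step
for t0 in (0.0, t_p):                        # slopes at switch-on and switch-off
    slope = (q_left_boundary(t0 + h, t_p)
             - q_left_boundary(max(t0 - h, 0.0), t_p)) / (2.0 * h)
    print(f"one-sided slope near t = {t0}: {slope:.1e}")   # O(h), vanishing with h
```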
## 3 Numerical linearization method
Let us rewrite the system of equations (8)-(9) in the form
\[\rho cT_{t}+q_{x}=0\quad\text{in}\quad(0,\ell)\times(0,\mathcal{T }), \tag{14}\] \[\tau q_{t}+q+\frac{a}{2}\big{(}T^{2}\big{)}_{x}+\lambda T_{x}=0 \quad\text{in}\quad(0,\ell)\times(0,\mathcal{T}), \tag{15}\]
and use an implicit finite difference method to discretize the system (14)-(15). More precisely, we consider \(J,N\in\mathbb{N}\), and set \(\Delta x=\dfrac{\ell}{J+1},\Delta t=\dfrac{\mathcal{T}}{N+1}\) and we introduce a uniform mesh
\[0=x_{0}<x_{1}<\cdot\cdot\cdot<x_{j}=j\Delta x<\cdot\cdot\cdot<x_ {J}<x_{J+1}=\ell,\quad j=0,1,...,J+1, \tag{16}\] \[0=t_{0}<t_{1}<\cdot\cdot\cdot<t_{n}=n\Delta t<\cdot\cdot\cdot<t _{N}<t_{N+1}=\mathcal{T},\quad n=0,1,...,N+1. \tag{17}\]
where \(\mathcal{T}\) denotes the entire time interval used in the simulations, and the indices \(j\) and \(n\) stand for the corresponding space and time steps. We construct the implicit numerical scheme as
\[\rho c\dfrac{T_{j,n}-T_{j,n-1}}{\Delta t}+\dfrac{q_{j+1,n}-q_{j,n }}{\Delta x}=0,\quad j=0,1,...,J,\quad n=1,2,...,N, \tag{18}\] \[\tau\dfrac{q_{j,n}-q_{j,n-1}}{\Delta t}+q_{j,n}+\dfrac{a}{2} \dfrac{T_{j,n}^{2}-T_{j-1,n}^{2}}{\Delta x}+\lambda\dfrac{T_{j,n}-T_{j-1,n}}{ \Delta x}=0,\quad j=1,2,...,J,\quad n=1,2,...,N, \tag{19}\]
supplemented with the discrete boundary,
\[\textbf{Boundary type I:}\quad\begin{cases}q_{0,n}=0,\quad\text{for all}\quad n=0,1,...,N+1,\\ q_{J+1,n}=0,\quad\text{for all}\quad n=0,1,...,N+1,\end{cases} \tag{20}\]
\[\textbf{Boundary type II:}\quad\begin{cases}q_{0,n}=\begin{cases}1-\cos\big{(}2 \pi t_{n}/t_{p}\big{)},\quad\text{if}\quad 0<n\leq p,\quad p\in\mathbb{N},\\ 0,\quad\text{if}\quad n>p,\\ \\ q_{J+1,n}=0,\quad\text{for all}\quad n=0,1,...,N+1,\end{cases} \tag{21}\]
and initial conditions,
\[T_{j,0}=T_{j}^{0},\quad q_{j,0}=q_{j}^{0},\quad\text{for all}\quad j=0,1,...,J+1. \tag{22}\]
The scheme, in its present state, is also nonlinear due to the terms \(T_{j,n}^{2}\), corresponding to the unknown temperature value at the new time instant. However, while preserving that nonlinearity in (19), we can shift its evaluation from the unknown level \(n\) to the known level \(n-1\). Let us assume that \(T(x,t)\) is sufficiently regular and use the Taylor expansion to write
\[T^{2}(x,t-\Delta t)=T^{2}(x,t)-\Delta t\frac{\partial T^{2}(x,t)}{\partial T} \frac{\partial T}{\partial t}+O(\Delta t^{2}).\]
Using backward difference to approximate \(T_{t}\), it yields
\[T^{2}(x,t-\Delta t)=T^{2}(x,t)-2T(x,t)\big{(}T(x,t)-T(x,t-\Delta t)\big{)}+O( \Delta t^{2}).\]
Furthermore,
\[T^{2}(x,t) = -T^{2}(x,t-\Delta t)+2T(x,t)T(x,t-\Delta t)+O(\Delta t^{2}) \tag{23}\] \[= T^{2}(x,t-\Delta t)+2T(x,t-\Delta t)\big{(}T(x,t)-T(x,t-\Delta t )\big{)}+O(\Delta t^{2})\]
holds. Now we use the Taylor expansion given in (23) to obtain an approximation for the nonlinear term \((T_{j,n}^{2}-T_{j-1,n}^{2})/\Delta x\) given in (19), that is, we consider
\[\frac{T_{j,n}^{2}-T_{j-1,n}^{2}}{\Delta x} \approx \frac{T_{j,n-1}^{2}-T_{j-1,n-1}^{2}}{\Delta x}+2\frac{T_{j,n-1} \big{(}T_{j,n}-T_{j,n-1}\big{)}-T_{j-1,n-1}\big{(}T_{j-1,n}-T_{j-1,n-1}\big{)} }{\Delta x}. \tag{24}\]
Substituting (24) into (19),
\[\rho c\frac{T_{j,n}-T_{j,n-1}}{\Delta t}+\frac{q_{j+1,n}-q_{j,n}} {\Delta x}=0,\quad j=0,1,...,J,\quad n=1,2,...,N, \tag{25}\] \[\tau\frac{q_{j,n}-q_{j,n-1}}{\Delta t}+q_{j,n}+\frac{a}{2}\frac{ T_{j,n-1}^{2}-T_{j-1,n-1}^{2}}{\Delta x}+a\frac{T_{j,n-1}\big{(}T_{j,n}-T_{j,n-1} \big{)}-T_{j-1,n-1}\big{(}T_{j-1,n}-T_{j-1,n-1}\big{)}}{\Delta x}\] \[+\lambda\frac{T_{j,n}-T_{j-1,n}}{\Delta x}=0,\quad j=1,2,...,J, \quad n=1,2,...,N. \tag{26}\]
so that the nonlinearity does not affect the calculation of the new time instant; this is the numerical linearization we performed.
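The accuracy of this linearization can be checked directly. The short Python snippet below (not from the paper) compares the exact nonlinear term with the surrogate (24) for an arbitrary smooth test profile \(T(x,t)\) and confirms that the discrepancy decreases as \(O(\Delta t^{2})\); the test function and grid values are illustrative.

```python
# A minimal numerical check (not from the paper) of the linearisation (24).
import numpy as np

def T(x, t):
    """Arbitrary smooth test profile, used only for this illustration."""
    return 2.0 + np.sin(3.0 * x) * np.exp(-0.5 * t) + 0.3 * x * t

dx, x = 0.01, 0.4
for dt in (1e-2, 1e-3, 1e-4):
    Tn   = np.array([T(x - dx, 1.0), T(x, 1.0)])            # new level n
    Tnm1 = np.array([T(x - dx, 1.0 - dt), T(x, 1.0 - dt)])  # known level n-1
    exact  = (Tn[1]**2 - Tn[0]**2) / dx
    approx = ((Tnm1[1]**2 - Tnm1[0]**2) / dx
              + 2.0 * (Tnm1[1] * (Tn[1] - Tnm1[1])
                       - Tnm1[0] * (Tn[0] - Tnm1[0])) / dx)
    print(f"dt = {dt:.0e}   |exact - approx| = {abs(exact - approx):.3e}")
# the error drops by roughly 100x when dt is reduced by 10x, i.e. O(dt^2) accuracy
```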
For any \(n\), we can introduce \(\phi_{j}:=T_{j,n}-T_{j,n-1}\) and \(\psi_{j}:=q_{j,n}-q_{j,n-1}\), obtaining the system
\[\rho c\phi_{j}+r(\psi_{j+1}-\psi_{j})=-r(q_{j+1,n-1}-q_{j,n-1}), \quad j=0,1,...,J, \tag{27}\] \[2(\tau+\Delta t)\psi_{j}+2r(\lambda+aT_{j,n-1})\phi_{j}-2r( \lambda+aT_{j-1,n-1})\phi_{j-1}=-ar(T_{j,n-1}^{2}-T_{j-1,n-1}^{2})\] \[-2\lambda r(T_{j,n-1}-T_{j-1,n-1})-2\Delta tq_{j,n-1}\quad j=1,2,...,J, \tag{28}\]
for \(\{\phi_{0},\phi_{1},...,\phi_{J}\}\) and \(\{\psi_{1},\psi_{2},...,\psi_{J}\}\), with \(r=\Delta t/\Delta x\). In this case, the representations of the type I. and type II. boundary conditions are given by
\[\textbf{Boundary type I:}\quad\begin{cases}\psi_{0}=0,\\ \psi_{J+1}=0,\end{cases} \tag{29}\]
\[\textbf{Boundary type II:}\quad\begin{cases}\psi_{0}=\begin{cases}\cos \big{(}2\pi t_{n-1}/t_{p}\big{)}-\cos\big{(}2\pi t_{n}/t_{p}\big{)},\quad\text{ if}\quad 0<n\leq p,\quad p\in\mathbb{N},\\ 0,\quad\text{if}\quad n>p,\\ \psi_{J+1}=0.\end{cases} \tag{30}\]
It is more convenient to rewrite the difference equations in matrix form; therefore, let us recast the scheme (27)-(28) in an equivalent vector form, using the matrices
\[\textbf{A}:=\left(\begin{array}{cccccc}1&0&0&0&\cdots&0\\ -1&1&0&\ddots&\ddots&\vdots\\ 0&-1&\ddots&\ddots&\ddots&0\\ 0&\ddots&\ddots&\ddots&0&0\\ \vdots&\ddots&\ddots&-1&1&0\\ 0&\cdots&0&0&-1&1\\ 0&\cdots&0&0&0&-1\end{array}\right)_{J+1\times J},\quad\textbf{B}:=\left( \begin{array}{cccccc}b_{0}&c_{1}&0&0&\cdots&0&0\\ 0&b_{1}&c_{2}&\ddots&\ddots&\vdots&0\\ 0&0&\ddots&\ddots&\ddots&0&0\\ 0&\ddots&\ddots&\ddots&c_{1}&0&0\\ \vdots&\ddots&\ddots&0&b_{J-2}&c_{J-1}&0\\ 0&\cdots&0&0&0&b_{J-1}&c_{J}\end{array}\right)_{J\times J+1}\]
and
\[\textbf{C}:=\left(\begin{array}{cccccc}-1&1&0&0&\cdots&0&0\\ 0&-1&1&\ddots&\ddots&\vdots&0\\ 0&0&\ddots&\ddots&\ddots&0&0\\ 0&\ddots&\ddots&\ddots&1&0&0\\ \vdots&\ddots&\ddots&0&-1&1&0\\ 0&\cdots&0&0&0&-1&1\end{array}\right)_{J\times J+1},\,\textbf{D}:=\left( \begin{array}{cccccc}T_{0,n-1}&0&0&0&\cdots&0\\ 0&T_{1,n-1}&0&\ddots&\ddots&\vdots\\ 0&0&\ddots&\ddots&\ddots&0\\ 0&\ddots&\ddots&\ddots&0&0\\ \vdots&\ddots&\ddots&0&T_{J-1,n-1}&0\\ 0&\cdots&0&0&0&T_{J,n-1}\end{array}\right)_{J+1\times J+1},\]
where \(b_{j-1}=-2r(\lambda+aT_{j-1,n-1})\) and \(c_{j}=2r(\lambda+aT_{j,n-1})\), and we set \(\Phi=(\phi_{0},\phi_{1},...,\phi_{J})^{\top}\), \(\Psi=(\psi_{1},\psi_{2},...,\psi_{J})^{\top}\), \(\mathbb{T}^{n-1}=(T_{0,n-1},T_{1,n-1},...,T_{J,n-1})^{\top}\) and \(\mathbb{Q}^{n-1}=(q_{1,n-1},q_{2,n-1},...,q_{J,n-1})^{\top}\).
_Boundary type I._
The scheme (27)-(28) with boundary type I. takes the following vector form:
\[\begin{cases}\Phi+\dfrac{r}{\rho c}\textbf{A}\Psi=-\dfrac{r}{\rho c}\textbf{A }\mathbb{Q}^{n-1},\\ \Psi+\dfrac{1}{2\big{(}\tau+\Delta t\big{)}}\textbf{B}\Phi=-\dfrac{r}{2\big{(} \tau+\Delta t\big{)}}\textbf{C}\Big{(}a\textbf{D}+2\lambda\textbf{I}_{J+1} \Big{)}\mathbb{T}^{n-1}-\dfrac{\Delta t}{\big{(}\tau+\Delta t\big{)}}\mathbb{ Q}^{n-1},\end{cases} \tag{31}\]
where \(\mathbf{I}_{J+1}\) is an identity matrix of order \(J+1\). Combining the above equations, we obtain
\[\begin{cases}\Phi=\dfrac{r}{\rho c}\mathbf{A}\bigg{(}\dfrac{r}{2( \tau+\Delta t)}\mathbf{G}-\dfrac{1}{\tau+\Delta t}\mathbf{F}-\mathbb{Q}^{n-1} \bigg{)}\\ \Psi=\dfrac{1}{\tau+\Delta t}\mathbf{F}-\dfrac{r}{2(\tau+\Delta t)}\mathbf{G}, \end{cases} \tag{32}\]
in which
\[\mathbf{E}:=\mathbf{I}_{J}-\dfrac{r}{2\rho c(\tau+\Delta t)}\mathbf{BA},\quad \mathbf{F}:=\mathbf{E}^{-1}\bigg{(}\dfrac{r}{2\rho c}\mathbf{BA}-\Delta t\, \mathbf{I}_{J}\bigg{)}\mathbb{Q}^{n-1},\quad\mathbf{G}:=\mathbf{E}^{-1} \mathbf{C}\bigg{(}a\mathbf{D}+2\lambda\mathbf{I}_{J+1}\bigg{)}\mathbb{T}^{n-1} \tag{33}\]
and \(\mathbf{I}_{J}\) is an identity matrix of order \(J\). Finally, the solution of the numerical scheme (18)-(22) is given by
\[\begin{cases}\mathbb{T}^{n}=\mathbb{T}^{n-1}+\Phi,\quad n=1,2,...,N,\\ \mathbb{Q}^{n}=\mathbb{Q}^{n-1}+\Psi,\quad n=1,2,...,N,\\ \mathbb{T}^{0}=(T^{0}_{0},T^{0}_{1},...,T^{0}_{J})^{\top},\quad\mathbb{Q}^{0} =(q^{0}_{1},q^{0}_{2},...,q^{0}_{J})^{\top}.\end{cases} \tag{34}\]
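For concreteness, a minimal NumPy sketch of the resulting time stepping for boundary type I is given below. It is not the authors' code: the domain length, grid, number of steps and the initial state are illustrative placeholders (the compatible initial fields used in the paper appear later in Eq. (44)), while the material parameters follow Section 4.

```python
# A minimal sketch (not the authors' code) of the linearised implicit scheme with
# adiabatic (type I) boundaries, following the vector form (31)-(34).
import numpy as np

rho, c, tau, lam, a = 2.5e3, 700.0, 0.27, 5.5, 2.0     # Section 4 parameter set
ell, J, dt, n_steps = 0.1, 99, 1e-3, 2000              # domain/grid are illustrative
dx = ell / (J + 1)
r = dt / dx

# constant connectivity matrices A ((J+1) x J) and C (J x (J+1))
A = np.zeros((J + 1, J)); np.fill_diagonal(A, 1.0); np.fill_diagonal(A[1:], -1.0)
C = np.zeros((J, J + 1)); np.fill_diagonal(C, -1.0); np.fill_diagonal(C[:, 1:], 1.0)

def step(T, Q):
    """Advance (T^{n-1}, Q^{n-1}) -> (T^n, Q^n) according to Eqs. (31)-(34)."""
    B = np.zeros((J, J + 1))                                 # temperature-dependent B
    np.fill_diagonal(B, -2.0 * r * (lam + a * T[:-1]))       # b_{j-1}
    np.fill_diagonal(B[:, 1:], 2.0 * r * (lam + a * T[1:]))  # c_j
    D = np.diag(T)

    E = np.eye(J) - r / (2.0 * rho * c * (tau + dt)) * (B @ A)
    F = np.linalg.solve(E, (r / (2.0 * rho * c)) * (B @ A) @ Q - dt * Q)
    G = np.linalg.solve(E, C @ ((a * D + 2.0 * lam * np.eye(J + 1)) @ T))

    Psi = F / (tau + dt) - r / (2.0 * (tau + dt)) * G
    Phi = (r / (rho * c)) * A @ (r / (2.0 * (tau + dt)) * G - F / (tau + dt) - Q)
    return T + Phi, Q + Psi

# state vectors: T holds nodes j = 0..J, Q holds the interior fluxes q_1..q_J
x = np.linspace(0.0, ell, J + 2)[:-1]
T = 15.0 + 5.0 * np.exp(-((x - ell / 2) / (0.1 * ell)) ** 2)   # illustrative bump
Q = np.zeros(J)
for _ in range(n_steps):
    T, Q = step(T, Q)
```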
_Boundary type II._
The scheme (27)-(28) with boundary type II. takes the following vector form:
\[\begin{cases}\Phi+\dfrac{r}{\rho c}\mathbf{A}\Psi=\begin{cases}- \frac{r}{\rho c}\mathbf{A}\mathbb{Q}^{n-1}+\frac{r}{\rho c}\Big{(}1-\cos\big{(} 2\pi t_{n}/t_{p}\big{)}\Big{)}\mathbf{L},\quad\text{if}\quad 0<n\leq p,\quad p \in\mathbb{N}\\ -\frac{r}{\rho c}\mathbf{A}\mathbb{Q}^{n-1},\quad\text{if}\quad n>p,\\ \Psi+\dfrac{1}{2\big{(}\tau+\Delta t\big{)}}\mathbf{B}\Phi=-\dfrac{r}{2\big{(} \tau+\Delta t\big{)}}\mathbf{C}\Big{(}a\mathbf{D}+2\lambda\mathbf{I}_{J+1} \Big{)}\mathbb{T}^{n-1}-\dfrac{\Delta t}{\big{(}\tau+\Delta t\big{)}}\mathbb{Q }^{n-1},\end{cases} \tag{35}\]
where \(\mathbf{L}=(1,0,\cdot\cdot\cdot,0)_{1\times J+1}^{\top}\) and \(\mathbf{I}_{J+1}\) is an identity matrix of order \(J+1\). Combining the above equations we obtain
\[\begin{cases}\Phi=\begin{cases}\frac{r}{\rho c}\mathbf{A}\bigg{(} \frac{r}{2(\tau+\Delta t)}\mathbf{G}-\frac{1}{\tau+\Delta t}\mathbf{F}-\mathbb{ Q}^{n-1}\bigg{)}+\frac{r}{\rho c}\Big{(}1-\cos\big{(}2\pi t_{n}/t_{p}\big{)} \Big{)}\mathbf{L}\\ \quad\quad+\dfrac{r^{2}}{2\rho^{2}c^{2}(\tau+\Delta t)}\Big{(}1-\cos\big{(}2 \pi t_{n}/t_{p}\big{)}\Big{)}\mathbf{A}\mathbf{E}^{-1}\mathbf{BL},\quad\text{ if}\quad 0<n\leq p,\quad p\in\mathbb{N}\\ \\ \frac{r}{\rho c}\mathbf{A}\bigg{(}\frac{r}{2(\tau+\Delta t)}\mathbf{G}-\frac{ 1}{\tau+\Delta t}\mathbf{F}-\mathbb{Q}^{n-1}\bigg{)},\quad\text{if}\quad n>p, \end{cases} \tag{36}\] \[\begin{cases}\Psi=\begin{cases}\frac{1}{\tau+\Delta t}\mathbf{F}- \frac{r}{2(\tau+\Delta t)}\mathbf{G}-\frac{r}{2\rho c(\tau+\Delta t)}\Big{(}1- \cos\big{(}2\pi t_{n}/t_{p}\big{)}\Big{)}\mathbf{E}^{-1}\mathbf{BL},\quad \text{if}\quad 0<n\leq p,\quad p\in\mathbb{N}\\ \\ \frac{1}{\tau+\Delta t}\mathbf{F}-\frac{r}{2(\tau+\Delta t)}\mathbf{G},\quad \text{if}\quad n>p,\end{cases}\]
with
\[\mathbf{E}:=\mathbf{I}_{J}-\dfrac{r}{2\rho c(\tau+\Delta t)}\mathbf{BA},\quad \mathbf{F}:=\mathbf{E}^{-1}\bigg{(}\frac{r}{2\rho c}\mathbf{BA}-\Delta t\, \mathbf{I}_{J}\bigg{)}\mathbb{Q}^{n-1},\quad\mathbf{G}:=\mathbf{E}^{-1}\mathbf{ C}\bigg{(}a\mathbf{D}+2\lambda\mathbf{I}_{J+1}\bigg{)}\mathbb{T}^{n-1}, \tag{37}\]
and \(\mathbf{I}_{J}\) is an identity matrix of order \(J\). Finally, the solution of the numerical scheme (18)-(22) is given by
\[\begin{cases}\mathbb{T}^{n}=\mathbb{T}^{n-1}+\Phi,\quad n=1,2,...,N,\\ \mathbb{Q}^{n}=\mathbb{Q}^{n-1}+\Psi,\quad n=1,2,...,N,\\ \mathbb{T}^{0}=(T_{0}^{0},T_{1}^{0},...,T_{J}^{0})^{\top},\quad\mathbb{Q}^{0}= (q_{1}^{0},q_{2}^{0},...,q_{J}^{0})^{\top}.\end{cases} \tag{38}\]
## 4 Stability, dissipation and dispersion
Let us investigate the scheme (25)-(26) using the conventional von Neumann method [22]. Although it is developed for linear equations, it is still of good use for such a nonlinear situation, since the nonlinearity involves only the known values of the temperature field, not the new, hence unknown, ones. Following this procedure, we assume that
\[T_{j,n}=T_{0}\xi^{n}e^{ikj\Delta x},\quad q_{j,n}=q_{0}\xi^{n}e^{ikj\Delta x}, \tag{39}\]
where \(T_{0}\) and \(q_{0}\) are the initial amplitudes of the corresponding field quantities, and \(i\), \(k\) and \(\xi\) are the imaginary unit, wave number, and wave amplitude, respectively. It is clear that to achieve a stable numerical solution, one needs \(|\xi|\leq 1\); otherwise, the amplitude grows without bound. Substituting Eq. (39) into (25) and (26) is not a linearization; it results in a nonlinear algebraic equation for \(\xi\). The substitution yields
\[T_{0}\frac{\rho c}{\Delta t}\big{(}1-\xi^{-1}\big{)}+q_{0}\frac{1}{\Delta x} \big{(}e^{ik\Delta x}-1\big{)}=0, \tag{40}\]
\[q_{0}\left(\frac{\tau}{\Delta t}\big{(}1-\xi^{-1}\big{)}+1\right)+T_{0}\frac{ a}{\Delta x}\left(\frac{1}{2}\xi^{n-1}e^{ikj\Delta x}\big{(}1-e^{-2ik\Delta x} \big{)}+\big{(}\xi^{n-1}-\xi^{n-2}\big{)}\big{(}1-e^{-ik\Delta x}\big{)}+ \frac{\lambda}{a}\big{(}1-e^{-ik\Delta x}\big{)}\right)=0. \tag{41}\]
Eqs. (40) and (41) can be rewritten in a matrix form as well such as \(\mathbf{M}\cdot\mathbf{f}=\mathbf{0}\) with \(\mathbf{f}=(T_{0},q_{0})\), and thus \(\det(\mathbf{M})=0\) provides a characteristic polynomial for \(\xi\), \(p(\xi)\),
\[p(\xi)= \frac{e^{-i\Delta xk}}{2\Delta t^{2}\Delta x^{2}\xi^{2}}\Big{(}- \Delta t^{2}e^{-i\Delta xk}\left(e^{i\Delta xk}-1\right)^{2}\left(ae^{i\Delta x jk }\xi^{n}+ae^{i\Delta x(1+j)k}\xi^{n}+2e^{i\Delta xk}\left(\lambda\xi^{2}+a( \xi-1)\xi^{n}\right)\right)\] \[+2\rho c\Delta t\Delta x^{2}e^{i\Delta xk}(\xi-1)\xi+2\rho c\tau \Delta x^{2}e^{i\Delta xk}(\xi-1)^{2}\Big{)}. \tag{42}\]
It is worth noting that there are terms with \(\xi^{n}\), which suggests that the stability properties may depend on the actual time step. However, the stability condition means \(|\xi|\leq 1\), so that the scheme is meaningful only when it leads to stable solutions. When it does, then \(\xi^{n}\to 0\), so that the remaining part of \(p(\xi)\) determines the stability properties, and therefore must provide \(|\xi|\leq 1\) automatically. In order to prove it, we use the Jury criteria, i.e.,
1. \(p(\xi=1)\geq 0\);
2. \(p(\xi=-1)\geq 0\);
3. \(|a_{0}|\leq a_{m}\);
for a polynomial in the form of \(p(\xi)=a_{m}\xi^{m}+\cdots+a_{0}\). Indeed, with the terms \(\xi^{n}\to 0\), Eq. (42) simplifies to a polynomial with coefficients of
\[a_{0}=\frac{\rho c\tau}{\Delta t^{2}},\quad a_{1}=-\rho c\frac{ \Delta t+2\tau}{\Delta t^{2}},\quad a_{2}=\frac{-2\lambda\big{(}\cos{(k\Delta x )}-1\big{)}\Delta t^{2}+\Delta t\Delta x^{2}\rho c+\Delta x^{2}\rho c\tau}{ \Delta t^{2}\Delta x^{2}}. \tag{43}\]
Since \(-1\leq\cos(k\Delta x)\leq 1\), both limiting situations must be checked. Straightforward calculations show that all enumerated conditions are automatically satisfied, so that the assumption \(|\xi|\leq 1\) is valid, and \(\xi^{n}\to 0\) indeed.
The numerical dissipation is also characterized by \(\xi\). If \(|\xi|=1\), the scheme is called conservative, i.e., free from dissipation errors; otherwise the scheme is called dissipative. The dispersion error is strongly related to the imaginary part of \(\xi\). For a more detailed numerical characterization of such artificial errors, we refer to [23]. For the present nonlinear scheme, we can numerically investigate Eq. (42) and study its absolute value and imaginary part. For this reason, let us assign the following values to the coefficients: \(\rho=2.5\cdot 10^{3}\) kg/m\({}^{3}\), \(c=700\) J/(kg K), \(\tau=0.27\) s, \(\lambda=5.5\) W/(m K), \(a=2\) W/(m K\({}^{2}\)). For the relaxation time, we took a realistic value for a rock material from [8]; furthermore, let \(\Delta x=0.01\) m and \(\Delta t=0.001\) s. Fig. 1 shows the behavior of the wave amplitude, which remains close to 1, viz., conservative. Moreover, its imaginary part is practically zero over the entire region (such an order of magnitude can also emerge from numerical errors of the root finding procedure), so that we do not expect a dispersion error either.
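This numerical investigation is easy to reproduce. The sketch below (not from the paper) evaluates, for the parameter set above, the roots of the reduced quadratic \(a_{2}\xi^{2}+a_{1}\xi+a_{0}=0\) with the coefficients (43) over the same range of wave numbers, confirming \(|\xi|\leq 1\) with a practically vanishing imaginary part.

```python
# A minimal sketch (not from the paper) of the von Neumann stability check based on
# the reduced characteristic polynomial with coefficients (43).
import numpy as np

rho, c, tau, lam = 2.5e3, 700.0, 0.27, 5.5
dx, dt = 0.01, 0.001

max_abs, max_imag = 0.0, 0.0
for k in np.linspace(0.0, 500.0 * np.pi, 2001):
    a0 = rho * c * tau / dt**2
    a1 = -rho * c * (dt + 2.0 * tau) / dt**2
    a2 = (-2.0 * lam * (np.cos(k * dx) - 1.0) * dt**2
          + dt * dx**2 * rho * c + dx**2 * rho * c * tau) / (dt**2 * dx**2)
    roots = np.roots([a2, a1, a0])
    max_abs = max(max_abs, np.max(np.abs(roots)))
    max_imag = max(max_imag, np.max(np.abs(roots.imag)))

print(f"max |xi|    = {max_abs:.6f}")   # stays <= 1: no amplification
print(f"max |Im xi| = {max_imag:.3e}")  # ~0: no dispersion error expected
```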
## 5 Numerical simulation
### Numerical simulation: Boundary type I.
In this section, we use the same parameters and solve the difference equations for both types of boundary conditions as a brief demonstration, for both the linear (\(a=0\)) and nonlinear (\(a>0\)) situations. We use the following initial conditions associated with the discrete nonlinear system (34)
\[T_{j}^{0}=T_{b}+\frac{T_{f}}{2}\cos{\Big{(}\frac{\pi x_{j}}{ \ell}\Big{)}},\quad q_{j}^{0}=\frac{a\pi T_{f}}{2\ell}\bigg{[}T_{b}+\frac{T_{ f}}{2}\cos{\Big{(}\frac{\pi x_{j}}{\ell}\Big{)}}\bigg{]}\sin{\Big{(}\frac{\pi x _{j}}{\ell}\Big{)}}+\frac{\lambda\pi T_{f}}{2\ell}\sin{\Big{(}\frac{\pi x_{j} }{\ell}\Big{)}},\quad j=0,1,...,J, \tag{44}\]
Figure 1: The behavior of both roots as a function of the wave number. Left: the absolute value of the wave amplitude \(\xi\). Right: The imaginary part of \(\xi\). The wave number reaches \(500\pi\).
where \(T_{b}=15\)\({}^{\circ}\)C and \(T_{f}=30\)\({}^{\circ}\)C, so that the initial temperature distribution is realistic for a practical situation. We also want to highlight here that the initial heat flux field is determined in agreement with the nonlinear constitutive relation in order to avoid incompatibility and nonphysical behavior. Figures 2 and 3 demonstrate the characteristic differences between the nonlinear and linear cases. It is apparent that, due to the temperature-dependent thermal conductivity, the initial heat flux field and its time evolution cannot be symmetric.
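The compatibility mentioned above can be verified in a few lines: the initial heat flux of Eq. (44) coincides with the steady form of the constitutive relation (9), \(q=-(\lambda+aT)T_{x}\), evaluated on the initial temperature profile. The snippet below (not from the paper) performs this check; the domain length and grid size are illustrative.

```python
# A minimal check (not from the paper) that the initial fields of Eq. (44) satisfy
# the steady constitutive relation q = -(lambda + a T) T_x.
import numpy as np

lam, a, ell, J = 5.5, 2.0, 0.1, 199          # ell and J are illustrative
Tb, Tf = 15.0, 30.0
x = np.linspace(0.0, ell, J + 2)[:-1]        # nodes x_0 ... x_J

T0 = Tb + 0.5 * Tf * np.cos(np.pi * x / ell)
q0 = (a * np.pi * Tf / (2 * ell)) * T0 * np.sin(np.pi * x / ell) \
     + (lam * np.pi * Tf / (2 * ell)) * np.sin(np.pi * x / ell)

Tx = -(np.pi * Tf / (2 * ell)) * np.sin(np.pi * x / ell)   # exact dT0/dx
assert np.allclose(q0, -(lam + a * T0) * Tx)               # compatible initial state
```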
Figure 3: Simulation results for boundary type I. for the linear case with \(a=0\) W/(m K\({}^{2}\)). Figures (a, b) demonstrate the time evolution of the temperature field. Figure (c,d) show the corresponding heat flux field.
Figure 2: Simulation results for boundary type I. for the nonlinear case with \(a=2\) W/(m K\({}^{2}\)). Figures (a, b) demonstrate the time evolution of the temperature field. Figure (c,d) show the corresponding heat flux field.
### Numerical simulation: Boundary type II
We use the following initial conditions associated with the discrete system (34)
\[T_{j}^{0}=0,\quad q_{j}^{0}=0,\quad j=0,1,...,J, \tag{45}\]
with the same parameters as previously. Below are the simulations for the nonlinear and linear cases. Figures 4 and 5 present the corresponding time evolution of the temperature and heat flux fields for the time-dependent boundary condition, for both the linear and nonlinear cases.
Since the rear side temperature history is of greater practical importance in this setting, we also compare the linear and nonlinear solutions for boundary type II; see Figure 6 for details. It reveals that such a nonlinear behavior can significantly distort the wave signal. Furthermore, as the thermal conductivity increases progressively with temperature, the propagation speed of the wave front becomes higher than in the linear case; this expectation is also apparent in Fig. 6. In addition, the wave amplitude decreases since the thermal diffusivity becomes larger, which notably dampens the wave front. The steady state, however, must not change with identical heat capacities, as the boundaries are adiabatic and the heat pulse provides the same energy.
Figure 5: Simulation results for boundary type II. for the linear case with \(a=0\) W/(m K\({}^{2}\)). Figures (a, b) demonstrate the time evolution of the temperature field. Figure (c,d) show the corresponding heat flux field.
## 6 Summary
In the present paper, we have investigated the transient behavior of the nonlinear Cattaneo equation, including a temperature-dependent thermal conductivity. Additionally, we have proposed an implicit numerical scheme which is unconditionally stable and also free from numerical dispersion. Such a scheme has enabled us to use a relatively steep temperature dependence of the thermal conductivity without introducing significant artificial distortion into the numerical solution.
For demonstration, we solved two different settings. In the first one, we simulated the evolution of an inhomogeneous initial temperature distribution. The correct setting has required the determination of a compatible initial heat flux field. It is clear how the nonlinearity distorts the symmetry. However, such an initial state cannot reflect the influence of nonlinearities on wave propagation. For this reason, we also included a more practical setting in the second simulation, using a heat pulse boundary condition. Since such an experiment is used to determine the material properties based on the temperature history, we also demonstrated the effects of such nonlinearity on the rear side temperature evolution. The simulation shows that the wave front can be considerably damped by the increasing thermal diffusivity; however, the front becomes faster as well. Therefore, the simulations are physically sound, and the present numerical scheme provides a basis for future research.
## Funding
Project no. TKP-6-6/PALY-2021 has been implemented with the support provided by the Ministry of Culture and Innovation of Hungary from the National Research, Development and Innovation Fund, financed under the TKP2021-NVA funding scheme. The research was funded by the Janos Bolyai Research Scholarship of the Hungarian Academy of Sciences, and by the National Research, Development and Innovation Office-NKFIH FK 134277.
Figure 6: Demonstrating the differences between the linear and nonlinear cases for \(a=0\) W/(m K\({}^{2}\)) and \(a=2\) W/(m K\({}^{2}\)).
## Declarations
**Conflict of interest** The authors declare no competing interests.
|
2305.18565
|
PaLI-X: On Scaling up a Multilingual Vision and Language Model
|
We present the training recipe and results of scaling up PaLI-X, a
multilingual vision and language model, both in terms of size of the components
and the breadth of its training task mixture. Our model achieves new levels of
performance on a wide-range of varied and complex tasks, including multiple
image-based captioning and question-answering tasks, image-based document
understanding and few-shot (in-context) learning, as well as object detection,
video question answering, and video captioning. PaLI-X advances the
state-of-the-art on most vision-and-language benchmarks considered (25+ of
them). Finally, we observe emerging capabilities, such as complex counting and
multilingual object detection, tasks that are not explicitly in the training
mix.
|
Xi Chen, Josip Djolonga, Piotr Padlewski, Basil Mustafa, Soravit Changpinyo, Jialin Wu, Carlos Riquelme Ruiz, Sebastian Goodman, Xiao Wang, Yi Tay, Siamak Shakeri, Mostafa Dehghani, Daniel Salz, Mario Lucic, Michael Tschannen, Arsha Nagrani, Hexiang Hu, Mandar Joshi, Bo Pang, Ceslee Montgomery, Paulina Pietrzyk, Marvin Ritter, AJ Piergiovanni, Matthias Minderer, Filip Pavetic, Austin Waters, Gang Li, Ibrahim Alabdulmohsin, Lucas Beyer, Julien Amelot, Kenton Lee, Andreas Peter Steiner, Yang Li, Daniel Keysers, Anurag Arnab, Yuanzhong Xu, Keran Rong, Alexander Kolesnikov, Mojtaba Seyedhosseini, Anelia Angelova, Xiaohua Zhai, Neil Houlsby, Radu Soricut
|
2023-05-29T18:58:38Z
|
http://arxiv.org/abs/2305.18565v1
|
# PaLI-X: On Scaling up a Multilingual Vision and Language Model
###### Abstract
We present the training recipe and results of scaling up PaLI-X, a multilingual vision and language model, both in terms of size of the components and the breadth of its training task mixture. Our model achieves new levels of performance on a wide-range of varied and complex tasks, including multiple image-based captioning and question-answering tasks, image-based document understanding and few-shot (in-context) learning, as well as object detection, video question answering, and video captioning. PaLI-X advances the state-of-the-art on most vision-and-language benchmarks considered (25+ of them). Finally, we observe emerging capabilities, such as complex counting and multilingual object detection, tasks that are not explicitly in the training mix.
## 1 Introduction
The success of scaling language models [1; 2; 3; 4] makes it appealing to similarly scale Vision-Language (V&L) models, and investigate the improvements, capabilities, and emergent properties of such models. Inspired by the work in [5], we present PaLI-X, a multilingual vision and language model with reusable scaled-up components, consisting of a pretrained large-capacity visual encoder (using [6] as the starting point) and a pretrained language-only encoder-decoder (using [7] as the starting point), further trained at-scale on a vision-and-language data mixture using a combination of self-supervision and full-supervision signals.
One clear pattern that emerges from the combination of results from PaLI [5] and the work we present in this paper is that scaling _both_ V&L components together brings increases in performance across a wide range of tasks. We show this by comparing against the same benchmarks used for PaLI (Fig. 1, Left), and also against new benchmarks for which the new capabilities of PaLI-X are evaluated (e.g., ChartQA, AI2D, DocVQA, InfographicVQA, as well as video understanding tasks). We observe that scaling leads to large improvements over the results of the PaLI model, and also over specialized large-scale models that are trained specifically to solve certain tasks, often with the help of (often much larger) text-only LLMs [8]. In particular, we find that increasing both the effective capacity of the vision component (which [9] does more unilaterally) and of the language component
(which [10] also does unilaterally) is beneficial; the new PaLI-X model provides more balanced parameter allocation than any other prior work (roughly 40%-60% split of the total capacity).
Aside from confirming the impact of scale, the original contribution of PaLI-X consists in leveraging the mixture-of-objectives proposed in [7] for vision-and-language modeling, and showing that it results in a model that improves both state-of-the-art results and the Pareto frontier for fine-tuning and few-shot configurations (Fig. 1, Right).
We also observe emergent properties based on PaLI-X's results compared to previous models with similar architecture but smaller sizes. For instance, we report drastically improved performance on the counting ability (See Table 1 and Appendix B), both for the plain variety (count all instances of a class) and the complex variety (count instances based on a natural language description), that are not attributable to training design\({}^{1}\). Additionally, we present qualitative insights into the model's performance (Appendix A), with an emphasis on multilingual transfer learning such as the ability to detect objects using non-English labels (Fig. 2), and the ability to switch between the language of text present in the image (e.g., English) and the language of the generated image caption (e.g., Romanian).
Footnote 1: Plain counting is usually achievable via good object detection, while complex counting requires a fine-grained understanding of the alignment between language-based specifications and visually-based occurrences.
Our technical contributions include the following:
1. We scale a Vision-Language model to achieve outstanding performance on a wide variety of benchmarks. We observe that scaling _both_ the Vision & Language components is advantageous and report that performance remains unsaturated at this scale.
2. We show that training such a model with a mixture of objectives that combines prefix-completion and masked-token completion improves the Pareto frontier for fine-tuning vs few-shot performance at this scale.
3. We show that a high-capacity vision encoder (ViT-22B) can be effectively co-trained for image classification and OCR label classification\({}^{2}\) to achieve significant improvements on V&L tasks for which the understanding of text-within-image is crucial. Footnote 2: We use OCR tokens produced by the GCP Vision API over the training images as targets.
4. Overall, PaLI-X improves SoTA results via fine-tuning on 15+ benchmarks, and we show that it is the first of its kind to simultaneously adapt via multitask fine-tuning to a diverse set of benchmarks without significant performance degradation.
## 2 Related Work
Similar to large language models such as GPT4 [12] and PaLM [1], the benefit of scale has also been observed in recent vision and vision-language models. Flamingo [10] used a frozen language
Figure 1: [Left] Comparing PaLI-X against PaLI on image-captioning and VQA benchmarks. [Right] The Pareto frontier between few-shot and fine-tuned performance, comparing PaLI-X with PaLI [5], Flamingo [10], and Kosmos-1 [11].
component and demonstrated the benefit of scaling this part up to 70B parameters for few-shot multimodal capabilities, while the vision encoder is fixed at 435M parameters. GIT [9], on the other hand, explored scaling of the vision component up to 4.8B parameters, with a 300M-parameter language decoder. PaLI [5] explored jointly scaling the vision and language components, to 4B and 17B, respectively, and showed that scaling both components benefits a wide range of vision-language tasks. All these models took advantage of unimodal pretrained vision and language models as backbones to start multimodal training. Recently, on the vision model side, a vision transformer with 22B parameters has been introduced [6]. In this work, we make use of a ViT-22B model specifically tuned for OCR capability to explore scaling Vision-Language models to an even larger parameter regime.
As first shown in [13], _large_ language models are sometimes able to solve new unseen tasks at inference as long as a few examples -or _shots_- are provided as inputs. This is usually referred to as in-context learning [14]. Follow-up work proposed improved ways to split and prompt the shots, such as Chain of Thought [15] or Least-to-Most prompting [16]. So far, the vast majority of this work has been done in the context of language inputs [17]. In this work, we explore multimodal in-context learning with pairs of images and captions. Our work is aligned in spirit to Flamingo [10] that uses interleaved image text pairs in the same web page and in-context tuning [18] during pre-training. We first group the image-text pairs by url and split each group to a "shots" set and a "target" set. Then we use the few examples in the "shots" set as input features to predict the examples in the target set.
Besides solving vision-language tasks in multiple domains, recent VLMs also attempted solving these tasks at once instead of fine-tuning on each individual benchmark. Unified-IO [19] performed multitask fine-tuning and reported solid results across 16 benchmarks. Spotlight [20] reported that inside the UI domain, multitask fine-tuning can achieve a performance close to task-specific fine-tuning. In this work, we show that PaLI-X can be simultaneously fine-tuned with a diverse set of benchmarks in multiple domains without performance degradation.
## 3 Model
### Architecture
The PaLI-X model architecture follows the encoder-decoder architecture: image(s) are processed by a ViT encoder, with the resulting visual embeddings fed to an encoder-decoder backbone, along with embeddings from additional text input (e.g., question / prefix / prompt). More details are provided in Appendix A.
**Visual component.** Our visual backbone is scaled to 22B parameters, as introduced by [6], the largest dense ViT model to date. To equip the model for a variety of complex vision-language tasks, we specifically focus on its OCR capabilities. To that end, we incorporate an OCR-based pretraining as follows: images from the WebLI dataset [5] are annotated with OCR-text detected by the GCP Vision API; the encoder is then further pre-trained with a mixture of the original JFT-based classification task and a new OCR-based classification task (whether or not a given token occurred in the image according to OCR results). See Appendix A for additional details on the visual component. PaLI-X is designed to take \(n\geq 1\) images as inputs (for few-shot and video understanding), with tasks involving a single image as the \(n=1\) case. For \(n>1\), each image is independently processed by the ViT module, and the patch-level embeddings coming out of ViT are flattened and concatenated to form the visual input (see Appendix A). Note that, similar to the single-image case, there is no pooling over the spatial dimension before visual embeddings are aggregated over the temporal dimension. That is, for an \(n\)-frame input with \(k\) patches per frame, the resulting visual input has \(n*k\) tokens.
**Overall model.** The encoder-decoder backbone is initialized from a variant of the UL2 [7] encoder-decoder model that uses 32B parameters. The architecture of this variant has 50 layers in both encoder and decoder (up from 32 layers in [7]), and is pretrained on a mixture of text data similar to [7]. The visual embeddings, after going through a projection layer, are concatenated with the token embeddings of the text input, and fed to the encoder-decoder backbone. Most of the pretraining tasks (with the exception of the masked image token task) predict text-only output from this multimodal input. The text input to the model typically consists of a prompt that marks what type of task it is (e.g., "_Generate caption in \(\langle\operatorname{lang}\rangle\)_" for captioning tasks) and encodes the necessary textual input for the task (e.g., "_Answer in \(\langle\operatorname{lang}\rangle\): [question]_" for VQA tasks). For tasks that need OCR capabilities, we experiment with either relying solely on the text-encoding capabilities of the vision encoder, or optionally including tokens extracted by an upstream OCR system fed as additional text inputs.
**Few-shot formulation.** In the few-shot setting, for a given _target example_ the model receives a number of "labeled" examples (in the form of additional \(\langle\)image, text\(\rangle\) pairs) that we refer to as _exemplars_ or _shots_. The hypothesis is that information contained in these exemplars provides the model with useful context to generate predictions for the target example. Formally, the input with \(N\) shots is a sequence \((t_{1},\dots,t_{N},t_{T},i_{1},\dots,i_{N},i_{T})\), where \(t_{1}:t_{N}\) and \(i_{1}:i_{N}\) are texts and images for the \(N\) shots, and \(t_{T}\) and \(i_{T}\) are the text (prompt) and image for the target example. PaLI-X processes this input as follows: all images, including the target one, are first independently processed by the visual encoder, and the resulting patch-level embeddings are flattened and concatenated to form the visual input sequence. After going through a projection layer, they are concatenated with the text embeddings to form the multimodal input sequence used by the encoder. We implement additional optimizations, including distributing the exemplars between the encoder and the decoder, and an attention re-weighting mechanism (see Appendix B).
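To illustrate how such a few-shot input could be assembled, the following schematic NumPy sketch mimics the steps described above: each image (shots and target) is encoded independently, the patch embeddings are flattened and concatenated, projected, and then concatenated with the text-token embeddings. This is not the PaLI-X implementation; the toy encoders, the projection, all shapes and the token ordering are illustrative placeholders.

```python
# A schematic sketch (not the PaLI-X implementation) of few-shot input assembly.
import numpy as np

d_vis, d_model, patches_per_image = 64, 128, 256     # toy dimensions, not the model's
rng = np.random.default_rng(0)
W_proj = rng.normal(size=(d_vis, d_model)) * 0.02    # stand-in visual projection layer

def vit_encode(image):
    """Placeholder for the ViT encoder: image -> (k, d_vis) patch embeddings."""
    return rng.normal(size=(patches_per_image, d_vis))   # the argument is ignored here

def embed_text(tokens):
    """Placeholder for the text-token embedding lookup: ids -> (len, d_model)."""
    return rng.normal(size=(len(tokens), d_model))

def build_fewshot_input(shot_images, shot_texts, target_image, target_prompt):
    images = list(shot_images) + [target_image]                  # i_1..i_N, i_T
    visual = np.concatenate([vit_encode(im) for im in images])   # ((N+1)*k, d_vis)
    visual = visual @ W_proj                                      # project to d_model
    texts = [tok for t in list(shot_texts) + [target_prompt] for tok in t]
    return np.concatenate([visual, embed_text(texts)])            # encoder input

seq = build_fewshot_input([None] * 4, [[1, 2, 3]] * 4, None, [7, 8])
print(seq.shape)    # (5 * 256 + 4 * 3 + 2, 128)
```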
### Pretraining Data and Mixture
The main pretraining data for our model is based on WebLI [5], consisting of roughly one billion images with alt-texts from the web and OCR annotations (using the GCP Vision API), covering over 100 languages. In addition to WebLI \(\langle\)image, text\(\rangle\) pairs, we introduce here _Episodic WebLI_ data, where each episode corresponds to a set of such pairs. We aim to have each episode contain loosely related images (i.e., they are clustered according to their URL field), so as to encourage attention among examples in an "episode". We find this new dataset (with 75M episodes and around 400M images in total) important for developing the few-shot capabilities of the model.
The pretraining mixture consists of the following data and objectives: (i) span corruption on text-only data (15% of tokens); (ii) split-captioning on WebLI alt-text data [21; 5]; (iii) captioning on CC3M [22] on native and translated alt-text data (over the same 35 languages covered by \(\mathrm{XM3600}\)[23]); (iv) split-ocr [24] on WebLI OCR annotations; (v) visual-question-answering objective over \(\langle\)image, question, answer\(\rangle\) pairs generated using the VQ\({}^{2}\)A method [25] over the CC3M training split, over native and translated text (same 35 language pairs); (vi) visual-question-generation objective, using the same pairs as above; (vii) visual-question-answering objective over \(\langle\)image, question, answer\(\rangle\) pairs using the Object-Aware method [26] (English only); (viii) captioning on Episodic WebLI examples (target alt-text predicted from the remaining alt-text and images); (ix) visual-question-answering on 4-pair examples (resembling Episodic WebLI and using VQ\({}^{2}\)A-CC3M pairs), with the answer target conditioned on the other pairs of \(\langle\)image, question, answer\(\rangle\) data. (x) pix2struct objective, introduced in [27], targeting page layout and structure using screenshot images paired with DOM-tree representations of html pages. (xi) Captioning on short video data, using the VTP data [10] (using four frames per video). (xii) object-detection objective on WebLI data, whereby an OWL-ViT model [28] (L/14) is used to annotate WebLI images, resulting in hundreds of pseudo object labels and bounding boxes per image. (xiii) image-token prediction objective, whereby we tokenize WebLI images (256\(\times\)256 resolution) using a ViT-VQGAN [29] model with patch size 16\(\times\)16 (256 tokens per image); this objective is framed as a 2D masked-token task (i.e., fill-in the missing grid pieces, with the corresponding image pixels also masked). Note that the image-token prediction objective is added mainly as a condition to check whether it adversarially impacts the performance on language-output tasks; our ablation experiments show that it does not.
### Training Stages
Our model is trained in two stages. In stage 1, the visual encoder (after mixed-objective training) is kept frozen, while the rest of the parameters are trained on a total of 2.2B examples at the base resolution 224\(\times\)224 (native to ViT-22B), using the entire mixture. In stage 2, it continues training using only the OCR-related objectives (pix2struct and split-ocr) plus the object detection objective; this is done in several substages, during which image resolution is gradually increased to 448\(\times\)448, 672\(\times\)672 and finally 756\(\times\)756.
Experiments
### Image Captioning and Visual Question Answering
Our results demonstrate that the larger capacity in PaLI-X scales well in both its vision and language components, and it is particularly beneficial for more challenging scene-text and document understanding tasks. Our model outperforms the SOTA on diverse vision-language tasks, with significant margins in some cases.
**Benchmark datasets.** The Image Captioning and VQA benchmarks used for evaluation are summarized in Appendix B, including 6 Image Captioning benchmarks (COCO (Karpathy split [30]), NoCaps [31], TextCaps [32], VizWiz-Cap [33], Screen2Words [34], Widget-Cap [35]) and 13 VQA benchmarks (VQAv2 [36], OKVQA [37], TallyQA [38], TextVQA [39], VizWiz-VQA [40], STVQA [41], OCRVQA [42], InfographicVQA [43], DocVQA [44], AI2D [45], ChartQA [46], OVEN [47], InfoSeek [48]). These tasks span a wide range of visual domains, from natural images and illustrations to documents and user interfaces (UIs). We also include results of multilingual captioning on \(\mathrm{XM3600}\) in Appendix B.
#### 4.1.1 Per-task fine-tuning results
**Experimental setup.** We fine-tune PaLI-X with a frozen ViT-22B; the learning rate follows a linear decay from an initial value of 1e-4 for all fine-tuning experiments. See Appendix B for more details.
First, we present benchmarks results for the condition where external OCR systems are not used (Table 1, see Appendix B for an extended table.). The trend is that PaLI-X matches or improves SoTA results on these benchmarks, with a particularly significant improvement on the TallyQA benchmark over MoVie [51] (specialized counting model), at +11.1 for simple counting questions (e.g., "how many giraffes") and +18.8 for complex counting questions (e.g., "how many giraffes are drinking water"); there are significant improvements over PaLI [5] as well, indicating that scale plays an important role in the ability of such models to perform counting tasks. We additionally note the state-of-the-art result on VQAv2 at 86.1 accuracy, achieved with an open-vocabulary generative
\begin{table}
\begin{tabular}{l c c c c c c c c c} \hline \hline & COCO & NoCaps & \multicolumn{2}{c}{VQAv2} & \multicolumn{2}{c}{OKVQA} & \multicolumn{2}{c}{TallyQA} \\ \cline{2-10} Model & Karp.-test & val & test & test-dev & test-std & val & simple & complex \\ \hline GIT2 [9] (5.1B) & 145.0 & 126.9 & **124.8** & 81.74 & 81.92 & - & - & - \\ Flamingo [10] (80B) & 138.1 & - & - & 82.0 & 82.1 & 57.8\({}^{*}\) & - & - \\ BEiT-3 [49] (1.9B) & 147.6 & - & - & 84.2 & 84.0 & - & - & - \\ PaLM-E [50] (562B) & 138.7 & - & - & 80.0 & - & **66.1** & - & - \\ MoVie [51] & - & - & - & 69.26 & - & - & 74.9 & 56.8 \\ PaLI [5](17B) & 149.1 & **127.0** & 124.4 & 84.3 & 84.3 & 64.5 & 81.7 & 70.9 \\ \hline PaLI-X (55B) & **149.2** & 126.3 & 124.3 & **86.0** & **86.1** & **66.1** & **86.0** & **75.6** \\ \hline \hline \end{tabular}
\end{table}
Table 1: Results on COCO Captions (Karpathy split), NoCaps, VQAv2 [36], OKVQA [37], and TallyQA [38] with end-to-end modeling without OCR pipeline input (“simple” and “complex” are test subsplits).
\begin{table}
\begin{tabular}{l c c c c c c c c c c c c} \hline \hline & Text & VizWiz & Text & VizWiz & ST & OCR & Info & Doc & \multirow{2}{*}{AI2D} & \multirow{2}{*}{Chart} & \multirow{2}{*}{Screen2} & \multirow{2}{*}{Widget} & \multirow{2}{*}{OVEN} & Info \\ Model & Caps & Cap & VQA & VQA & VQA & VQA & VQA & VQA & & & & & & & & & & & & & & & & \\ \hline \multicolumn{10}{l}{_**with OCR pipeline input**} \\ \hline SoTA & 160.4 & 124.7 & 73.67 & 73.3 & 79.9 & 67.5 & 47.4 & 84.7 & 38.5 & 45.5 & - & - & - & - \\ PaLI-X & [5] & [5] & [52] & [5] & [5] & [53] & [54] & [54] & [45] & [46] & - & - & - & - \\ PaLI-X & **163.7** & **125.7** & **80.78** & **74.6** & **84.5** & **77.3** & **54.8** & **86.8** & **81.4** & **72.3** & - & - & - & - \\ \hline \multicolumn{10}{l}{_**without OCR pipeline input**} \\ \hline SoTA & 145.0 & 120.8 & 67.27 & 70.7 & 75.8 & 71.3 & 40.0 & 76.6 & 42.1 & 70.5 & 109.4 & 141.8 & 20.0 & 17.7 \\ & [9] & [9] & [5] & [9] & [27] & [27] & [27] & [27] & [8] & [27] & [20] & [47] & [48] \\ PaLI-X & **147.0** & **122.7** & **71.44** & **70.9** & **79.9** & **75.0** & **49.2** & **80.0** & **81.2** & **70.9** & **127.9** & **153.0** & **23.1** & **21.8** \\ \hline \hline \end{tabular}
\end{table}
Table 2: Results on benchmarks more focused on text understanding capabilities. For OVEN [47] & InfoSeek [48], we follow the proposed 224\(\times\)224 resolution settings for fair comparison.
approach, and the performance on OKVQA at 66.1 accuracy, matching the much-larger PaLM-E [50] model performance.
Next, we examine text-heavy V&L benchmarks, for which upstream OCR systems can be used to improve performance. As shown in Table 2, PaLI-X improves SoTA for all Captioning and VQA benchmarks across the board, either without or with additional OCR input (using the GCP Vision API). For instance, a significant jump of +42.9 points is observed on AI2D\({}^{3}\), a multiple-choice benchmark where choices are provided along with each question. Being able to have the text choices as input benefits PaLI-X compared with the previous SoTA Pix2Struct [27], which has to render the text on the image, but this does not explain all the improvements. In a question-only configuration (no answer choice present), PaLI-X achieves 46.3 on AI2D, more than 4 points higher than Pix2Struct's result.
Footnote 3: As with all the other benchmarks, our training examples are carefully deduped to exclude images occurring in these benchmarks, including AI2D. Such results, therefore, are _not_ attributable to train-test data leakage.
In general, having access to OCR texts extracted by an external OCR pipeline boosts performance. Still, for several benchmarks (e.g., AI2D, ChartQA, OCRVQA and Widget-Cap), PaLI-X's end-to-end performance when using its intrinsic OCR capability is close to that leveraging additional OCR input. A common feature for these benchmarks is that they have well-oriented text - diagrams, charts, book covers or user interfaces, with reasonably large font size at 756\(\times\)756 resolution. For tasks involving scene text in natural images (TextCaps, TextVQA, STVQA) or very high density of small texts (DocVQA, InfoVQA), results still highlight clear benefits when utilizing an external OCR model.
#### 4.1.2 Multitask Fine-tuning
We simultaneously fine-tune and evaluate the pretrained checkpoints on multiple benchmarks belonging to the same category. This is useful as it leads to a single fine-tuned model that performs all the tasks, rather than having to fine-tune each task separately. We deduplicated every training set over the test sets of every task in the mixture to prevent the leakage of any test-set examples into the mixed training set. We performed such multitask fine-tuning on all Image Captioning benchmarks and on most VQA benchmarks.
Table 3 shows the multitask fine-tuning result for captioning tasks. The performance on COCO is slightly decreased in the multitask setting, which is likely a result of this task needing longer training to converge. For Screen2Words, having the smallest train and dev/test sets could be responsible for the performance fluctuation. Notably, VizWiz-Cap and Widget-Cap show improved performance from multitask fine-tuning. Overall, the average performance decreases by 1.4 points (0.2 excluding Screen2Words) with multitask fine-tuning, while offering the clear advantage of having a single checkpoint to perform all these tasks. Appendix B shows similar results for VQA tasks. We consider this outcome a positive result that establishes the on-par performance between multitask fine-tuning and single-task fine-tuning for diverse benchmarks, in contrast with previous work which argued a gap between single-task and multitask fine-tuning [19], or demonstrated little gap over benchmarks from the same domain [20].
#### 4.1.3 Few-shot Evaluation
We fine-tuned the PaLI-X model on a mixture of few-shot tasks. The few-shot mixture contains Episodic mixtures, (Non-Episodic) WebLI and (Non-Episodic) CC3M data. Note that all of these datasets were already used in previous stages of training, but with lower mixture proportions. During
\begin{table}
\begin{tabular}{l c c c c c c c} \hline \hline \multirow{2}{*}{Method} & \multirow{2}{*}{COCO} & \multirow{2}{*}{NoCaps} & Text & VizWiz & Screen2 & Widget & Avg. \\ & & & Caps & Cap & Words & Cap & \\ \hline Split & Karp.-test & val & val & test-dev & test & test & - \\ \hline SOTA (Single-task FT) & 149.1 & **127.0** & 148.6 & 119.4 & 109.4 & 136.7 & \\ \hline PaLI-X Single-task FT & **149.2** & 126.3 & 150.8 & 123.1 & **127.9** & 153.2 & - \\ PaLI-X Multitask FT & 147.3 & 125.6 & **154.6** & **124.2** & 120.6 & **153.7** & - \\ Multitask (+/-) & -1.9 & -0.7 & +3.8 & +1.1 & -7.3\({}^{*}\) & +0.5 & -1.4 (-0.2 w/o “+”) \\ \hline \hline \end{tabular}
\end{table}
Table 3: Scores from multitask fine-tuning compared with those from single-task fine-tuning for Image Captioning. Validation or test-dev set numbers are reported for some tasks.
pre-training, we only use up to 4 shots, with both encoder and decoder shots (see Appendix B). For fine-tuning, we use up to 8 encoder shots and do not use decoder shots.
We evaluate the few-shot performance on the COCO caption (Karpathy test split [30]) and \(\mathrm{XM3600}\)[23] datasets. For each task, we first create a "shots pool" with 256 examples that are randomly selected from the task's training set. As the \(\mathrm{XM3600}\) benchmark does not come with a training set, we use the Google Translate API to enhance the COCO Karpathy training set with captions in the 35 languages represented in \(\mathrm{XM3600}\). Then, for each test data point, we randomly pick \(N\) shots from the pool as the actual few-shot examples. Following [10], we also evaluate a 2-text-only-shots setting in which only the textual part of 2 randomly sampled few-shot examples is used.
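The shot-selection procedure described above is simple to reproduce. Below is a minimal sketch, assuming a generic `model.generate` interface and a list-of-examples data structure; both are hypothetical stand-ins, not the actual PaLI-X evaluation code.

```python
import random

def build_shot_pool(train_examples, pool_size=256, seed=0):
    """Randomly select a fixed pool of candidate few-shot examples."""
    rng = random.Random(seed)
    return rng.sample(train_examples, pool_size)

def sample_shots(pool, n_shots, rng):
    """Pick the actual shots for one test example from the pool."""
    return rng.sample(pool, n_shots)

# Usage sketch (model and datasets are hypothetical):
# rng = random.Random(0)
# pool = build_shot_pool(coco_train, pool_size=256)
# for test_example in coco_test:
#     shots = sample_shots(pool, n_shots=4, rng=rng)
#     prediction = model.generate(shots, test_example)
```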
Table 4 reports the few-shot captioning performance on English and multilingual captioning, as well as few-shot VQA performance on VQAv2. PaLI-X achieves SOTA few-shot results on COCO with both 4 shots and 32 shots; it outperforms the previous SOTA by +4.4 CIDEr points for 4-shot, suggesting a strong ability to efficiently gather hints from few examples. We also report few-shot CIDEr scores averaged over 35 languages using \(\mathrm{XM3600}\), demonstrating PaLI-X's multilingual capabilities. Meanwhile, although PaLI-X also performs decently on VQAv2, the gap behind the SoTA Flamingo model [10] (which freezes the language backbone) may be the result of losing some of the few-shot text-only QA capability by fine-tuning the language backbone, which supports the hypothesis regarding the tension between few-shot and fine-tuning abilities.
### Video Captioning and Question Answering
We fine-tune and evaluate the PaLI-X model on 4 video captioning (MSR-VTT [55], VATEX [56], ActivityNet Captions [57], Spoken Moments in Time [58]) and 3 video question answering benchmarks (NExT-QA [59], MSR-VTT-QA [60], ActivityNet-QA [61]). A brief description of each benchmark and clarifications on their usage are provided in Appendix C.
Experimental setupWe fine-tune our model (with base resolution 224\(\times\)224) for each task separately, use the validation split for early stopping, and report performance on the test split. We use a learning rate of \(10^{-4}\) for all tasks, and do not adapt any hyperparameters for specific tasks. Frames are sampled using a fixed temporal stride for each dataset (determined based on the video length distribution in that dataset such that the product of the number of frames and stride is larger than the total number of frames for half of the videos), and we experimented with including up to 8 or 16 frames per video. We did not include pooling over the spatial dimension; embeddings for 16\(\times\)16 patches per frame are provided as visual input to the multimodal encoder.
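As a rough illustration of the frame-sampling rule above (a per-dataset stride chosen so that the number of frames times the stride exceeds the length of at least half of the videos), here is a sketch in Python; the start offset of zero and the handling of short videos are our assumptions, not details from the paper.

```python
import numpy as np

def choose_stride(video_lengths, num_frames):
    """Pick a per-dataset stride so that num_frames * stride covers at least
    half of the videos, i.e. exceeds the median length (in frames)."""
    median_len = float(np.median(video_lengths))
    return max(1, int(np.ceil(median_len / num_frames)))

def sample_frame_indices(video_len, num_frames, stride):
    """Sample up to num_frames frame indices with the fixed stride."""
    idx = np.arange(num_frames) * stride
    return idx[idx < video_len]

# Example: a dataset whose videos are mostly ~300 frames long, 16 frames per clip.
lengths = [240, 280, 300, 320, 360]
stride = choose_stride(lengths, num_frames=16)   # -> 19
frames = sample_frame_indices(300, 16, stride)   # 16 indices: 0, 19, ..., 285
```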
ResultsWe report CIDEr score for the video captioning tasks. Video QA tasks are treated as open-ended generation tasks; we report full-string accuracy (for MSR-VTT-QA and ActivityNet-QA) and WUPS metrics (NExT-QA) as in [65; 59]. As shown in Table 5, the 16-frame version has an edge over the 8-frame version, sometimes with a significant margin (e.g., close to a 6 point increase in CIDEr score for ActivityNet-Captions). More importantly, while PaLI-X pretraining was dominated by image-text tasks, we were able to achieve new SOTA performance for 5 out of 7 tasks\({}^{4}\), and performed very close to prior SOTA on MSR-VTT-QA (47.1 vs 47.4).
Footnote 4: As noted in Table 5, current SOTA on NExT-QA for the open-ended QA task was achieved by Flamingo 32-shot, which had outperformed prior fine-tuning SOTA. To the best of our knowledge, PaLI-X performance on this task does outperform existing published fine-tuning performances, with the caveat that we do not have information on what Flamingo fine-tuning would have achieved on this task.
### Image classification
To test image classification capabilities we fine-tuned PaLI-X and models from [5] on ImageNet [66] and evaluated the resulting model on ImageNet-REAL [67] and out-of-distribution
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline & \multicolumn{2}{c}{COCO Captions} & \multicolumn{2}{c}{\(\mathrm{XM3600}\) Cap. (35-lang avg.)} & \multicolumn{2}{c}{VQAv2} \\ \cline{2-5} Method & 4 shots & 32 shots & 4 shots & 32 shots & 4 shots & 32 shots \\ \hline Prev. SoTA [10] & 103.2 & 113.8 & N/A (53.6 w/ fine-tune [5]) & **63.1** & **67.6** \\ PaLI-X & **107.6** & **114.5** & 45.1 & 47.1 & 56.9 & 57.1 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Few-shot performance of the PaLI-X model (multilingual captioning for \(\mathrm{XM3600}\)).
datasets: ImageNet-R [68], ImageNet-A [69], ImageNet-Sketch [70], ImageNet-v2 [71]. We used the model from the first training stage (at resolution 224) and the one from the last training stage (at resolution 756). We used the same training hyperparameters for all runs (selected without any hyperparameter tuning; more details in Appendix D).
The results can be seen in Table 6. We compare the results to a generative model with open vocabulary, GIT2 [9] (using 384 image resolution), which is the current SOTA for full fine-tuning on ImageNet. PaLI-X achieves SOTA results for generative models on ImageNet and the other datasets. We also performed zero-shot evaluation for PaLI-X and the results can be found in Appendix D.
### Object Detection
Object detection can be easily formulated in our model as shown in pix2seq [72]. The dataset mix used for pre-training is presented in Sec. 3; detection data was included up to and including the stage using resolution 672, after which a separate detection-specific model was fine-tuned on detection data. Before detection-specific tuning, LVIS [73] & COCO labels were removed from all detection training datasets, allowing zero-shot evaluation on LVIS.
Bounding box mean AP on LVIS is shown in Table 7, including zero-shot performance; the detection-tuned model reaches an AP of 31 in general, and 31.4 on rare classes, and about 12 for both in zero-shot. Performance on rare classes was on par with performance on common classes, a difficult
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline Model (resolution) & INet [66] & REAL [67] & INet-R [68] & INet-A [69] & INet-Sketch [70] & INet-v2 [71] \\ \hline GIT2 [9] (384) & **89.22** & - & - & - & - & - \\ PaLI-17B [5] (224) & 86.13 & 88.84 & 78.21 & 50.00 & 71.21 & 78.91 \\ \hline PaLI-X (224) & 88.22 & 90.36 & 77.66 & 55.97 & 72.56 & 81.42 \\ PaLI-X (756) & **89.19** & **90.98** & **80.06** & **72.57** & **73.37** & **83.66** \\ \hline \hline \end{tabular}
\end{table}
Table 6: Classification accuracy (top-1) fine-tuned on Imagenet [66].
Figure 2: Examples demonstrating multilingual, OCR and other capabilities transferred to detection.
\begin{table}
\begin{tabular}{l c c c c c c c} \hline \hline & \multicolumn{2}{c}{MSR-VTT} & \multicolumn{2}{c}{ActivityNet} & VATEX & SMIT & NExT-QA \\ \cline{2-8} Method & Cap. [55] & QA [60] & Cap. [57] & QA [61] & Cap. [56] & Cap. [58] & QA [59] \\ \hline Prior SOTA & 75.9 & **47.4** & 52.5 & 44.7 & 94.0\({}^{\dagger}\) & 28.1\({}^{\ddagger}\) & 33.5\({}^{\S}\) \\ & GIT2 [9] & Flamingo [10] & PDVC [62] & VINDLU [63] & GIT2 [9] & MV-GPT [64] & Flamingo 32-shot [10] \\ \hline PaLI-X (8fr) & 74.6 & 46.9 & 49.0 & 48.4 & 66.0 & 42.5 & 37.0 \\ PaLI-X (16fr) & **76.8** & **47.1** & **54.9** & **49.4** & 69.3 & **43.5** & **38.3** \\ \hline \hline \end{tabular}
\end{table}
Table 5: Results for Video Captioning and Video-QA using 8 frames (8fr) or 16 frames (16fr). \({\dagger}\)GIT2 uses Self-Critical Sequence Training to directly optimize the CIDEr metric for VATEX. \({\ddagger}\)SMIT has not been used for video captioning before; we apply MV-GPT [64] and report results on the test set. \({\S}\)Numbers were obtained using 32-shot; since Flamingo 32-shot outperforms fine-tuning SOTA on this open-ended QA task, they did not conduct further fine-tuning experiments for this task.
feat traditionally accomplished by complicated sampling schedules and augmentations. In our setup, it is directly enabled by PaLI-X's diverse training mix. This could likely be further improved with investment in fine-tuning, e.g., using noise-augmentation methods from pix2seq [72], or a further stage of high-resolution, LVIS-only training. Qualitatively, we observe the emergence of many interesting phenomena enabled by co-training with non-detection tasks; for example, multilingual detection, OCR bounding boxes and longer descriptions, none of which are included in detection training, are often handled well by PaLI-X. Additional results and information can be found in Appendix E.3.
## 5 Model Fairness, Biases, and Other Potential Issues
Large models, if left unchecked, have the potential to inflict harm on society - such as amplifying biases [76; 77; 78; 79], causing disparities [78; 80; 81], or encoding narrow cultural perspectives [82; 83]. Hence, evaluating PaLI-X for such potential issues is important. We focus our RAI evaluation on three parts: (1) harmful associations, such as toxicity and profanity, (2) demographic parity in the model's output, such as encoding societal stereotypes/biases, and (3) performance disparity across subgroups. This breakdown follows earlier works in the literature, such as [84].
Toxicity / profanity.We estimate the level of toxicity and profanity in the generated captions, including when disaggregated across subgroups. We use the FairFace dataset [85], which comprises images of people with ground-truth attributes: gender presentation, age and ethnicity. We generate captions and use the Perspective API [86] (threshold \(>0.8\)) to measure toxicity and profanity. Table 8 summarizes the results; we observe a low level of toxicity/profanity across all slices. Tables 9 and 10 provide a detailed breakdown of toxicity/profanity results for all subgroups in the FairFace dataset. In Tables 11 and 12, we report similar results on the MIAP [87] dataset, disaggregated by perceived gender and age.
Bias / Demographic Parity.We estimate the level of demographic parity (DP) [88] in PaLI-X with respect to gender and occupation. To do so, we feed an image into PaLI-X with the chosen occupation title as a prefix and record the average log-perplexity score of the captions generated by the model. To ensure that any observed parity would likely reflect unintended biases in the model itself as opposed to the evaluation dataset, we use CelebA [89], which contains celebrity images with gender presentation annotation. Our assumption is that many occupations reflecting societal stereotypes, such as secretaries and plumbers, are quite rare in the CelebA dataset, so disparities in the output may reflect what is encoded in the model itself. The list of occupations is compiled based on [90] and the US job statistics report in [91].
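A minimal sketch of the demographic-parity computation described above, assuming the per-caption log-perplexity scores have already been collected; the grouping keys, sign convention, and record format are illustrative assumptions rather than the actual evaluation code.

```python
from collections import defaultdict
import numpy as np

def demographic_parity_gaps(records):
    """records: iterable of (occupation, gender, log_perplexity) triples,
    where gender is 'woman' or 'man'. Returns occupation -> mean score for
    women minus mean score for men; values near zero indicate no measured gap."""
    scores = defaultdict(lambda: {"woman": [], "man": []})
    for occupation, gender, log_ppl in records:
        scores[occupation][gender].append(log_ppl)
    gaps = {}
    for occupation, by_gender in scores.items():
        # Assumes both groups are non-empty for every occupation considered.
        gaps[occupation] = float(np.mean(by_gender["woman"]) -
                                 np.mean(by_gender["man"]))
    return gaps
```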
Figure 3 (top) summarizes the overall results. First, PaLI-X tends to assign a higher log-perplexity score to women than men across most occupations; i.e. men are predicted to be more likely to hold such occupations. Second, PaLI-X assigns a higher likelihood for a woman to be ('secretary'
\begin{table}
\begin{tabular}{l c c} \hline \hline & LVIS AP & LVIS AP\({}_{\text{Rare}}\) \\ \hline ViLD [74] (tuned on non-rare LVIS) & 29.3 & 26.3 \\ Region-CLIP [75] (tuned on non-rare LVIS) & 32.3 & 22.0 \\ OwLViT-L/16 [28] (tuned on non-rare LVIS) & 34.7 & 25.6 \\ OwLViT-L/16 [28] (with Object365 and VG datasets) & 34.6 & 31.2 \\ \hline PaLI-X (Zeroshot) & 12.36 & 12.16 \\ PaLI-X (Detection-tuned) & 30.64 & 31.42 \\ \hline \hline \end{tabular}
\end{table}
Table 7: PaLI-X object detection results on LVIS. The diverse pre-training mix enables parity performance between LVIS rare and common classes. Other related approaches are shown for context, but are not directly comparable.
\begin{table}
\begin{tabular}{l|c c|c c c|c c c|c} \hline \hline & \multicolumn{2}{c}{Gender} & \multicolumn{3}{c}{Ethnicity} & \multicolumn{3}{c}{Age} & \\ & Lowest & Highest & Lowest & Median & Highest & Lowest & Median & Highest & **Overall** \\ \hline
**Toxicity** & 0.14\% & 0.19\% & 0.00\% & 0.13\% & 0.39\% & 0.00\% & 0.17\% & 0.31\% & **0.01\%** \\
**Profanity** & 0.00\% & 0.02\% & 0.00\% & 0.00\% & 0.05\% & 0.00\% & 0.00\% & 0.03\% & **0.00\%** \\ \hline \hline \end{tabular}
\end{table}
Table 8: Average toxicity/profanity in the captions generated by PaLI-X on FairFace dataset.
'actor') and a higher likelihood for a man to be ('guard' & 'plumber') at the 95% confidence level. Figure 3 (bottom) displays the corresponding correlations between perceived gender presentation and occupations within the WebLI dataset, where we use the Pearson correlation coefficient by treating each label as a binary random variable and noting that for binary random variables, zero correlation implies full independence. All absolute correlation coefficients in the data are \(<0.2\) with 99% of them being \(<0.1\).
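For the correlation analysis, the Pearson coefficient between two binary indicator variables (the phi coefficient) can be computed directly; a small sketch follows, with hypothetical label vectors, only to make the stated "zero correlation implies independence" fact concrete.

```python
import numpy as np

def binary_pearson(x, y):
    """Pearson correlation between two 0/1 label vectors (the phi coefficient).
    For binary random variables, zero correlation is equivalent to independence."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    return float(np.corrcoef(x, y)[0, 1])

# e.g. per-image indicators (hypothetical): feminine presentation vs. a given
# occupation word being present in the alt-text.
# corr = binary_pearson(gender_labels, occupation_labels)
```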
Performance Disparity.We present here an evaluation of how well PaLI-X performs across different subgroups using the MIAP [87] dataset. For images containing exactly a single individual, we query PaLI-X with the question: "Is there a person in this image?" and evaluate the accuracy of its response. Note that there are no false positives in this evaluation. Table 13 summarizes the results. We observe that PaLI-X maintains a high accuracy across all subgroups.
\begin{table}
\begin{tabular}{l|c c c|c c c} \hline \hline Age & \multicolumn{3}{c}{Toxicity} & \multicolumn{3}{c}{Profanity} \\ \hline & \(<0.2\) & \(0.2-0.8\) & \(>0.8\) & \(<0.2\) & \(0.2-0.8\) & \(>0.8\) \\ \hline
\(<19\) & 58.78\% & 40.00\% & 0.22\% & 89.71\% & 10.29\% & 0.00\% \\
20 - 29 & 63.01\% & 36.86\% & 0.12\% & 93.24\% & 6.73\% & 0.03\% \\
30 - 39 & 63.13\% & 36.70\% & 0.17\% & 95.41\% & 4.59\% & 0.00\% \\
40 - 49 & 63.62\% & 36.31\% & 0.07\% & 95.27\% & 4.73\% & 0.00\% \\
50 - 59 & 65.87\% & 33.88\% & 0.25\% & 96.48\% & 3.52\% & 0.00\% \\
60 - 69 & 65.31\% & 34.38\% & 0.31\% & 95.95\% & 4.05\% & 0.00\% \\ \(>70\) & 66.10\% & 33.90\% & 0.00\% & 92.37\% & 7.63\% & 0.00\% \\ \hline \hline \end{tabular}
\end{table}
Table 10: Distribution of the predicted toxicity/profanity for the captions generated by PaLI-X on FairFace dataset disaggregated by age.
\begin{table}
\begin{tabular}{l|c c c|c c c} \hline \hline \multicolumn{1}{l|}{\multirow{2}{*}{Age Bucket}} & \multicolumn{3}{c}{Toxicity} & \multicolumn{3}{c}{Profanity} \\ \hline & \(<0.2\) & \(0.2-0.8\) & \(>0.8\) & \(<0.2\) & \(0.2-0.8\) & \(>0.8\) \\ \hline
0-2 yrs & 28.00\% & 72.00\% & 0.00\% & 69.90\% & 30.10\% & 0.00\% \\
3-19 yrs & 49.96\% & 49.96\% & 0.07\% & 91.46\% & 8.54\% & 0.00\% \\
20-59 yrs & 66.27\% & 33.68\% & 0.05\% & 93.42\% & 6.55\% & 0.03\% \\ \(>60\) yrs & 65.46\% & 34.54\% & 0.00\% & 96.39\% & 3.61\% & 0.00\% \\ \hline \hline \end{tabular}
\end{table}
Table 12: Distribution of the predicted toxicity/profanity for the captions generated by PaLI-X on MIAP dataset disaggregated by age bucket.
\begin{table}
\begin{tabular}{l|c c c|c c c} \hline \hline \multicolumn{1}{l|}{\multirow{2}{*}{Ethnicity}} & \multicolumn{3}{c}{Toxicity} & \multicolumn{3}{c}{Profanity} \\ \hline & \(<0.2\) & \(0.2-0.8\) & \(>0.8\) & \(<0.2\) & \(0.2-0.8\) & \(>0.8\) \\ \hline & \(<0.2\) & \(0.2-0.8\) & \(>0.8\) & \(<0.2\) & \(0.2-0.8\) & \(>0.8\) \\ \hline Middle Eastern & 64.24\% & 35.76\% & 0.00\% & 94.87\% & 5.13\% & 0.00\% \\ Black & 59.47\% & 40.40\% & 0.13\% & 92.67\% & 7.33\% & 0.00\% \\ Indian & 63.86\% & 36.07\% & 0.07\% & 94.39\% & 5.61\% & 0.00\% \\ Hispanic & 61.09\% & 38.79\% & 0.12\% & 94.45\% & 5.55\% & 0.00\% \\ White & 62.45\% & 37.16\% & 0.39\% & 92.85\% & 7.10\% & 0.05\% \\ Southeast Asian & 63.18\% & 36.61\% & 0.21\% & 93.57\% & 6.43\% & 0.00\% \\ East Asian & 63.15\% & 36.72\% & 0.13\% & 91.55\% & 8.45\% & 0.00\% \\ \hline \hline \end{tabular}
\end{table}
Table 9: Distribution of the predicted toxicity/profanity for the captions generated by PaLI-X on FairFace dataset disaggregated by ethnicity.
Limitations.The analysis carried out in this section is necessarily limited, since fairness is a societal concept that cannot be reduced to statistical metrics. We expect RAI evaluations to evolve over time as new issues are detected and reported in the literature and additional datasets become available. Statistical analysis is only a single step and does not substitute for studying the broad and delayed impact of deployed models.
In addition, we rely in some parts on automated tools for inferring attributes, which are not perfectly accurate and can lead to a broad categorization of people that misidentifies real identities. We do not support the creation or application of classifiers for sensitive attributes, such as gender or ethnicity, based on visual indicators and encourage readers to delve into the comprehensive work outlining their potential risks, such as [93, 94], for further insight. Also, while we use perceived gender presentation in our analysis that is provided by the data (i.e. in CelebA and FairFace), we acknowledge that people may express their gendered identities in numerous other ways.
In our evaluation, toxicity is predicted based on the generated captions only. However, without knowing the context of the image, this can introduce false positives.
## 6 Conclusions
In this work we draw more insights from further scaling vision and language models. We show that scaling, together with the improved training recipe, results in a model that substantially outperforms previous state-of-the-art models, leads to emergent behaviors, and identifies further margins for improvement. In particular, we report that the model achieves significant improvements at document, chart, and infographic understanding, captioning, visual question answering, and counting, and performs well on few-shot (in-context) captioning, video captioning and question-answering, and object detection.
\begin{table}
\begin{tabular}{l c c c c c c c c} \hline \hline
**Skin Tone** & **1**[2] & **2**[871] & **3**[3008] & **4**[522] & **5**[184] & **6**[85] & **7**[54] & **8**[49] & **9**[6] & **10**[1] \\ & 0.00\% & 0.11\% & 0.47\% & 1.53\% & 0.54\% & 1.18\% & 0.00\% & 0.00\% & 0.00\% \\ \hline
**Gender** & \multicolumn{4}{c}{**Predominantly Feminine**[2437]} & \multicolumn{4}{c}{**Predominantly Masculine**[3544]} \\ & \multicolumn{4}{c}{0.53\%} & \multicolumn{4}{c}{0.85\%} \\ \hline
**Age Bucket** & \multicolumn{4}{c}{**0-2 yrs**[17]} & **3-19 yrs**[568] & **20-59 yrs**[4925] & **> 60 yrs**[247] \\ & \multicolumn{4}{c}{0.00\%} & \multicolumn{4}{c}{0.00\%} & \multicolumn{4}{c}{0.77\%} & \multicolumn{4}{c}{0.81\%} \\ \hline \hline \end{tabular}
\end{table}
Table 13: Detection error rate for “person” in PaLI-X using the subset of the MIAP dataset [87] that contain exactly a single individual in the image. PaLI-X maintains a low error rate across all subgroups. Skin tone follows the Monk Skin Tone Scale [92]. Numbers inside square brackets correspond to the size of each bucket.
Figure 3: top: Level of demographic parity (DP) in PaLI-X’s output for CelebA images between women and men. Values close to zero indicate absence of bias. bottom: _Absolute_ Pearson correlation coefficients between gender presentation and occupations in WebLI.
## Acknowledgements
We would like to thank Sarah Laszlo, Kathy Meier-Hellstern, Caroline Pantofaru, Susanna Ricco, Candice Schumann, Ken Burke, Simon Wang, Rachel Hornung, Yichang Chen, Utsav Prabhu, Abhijit Ogale, Kristina Toutanova, Weicheng Kuo, Jihyung Kil, Xiangning Chen, Liang Chen, Rich Lee, Elizabeth Adkison, James Cockerille, Eric Ni, Erica Moreira, Victor Gomes, Jeremiah Harmsen, Claire Cui, Slav Petrov, Tania Bedrax-Weiss, Joelle Barral, Tom Duerig, Paul Natsev, Fernando Pereira, Jeff Dean, and Zoubin Ghahramani for helpful discussions, feedback, and support.
Additional Model Details and Examples
### PaLI-X Architecture Illustration
### Tuning ViT-22B for better OCR capabilities
The vision encoder's ability to understand text is crucial to several downstream tasks and general usability. JFT-based pre-training is insufficient to cover this, and so we tuned ViT-22B on WebLI-OCR data. In order to stay true to the original discriminative classification-based objective used for ViT-22B, we turn OCR into a bag-of-words prediction task. OCR texts are tokenized using the mT5 tokenizer [95] across all languages, and the model is trained to predict whether or not a given token occurs in an image. This is treated as multilabel classification, with an expanded classification head.
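A schematic of the bag-of-words OCR objective described above, assuming token ids have already been produced by the mT5 tokenizer; the loss form (sigmoid binary cross-entropy, summed over the vocabulary) and head details are our assumptions, not the ViT-22B implementation.

```python
import numpy as np

def ocr_bag_of_words_targets(token_ids, vocab_size):
    """Multi-hot target: 1 for every tokenized OCR token present in the image."""
    target = np.zeros(vocab_size, dtype=np.float32)
    target[np.unique(token_ids)] = 1.0
    return target

def multilabel_bce(logits, target):
    """Sigmoid binary cross-entropy over the expanded classification head."""
    log_p = -np.logaddexp(0.0, -logits)      # log sigmoid(logits)
    log_not_p = -np.logaddexp(0.0, logits)   # log(1 - sigmoid(logits))
    return float(-(target * log_p + (1.0 - target) * log_not_p).sum())

# target = ocr_bag_of_words_targets(token_ids=[17, 512, 17], vocab_size=250112)
# loss = multilabel_bce(logits, target)      # logits: one score per token id
```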
In the ablation study shown in Table 22, we confirm that this extra tuning step indeed yields a significant improvement in Scene-Text understanding capabilities, as demonstrated by the performance on ST-VQA and TextVQA. Meanwhile, the performance on regular VQA tasks such as those in the VQAv2 benchmark also improves.
### Illustrative PaLI-X Examples
Table 14 shows representative examples of PaLI-X, illustrating improved abilities related to counting (both of the simple and complex variety), in context text-reading capabilities, and spatial awareness.
Figure 4: Visual input for videos: each frame is independently processed by ViT; patch embeddings are flattened and concatenated together to form the visual representation. (The example input image is in the public domain).
## Appendix B Additional results: Image Captioning and VQA
### Information of Downstream Image Benchmarks
Table 15 summarizes the Image Captioning and VQA benchmarks. For benchmarks modeled only end-to-end without OCR pipeline input (Table 1 and Table 16), fine-tuning is performed with resolution 672\(\times\)672. For Scene-Text and Document Understanding tasks presented in Table 2, fine-tuning is performed with resolution 756\(\times\)756.
### Extended Tables of Image Benchmarks
An extended table of results on some Image Benchmarks is shown as Table 16.
Table 14: Examples of counting, text reading capabilities with context and spatial awareness. Results are generated by the multi-task-finetuned models using the model’s inherent OCR capabilities (i.e., without the use of an external OCR system). The example images are omitted here; the recoverable Q/A pairs (image credit: Wikimedia Commons, CC BY-SA 4.0) are: Q: what is written inside the box? A: dr. strangeglove’s secret uses of uranus. Q: what is written on the top-left corner of the page? A: the bomb and dr. strangeglove. Q: what is written on the top-right corner of the page? A: doctor doomsday. (Second example image credit: ChrisGoldNY (flickr), CC BY-NC 2.0.)
### Multi-lingual Captioning
Multilingual captioning on XM-3600The Crossmodal-3600 (XM3600) benchmark contains a geo-diverse set of 3600 images with human-annotated reference captions in 36 languages [23]. Table 17 presents multilingual results for both PaLI (current SoTA on XM-3600) and PaLI-X, both finetuned with 224\(\times\)224 resolution. Overall, PaLI-X improves on the SoTA performance for 5 of the 7 languages we report here (and for 14 of the total 35 languages considered); notably, the performance on English is 4 CIDEr points lower compared to PaLI. The 35-language average CIDEr score is in the same ballpark between PaLI and PaLI-X, with a slight +0.5 advantage for PaLI.
\begin{table}
\begin{tabular}{l c c c c c c c c} \hline \hline & \multicolumn{2}{c}{COCO} & \multicolumn{2}{c}{NoCaps} & \multicolumn{2}{c}{VQAv2} & \multicolumn{2}{c}{OKVQA} & \multicolumn{2}{c}{TallyQA} \\ \cline{2-9} Model & Karp.-test & val & test & test-dev & test-std & val & simple & complex \\ \hline SimVLM & 143.3 & 112.2 & 110.3 & 80.03 & 80.34 & - & - & - \\ CoCa (2.1B) & 143.6 & 122.4 & 120.6 & 82.3 & 82.3 & - & - & - \\ GIT (0.7B) & 144.8 & 125.5 & 123.4 & 78.56 & 78.81 & - & - & - \\ GIT2 (5.1B) & 145.0 & 126.9 & **124.8** & 81.74 & 81.92 & - & - & - \\ OFA (0.9B) & 145.3 & - & - & 82.0 & 82.0 & - & - & - \\ Flamingo (80B) & 138.1 & - & - & 82.0 & 82.1 & 57.8\({}^{*}\) & - & - \\ BEiT-3 (1.9B) & 147.6 & - & - & 84.2 & 84.0 & - & - & - \\ PalM-E (562B) & 138.7 & - & - & 80.0 & - & **66.1** & - & - \\ MoViE & - & - & - & 69.26 & - & - & 74.9 & 56.8 \\ PaLI (17B) & 149.1 & **127.0** & 124.4 & 84.3 & 84.3 & 64.5 & 81.7 & 70.9 \\ \hline PaLI-X (55B) & **149.2** & 126.3 & 124.3 & **86.0** & **86.1** & **66.1** & **86.0** & **75.6** \\ \hline \hline \end{tabular}
\end{table}
Table 16: Results on COCO Captions (Karpathy split), NoCaps, VQAv2, OKVQA, and TallyQA with end-to-end modeling without OCR pipeline input. The “simple” and “complex” are test subsplits.
\begin{table}
\begin{tabular}{l c c c c c c c} \hline \hline Model & en & fr & hi & iw & ro & th & zh & 35-lang avg. \\ \hline PaLI & **98.1** & 75.5 & 31.3 & 46.8 & 35.8 & 72.1 & **36.5** & **53.6** \\ PaLI-X & 94.2 & **78.7** & **32.0** & **46.9** & **36.9** & **75.3** & 36.1 & 53.1 \\ \hline \hline \end{tabular}
\end{table}
Table 17: CIDEr scores on image captioning for the Crossmodal-3600 benchmark for seven diverse languages (English, French, Hindi, Hebrew, Romanian, Thai, and Chinese), as well as the average of the 35 languages covered by the benchmark. Both models are finetuned with 224\(\times\)224 resolution.
### TallyQA and the emergence of complex counting capability
We present in Table 18 the performance of similar models across a wide range of capacity - from 700M parameters to 55B parameters for PaLI-X. The graphs in Fig. 5 illustrate how simple counting appears to follow a more linear progression as parameter size increases, while complex counting appears to show emergence somewhere before the datapoint provided by the performance of PaLI 17B. This corresponds to our intuition that complex counting is a true multimodal task that requires additional capabilities from a model, in terms of the alignment that is required between the visual information and the prompt specification.
### Details on Few-shot Modeling
#### b.5.1 Few-shot Formulation
Figure 6 illustrates the network flow of a few-shot model. The text and prompt part of each shot is embedded and concatenated as text features for the PaLI-X model. Each shot's images and the target image are independently encoded by the ViT component, and the ViT features are concatenated along the sequence axis as visual features. Conditioned on that sequence, the PaLI-X decoder autoregressively makes the predictions for the target image.
Encoder shots and Decoder shotsWhile images for all few-shot examples and the target example are given as input to the model, text information can be provided in different ways. During inference time, all text information related to the few-shot examples is given to the encoder; in the case of a Multi-answer VQA task, for example, this includes both the prompts that contain the questions, and the expected answers. The prompt for the target example is also given to the encoder, and the decoder is tasked with generating an answer for the target example. During training, however, we increase the training efficiency by making the model predict answers for both the target example and selected shots (the _decoder shots_). That is, we partition the \(N\) shots in two sets: encoder shots (\(N_{e}>0\)) and decoder shots (\(N_{d}\geq 0\)), such that \(N_{e}+N_{d}\leq N\). We use up to 4 shots in total during pre-training (i.e., \(N=4\)), and sample \(N_{e}\) uniformly at random from 1 to \(N\). Text input for encoder shots contains
Figure 5: Performance on TallyQA splits for simple and complex using PaLI variants and PaLI-X. All models use 224\(\times\)224 image resolution. The emergent behavior on complex counting beyond the 3B size is made clear with PaLI-X.
\begin{table}
\begin{tabular}{l c c c} \hline \hline Model & TallyQA simple & TallyQA complex & Weighted average \\ \hline PaLI (700M) & 66.9 & 55.6 & 62.4 \\ PaLI (3B) & 72.0 & 56.7 & 65.9 \\ PaLI (17B) & 76.2 & 65.5 & 71.9 \\ PaLI-X (55B) & 81.3 & 71.0 & 77.2 \\ \hline \hline \end{tabular}
\end{table}
Table 18: Performance on TallyQA splits for simple and complex questions. All models use 224\(\times\)224 image resolution.
both prompts and answers. The decoder shots, however, act as if they were target examples: their text input to the encoder contains only the prompt, and the decoder needs to predict answers for the decoder shots in addition to the target example.
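The encoder/decoder-shot split can be sketched as follows; the concatenation format and the field names (`prompt`, `answer`) are illustrative assumptions rather than the actual training-data schema.

```python
import random

def partition_shots(shots, rng):
    """Split the available shots into encoder shots (prompt + answer visible to
    the encoder) and decoder shots (prompt only; answer must be predicted)."""
    n = len(shots)                      # N, between 1 and 4 during pre-training
    n_enc = rng.randint(1, n)           # N_e sampled uniformly from 1..N
    shuffled = shots[:]
    rng.shuffle(shuffled)
    return shuffled[:n_enc], shuffled[n_enc:]

def build_text_inputs(encoder_shots, decoder_shots, target_prompt):
    """Encoder sees full text for encoder shots, prompts only for decoder shots
    and the target; the decoder must produce the remaining answers."""
    encoder_text = [s["prompt"] + " " + s["answer"] for s in encoder_shots]
    encoder_text += [s["prompt"] for s in decoder_shots]
    encoder_text.append(target_prompt)
    decoder_targets = [s["answer"] for s in decoder_shots]  # plus the target answer
    return encoder_text, decoder_targets
```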
Attention re-weightingIncreasing the number of shots turned out to be challenging, potentially due to cross-attention to target example input tokens getting diluted by the large number of shots. To address this, we introduce an attention re-weighting mechanism. As shown in Figure 7, we explicitly boost the weights for cross attention between decoder tokens and encoded tokens from the target example (that is, the target image and the target text prompt).
Specifically, if there are \(N\) shots in total, when decoding each token we multiply the cross attention weights by \(N\) for the target image and text tokens from the encoder outputs. We observe this attention re-weighting technique is especially helpful when we provide the model with many shots (e.g. 32 shots). [96] introduces a technique along similar lines to manipulate attention weights when gathering them from different threads of encoded shots at inference time.
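A minimal reading of this re-weighting rule in NumPy is shown below; the text states only the multiplicative boost on target tokens, so the renormalization step here is our assumption.

```python
import numpy as np

def reweight_cross_attention(attn, target_mask, num_shots):
    """attn: [num_queries, num_kv] softmaxed cross-attention weights.
    target_mask: [num_kv] boolean, True for the target image/text tokens.
    Boost weights on target tokens by the number of shots, then renormalize."""
    boosted = attn * np.where(target_mask, float(num_shots), 1.0)
    return boosted / boosted.sum(axis=-1, keepdims=True)
```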
#### b.5.2 Additional Few-shot Results
Multilingual captioning resultsTable 19 reports the CIDEr scores for 7 languages and an average over 35 languages to demonstrate PaLI-X's multilingual captioning capabilities on the \(\text{XM}3600\) benchmark in the few-shot setting. The pre-trained model (no few-shot finetuning) achieves an average score of 22.7. The PaLI-X model achieves an average score of 45.1 for 4 shots and 47.1 for 32 shots. Note that the 32-shot PaLI-X average CIDEr score is only 6 points behind the fully finetuned model, which uses roughly 600k training examples per language (while the few-shot approach does not update the model parameters).
Qualitative resultsFigure 8 shows 3 examples on few-shot captioning and VQA tasks for qualitative analysis. The first row shows captions for the images using the images' original language,
Figure 6: A detailed view on how the few-shot exemplars are fed to the model components.
Figure 7: Re-weighted attention with few-shots.
demonstrating the cross-lingual transfer of the few-shot capability. The second row captions the images with a country's popular food, showing that the few-shot approach can access the model's world knowledge. The last row shows a VQA scenario with an explanation-like flavor, where we ask whether the technologies in the images are "new". Generally speaking, the shown personal computer was produced more than 40 years ago and could be regarded as old technology considering the fast pace of current high-tech development. However, the 3 input shots calibrate the concept of "new" in detail, and the few-shot model successfully takes this context into account, outputting "new" with a plausible explanation for the very old PC.
#### b.5.3 Few-shot ablation results
In this section, we present and discuss some few-shot ablation results that we explored in order to inform our final design choices for PaLI-X. Unless otherwise specified, we use a 700M-parameter model with the same encoder-decoder architecture, consisting of a ViT-B/16 vision encoder and an mT5-base encoder-decoder language model.
Pooling vs not pooling image tokensTo mitigate the computational burden that arises with many shots, we can pool (for example, average) the per-image tokens before concatenating all input tokens. This pooled image tokens model achieved a CIDEr score of 56.3 for 4-shots COCO captioning, which is substantially lower than the full model's CIDEr score of 61.7. This highlights the importance of keeping all the tokens coming out of the ViT encoder, despite the computational overhead.
Limited-range Encoding Attention.We explore per-example image-text attention, as proposed and applied in [10]. Under this approach, the image query tokens for each example can only attend
Figure 8: Qualitative Results on few-shot captioning (first two rows) and VQA (the last row) tasks.
\begin{table}
\begin{tabular}{l|c c c c c c c c} \hline \hline & \multicolumn{6}{c}{Crossmodal-3600 Captioning} \\ \hline & en & fr & hi & iw & ro & th & zh & 35-lang avg. \\ \hline PaLI-X 0-shot & 48.8 & 25.0 & 10.5 & 20.1 & 13.0 & 33.3 & 18.4 & 22.7 \\ PaLI-X (2 text-only shots\({}^{5}\)) & 54.5 & 46.7 & 12.0 & 22.2 & 9.4 & 40.3 & 23.7 & 25.8 \\ PaLI-X 4 shots & 77.8 & 62.5 & 22.2 & 38.7 & 30.2 & 56.0 & 27.7 & 45.1 \\ PaLI-X 32 shots & 81.4 & 66.1 & 25.6 & 40.6 & 32.4 & 59.4 & 29.7 & 47.1 \\ \hline PaLI-X (finetuned) & 94.2 & 78.7 & 32.0 & 46.9 & 36.9 & 75.3 & 36.1 & 53.1 \\ \hline \hline \end{tabular}
\end{table}
Table 19: Few-shot performance of the PaLI-X model on multilingual captioning tasks.
to its corresponding text tokens, while the text query tokens can attend to all tokens. By using this per-example attention model, we achieved a CIDEr score of 59.6, which is 2.1 points lower than the full attention model's CIDEr score of 61.7 for 4-shots COCO captioning.
Attention re-weighting for large number of shots.We report the few-shot results on COCO captioning from early-stopped PaLI-2 3B models; in this case, we did not apply normalized attention in training. We provide the test results with and without attention re-weighting during _inference_ for a different number of encoder shots. Attention re-weighting achieves increasing CIDEr scores of 82.1, 84.3 and 84.5 with 4, 8 and 16 shots respectively. On the other hand, the model achieves 83.4, 76.5 and 66.3 without attention re-weighting. The decreasing performance may suggest that the model fails to locate the target image and text prompt among the large number of shots, whereas the attention re-weighting helps the model to focus on the target features. Accordingly, we decided to include attention re-weighting during finetuning for PaLI-X.
Distributing shots between encoder and decoder.We explore the use of both encoder and decoder shots during pre-training. We pretrain the PaLI-2 700M model on PaLI-2 mixtures with varying number of encoder shots (between 1 and 4). The remaining shots (up to exactly 4) are used as decoder shots. Using only encoder shots leads to a 64.0 CIDEr score for 4 shots in COCO captioning. The best mix of encoder and decoder shots achieves a CIDEr score of 65.2. This suggests splitting shots leads to a more challenging pre-train task that helps the model learn more efficiently.
### Finetuning hyperparameters
The hyperparameter choices for downstream finetuning experiments are summarized in Table 20. As mentioned in the Main Text, for all of the downstream finetuning experiments, we used a reduced set of hyperparameters, without heavy per-task optimization.
### Multi-task finetuning
We deduplicated every training set mixture over the test sets of every task in order to prevent leakage of any test-set examples into the training set. The mixture is formed by putting the training examples of each subtask together, with heuristic adjustments for a better balance. Following the resolutions for the single-task finetuning, the multi-task captioning and VQA finetuning are done with 672 and 756 image resolutions, respectively. The multitask finetuning covers just about 5M examples, which is 20k steps with a batch size of 256. For scene-text and document understanding tasks, the multi-task finetuning uses the end-to-end setting without OCR pipeline input.
The following aspects made multitask finetuning particularly challenging: (i) all tasks used the same prompt without task-specific indicators; the model is thus required to adapt to the style of multiple benchmarks simultaneously. (ii) We do not perform per-task validation set optimization. All subtasks are evaluated using the same checkpoint, but tasks converge to their optimal value at a different pace.
### Ablation studies
We first show in Table 22 the advantage brought by the OCR co-training stage of ViT-22B. We pair the vanilla ViT-22B and the ViT-22B with additional OCR co-training with a small language model, mT5-base, and pretrain these models on 40M WebLI-OCR examples with the split OCR objective, before finetuning on ST-VQA. Co-training on image and OCR classification has a significant advantage on
\begin{table}
\begin{tabular}{l c c c} \hline \hline Benchmark & learning rate schedule & Steps before LR decay to 0 & batch size \\ \hline COCO & & 10k & 256 \\ VQAv2 & & 20k & 256 \\ OCRVQA & & 20k & 256 \\ Multitask-VQA & & 20k & 256 \\ Multitask-Captioning & & 20k & 256 \\ All other & & 5k & 128 \\ \hline \hline \end{tabular}
\end{table}
Table 20: Hyperparameter used for finetuning PaLI-X.
ST-VQA and TextVQA. Meanwhile, the performance on VQAv2, which is not very scene-text heavy, is improved as well. Moreover, we found that making the top-left patch white, which helped the co-training of image classification and OCR classification on ViT-22B, is not required for the subsequent training of PaLI-X.
For ablation of the PaLI-X training procedure, we used a 5B model with UL2-3B and ViT-G with 2B parameters, which is roughly a 10:1 down-scale of the PaLI-X 55B model.
For stage 1 training, we show in Table 23 that adding image token generation does not harm the performance on the main image+language understanding tasks.
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline Model & OCR-task Indicator & ST-VQA & TextVQA & VQAv2 & 3-task avg. \\ \hline mT5-base + Vanilla ViT-22B & No & 42.6 & 36.1 & 68.9 & 49.2 \\ \hline mT5-base + ViT-22B-OCR & No & **47.0** & 38.9 & 69.8 & **51.9** \\ mT5-base + ViT-22B-OCR & Yes & 46.2 & **39.4** & **70.2** & **51.9** \\ \hline \hline \end{tabular}
\end{table}
Table 22: Advantage of the OCR co-training stage of ViT-22B. Pretraining is performed with resolution 224\(\times\)224 and finetuning is with 448\(\times\)448. Numbers reported are on validation split.
\begin{table}
\begin{tabular}{l c c c c c c c c c c} \hline \hline & \begin{tabular}{c} VQA \\ v2 \\ \end{tabular} & \begin{tabular}{c} OK \\ VQA \\ \end{tabular} & \begin{tabular}{c} Text \\ VQA \\ \end{tabular} & \begin{tabular}{c} VizWiz \\ VQA \\ \end{tabular} & \begin{tabular}{c} ST \\ VQA \\ \end{tabular} & \begin{tabular}{c} OCR \\ VQA \\ \end{tabular} & \begin{tabular}{c} Info \\ VQA \\ \end{tabular} &
\begin{tabular}{c} Doc \\ QA \\ \end{tabular} & Avg. \\ \hline Split & test-dev & val & val & test-dev & val & test & test & test & test & - \\ \hline Previous Multi-task SOTA & 84.3 & 64.5 & 68.4 & 71.6 & 75.1 & 71.3 & 40.0 & 76.6 & 70.5 & - \\ \hline Single-task FT & **86.0** & **66.1** & **71.9** & **72.6** & **80.2** & **75.9** & 49.2 & 80.0 & **70.9** & - \\ Multi-task FT & 84.3 & 63.5 & 71.4 & 71.4 & 79.0 & 73.4 & **50.7** & **80.9** & 70.6 & - \\ Multi-task (+/-) & -1.7 & -2.6 & -0.5 & -1.2 & -1.2 & -2.4 & +1.5 & +0.9 & -0.3 & -0.8 \\ \hline \hline \end{tabular}
\end{table}
Table 21: Scores from multi-task finetuning compared with those from single-task finetuning for VQA. Validation or test-dev set numbers are reported for some tasks.
\begin{table}
\begin{tabular}{l c c c} \hline \hline Mixture & COCO & VQAv2 \\ \hline without ViT-VQGAN & 139.3 & 77.3 \\ with 10\% ViT-VQGAN & 139.7 & 77.1 \\ \hline \hline \end{tabular}
\end{table}
Table 23: Ablation experiment showing adding ViT-VQGAN tokens does not harm understanding performance (captioning and VQA tasks).
Additional results: Video Captioning and QA
Below we give a brief description of each video dataset we used for evaluation. Note that we freshly collected the data when performing the experiments, which led to different effective numbers of videos in different splits in some cases; see Table 24.
These descriptions refer to the original dataset size, but we train on (sometimes significantly) fewer videos -- the exact numbers are given in Table 24. This is because not all videos in the datasets were available online at the time of writing (e.g., due to user deletion).
### Datasets & Benchmarks
**MSR-VTT [55]:** This dataset consists of 10K open domain video clips for video captioning, with 20 captions each. The duration of each video clip is between 10 and 30 seconds. We follow the standard splits proposed by [55] and report results on the test set.
**VATEX [56]:** VATEX includes captions for 41K videos sampled from the Kinetics-600 dataset, with 10 English captions each. We report results on the English public test set.
**ActivityNet Captions [57]:** This dataset consists of 100K temporally localized sentences for 20k videos. We follow the standard split containing 50/25/25% of the dataset for training, validation and testing, and use ground truth temporal proposals at evaluation following [57]. Note that following other works [62], we use the val_1 split for validation and val_2 split for testing.
**Spoken Moments in Time (SMIT) [58]:** This dataset consists of long captions obtained via audio recordings for 500k short video clips. While this dataset has been traditionally only used for text to video retrieval, we find that it is a strong benchmark for captioning as it is the largest manually annotated set of videos with text captions.
**ActivityNet-QA [61]:** The dataset contains 58,000 question-answer pairs for videos in the ActivityNet dataset [97]. We report accuracy (using exact string match) on the test split. Note that we do open-ended generation for all VideoQA datasets.
**MSR-VTT-QA [60]:** This dataset was created using a semi-automatic pipeline on top of the MSR-VTT dataset. We report accuracy (using exact string match) on the test split.
**NExT-QA [59]:** We focus on the Open-Ended QA task, which consists of 52,044 question-answer pairs for a total of 5,440 videos (sampled from the VidOr dataset[98]). Exactly following Next-QA [59] and Flamingo [10], we report the Wu-Palmer Similarity (WUPS) on the test set.
Additional results: Image Classification
Setup for zero-shot and finetuning evaluationThe setup used for the experiments here uses the PaLI-X model to generate directly the (English) class name using the captioning prompt. The output is considered correct if it matches exactly the class name (apart from ImageNet-REAL, where we check if the class corresponding to the output is in the set of correct labels).
Zero-shot Evaluation resultsWe use the same scoring technique as in PaLI [5] to evaluate PaLI-X in zero-shot setting (without training on any Imagenet data). We use the PaLI-X model obtained after the first stage of training (using the base 224 image resolution).
The results are presented in Table 25. We compare the results to PaLI [5] - previous zero-shot generative SOTA, and Flamingo [10] - another generative model of similar architecture with comparable 1-shot and 5-shot results. Overall, we report that the results between PaLI and PaLI-X for 0-shot are similar.
FinetuningTo test image classification capabilities, we finetune PaLI-X on ImageNet [66] and evaluate the resulting model on ImageNet-REAL [67] and out-of-distribution datasets: ImageNet-R [68], ImageNet-A [69], ImageNet-Sketch [70], ImageNet-v2 [71].
We use the model from the first training stage (at resolution 224) and the one from the last training stage (at resolution 756). We use the same training hyperparameters for all runs (selected without any hyperparameter tuning).
The results can be seen in Table 26. We compare the results to a generative model with open vocabulary, GIT2 [9] (using 384 image resolution), which is the current SOTA for full fine-tuning on ImageNet. PaLI-X achieves close-to-SOTA results for generative models on ImageNet and the other datasets.
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline Model (resolution) & INet & REAL & INet-R & INet-A & INet-Sketch & INet-v2 \\ \hline GIT2 (384) & **89.22** & - & - & - & - & - \\ PaLI 3B (224) & 85.11 & 88.71 & **81.11** & 45.71 & 70.00 & 78.23 \\ PaLI 17B (224) & 86.13 & 88.84 & 78.21 & 50.00 & 71.21 & 78.91 \\ \hline PaLI-X (224) & 88.22 & 90.36 & 77.66 & 55.97 & 72.56 & 81.42 \\ PaLI-X (756) & 88.82 & 90.80 & 79.97 & **73.47** & **73.39** & 83.48 \\ PaLI-X \({}^{\dagger}\) (756) & 89.19 & **90.98** & 80.06 & 72.57 & 73.37 & **83.66** \\ \hline \hline \end{tabular}
\end{table}
Table 26: Classification (top-1) accuracy with Imagenet [66] fine-tuning on: ImageNet, ImageNet-REAL [67], ImageNet-R [68], ImageNet-A [69], ImageNet-Sketch [70], Imagenet-v2 [71] (resolution in parentheses). PaLI-X \({}^{\dagger}\) fine-tuned for 2.2x more steps.
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline Model (ImageNet data) & INet & REAL & INet-R & INet-A & INet-Sketch & INet-v2 & ObjNet \\ \hline Flamingo-80B (1-shot) & 71.9 & - & - & - & - & - \\ Flamingo-80B (5-shot) & 77.3 & - & - & - & - & - \\ PaLI (17B) (0-shot) & **72.11** & **76.43** & 81.97 & 44.70 & **63.83** & **64.46** & 42.62 \\ \hline PaLI-X (0-shot) & 71.16 & 75.75 & **82.96** & **46.13** & 61.58 & 63.91 & **44.58** \\ \hline \hline \end{tabular}
\end{table}
Table 25: Top-1 accuracy results of 0-shot image classification on ImageNet, ImageNet-REAL [67], ImageNet-R [68], ImageNet-A [69], ImageNet-Sketch [70], ImageNet-v2 [71] and ObjectNet [99].
Object Detection
### Object detection as a VLM task
Object detection is framed similarly to Pix2seq [72], with two key differences: the use of a natural language vocabulary, and class-conditioning. Prompt classes are fed to PaLI-X's text encoder, in the format detect class1 and class2 and class3. The model is trained to only output bounding boxes corresponding to classes in this prompt. We represent bounding boxes as coordinates in the same style as pix2seq [72]; that is, 4 integers \(\mathtt{y}_{\mathtt{min}}\)\(\mathtt{x}_{\mathtt{min}}\)\(\mathtt{y}_{\mathtt{max}}\)\(\mathtt{x}_{\mathtt{max}}\) ranging from 0 to 999. Figure 9 shows an example input.
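A sketch of the prompt construction and the pix2seq-style coordinate quantization described above; the exact rounding scheme and the placement of the class name after the coordinates are assumptions about details not fully specified here.

```python
def detection_prompt(class_names):
    """e.g. ['car', 'wheel'] -> 'detect car and wheel'."""
    return "detect " + " and ".join(class_names)

def quantize_box(y_min, x_min, y_max, x_max, height, width, num_bins=1000):
    """Map pixel coordinates to integer bins in [0, num_bins - 1]."""
    def q(value, size):
        return min(num_bins - 1, max(0, int(round(value / size * (num_bins - 1)))))
    return q(y_min, height), q(x_min, width), q(y_max, height), q(x_max, width)

def box_target_string(box_bins, class_name):
    """One detected object in the output sequence: 'y_min x_min y_max x_max class'."""
    return " ".join(str(b) for b in box_bins) + " " + class_name

# prompt = detection_prompt(["car", "wheel", "giraffe"])
# target = box_target_string(quantize_box(12, 40, 230, 310, 480, 640), "car")
```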
Prompt sampling hyperparametersDuring training, a prompt is constructed for each example. We construct prompts from three pieces of information:
* _Positives_: These are the bounding boxes for objects definitely present in the image. During training, per example we sample \(p^{+}\sim\mathcal{U}(0,P^{+}_{\max})\), and keep that proportion of positives.
* _Negatives_: These are the known instance negatives, i.e., bounding boxes for objects definitely not present. For exhaustively labelled datasets like COCO, this is simply classes not labelled as positives. For non-exhaustively labelled datasets like LVIS, these are the classes not labelled as positives, which were presented to raters. During training we sample \(f^{-}\sim\mathcal{U}(0,5.0)\), and use up to \(f^{-}\times n^{+}\) negatives, where \(n^{+}\) is the number of positives after sampling \(p^{+}\).
* _Global negatives_: These are negatives which are not explicitly labelled as negatives. They are taken from a wider label space combining multiple detection datasets. For a given example, valid global negatives consist of classes from the wider label space not explicitly labelled as positives or negatives. During training, we sample \(f^{GN}\sim\mathcal{U}(0,5.0)\) and append \(f^{GN}\times n^{+}\) global negatives, where \(n^{+}\) is the number of positives after sampling \(p^{+}\). By default, the combined label spaces of Visual Genome, Objects365 and OpenImagesV4 was used as the global label space, with the exception of detection finetuning, where LVIS and COCO label spaces were also added.
We truncate the total number of classes to \(n_{\text{max}}\). \(n_{\text{max}}\) and \(P^{+}_{\max}\) are tuned per dataset to fit the sequence-length limits. After truncation, we shuffle the classes in the prompt.
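The class-sampling recipe above can be summarized in a few lines; treating \(P^{+}_{\max}\) as a fraction in \([0,1]\), applying the truncation before shuffling, and breaking ties by list order are our assumptions.

```python
import random

def sample_prompt_classes(positives, negatives, global_negatives,
                          p_plus_max, n_max, rng):
    """Sketch of the per-example class sampling for the detection prompt.
    The 5.0 bounds follow the U(0, 5.0) ranges stated above."""
    p_plus = rng.uniform(0.0, p_plus_max)
    kept_pos = rng.sample(positives, int(round(p_plus * len(positives))))
    n_pos = len(kept_pos)

    f_neg = rng.uniform(0.0, 5.0)
    kept_neg = rng.sample(negatives, min(len(negatives), int(f_neg * n_pos)))

    f_gn = rng.uniform(0.0, 5.0)
    kept_gn = rng.sample(global_negatives,
                         min(len(global_negatives), int(f_gn * n_pos)))

    classes = (kept_pos + kept_neg + kept_gn)[:n_max]   # truncate to n_max
    rng.shuffle(classes)                                 # then shuffle
    return kept_pos, classes
```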
Figure 9: An example training pair, consisting of the text prompt, the image and the expected output. The prompt consists of multiple classes; we show a hypothetical Open Images V4 example, with positives ‘car’ and ‘wheel’, negative ‘giraffe’ and global negatives ‘mask’ and ‘coffee maker’ (sampled from the visual genome label space).
### Preprocessing
During pre-training, data is preprocessed to remove all LVIS-rare labels, following the protocol of OwlViT [28]. This is not done for detection finetuning. Images are randomly flipped horizontally, and randomly resized to between 0.3 and 2.0 \(\times\) their original size, followed by selecting a random square crop of the current training resolution. If the image is resized to be smaller than the current resolution, it is left as is. Images are finally padded to a square.
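The geometric part of this augmentation can be sketched without committing to a specific image library; how partially undersized images and the final padding size are handled is our assumption.

```python
import random

def random_resize_and_crop_params(height, width, train_res, rng):
    """Parameters for the augmentation described above: optional horizontal
    flip, random scale in [0.3, 2.0], a random square crop of side train_res
    (only when the resized image is large enough), then padding to a square."""
    flip = rng.random() < 0.5
    scale = rng.uniform(0.3, 2.0)
    new_h, new_w = int(round(height * scale)), int(round(width * scale))
    if new_h >= train_res and new_w >= train_res:
        top = rng.randint(0, new_h - train_res)
        left = rng.randint(0, new_w - train_res)
        crop = (top, left, train_res, train_res)
    else:
        crop = None   # resized image smaller than the crop: left as is
    pad_to = max(train_res, new_h, new_w)   # final square side (assumption)
    return {"flip": flip, "scale": scale, "size": (new_h, new_w),
            "crop": crop, "pad_to": pad_to}

# params = random_resize_and_crop_params(480, 640, train_res=672, rng=random.Random(0))
```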
### Licenses and attribution for images used in Main Text Figure 2
* Watermelon: Credit: Sarah Pflug [https://burst.shopify.com/photos/cutting-watermelon](https://burst.shopify.com/photos/cutting-watermelon).
* Bowls [https://www.flickr.com/photos/ariesandrea/502826051/](https://www.flickr.com/photos/ariesandrea/502826051/) CC-BY-NC-ND 2.0
* Business cat Credit: Sarah Pflug [https://burst.shopify.com/photos/business-cat-in-office](https://burst.shopify.com/photos/business-cat-in-office)
* Wall Credit: Matthew Henry [https://burst.shopify.com/photos/man-walking-in-front-of-this-is-paradise-wall?c=urban-life](https://burst.shopify.com/photos/man-walking-in-front-of-this-is-paradise-wall?c=urban-life)
|
2306.01564
|
Cosmologies with positive Lambda: Hierarchies of future behaviour
|
Smooth Cauchy data for the Einstein-Lambda-vacuum field equations with
positive cosmological constant Lambda that are sufficiently close to de Sitter
data develop into a solution that admits a smooth conformal boundary Scri+ in
its future. The conformal Einstein equations determine a smooth conformal
extension across Scr+ that defines on `the other side' again a Lambda-vacuum
solution. In this article we discuss to what extent these properties generalize
to the future asymptotic behaviour of solutions to the Einstein-Lambda
equations with matter. We study FLRW solutions and the Einstein-Lambda
equations coupled to conformally covariant matter transport equations, to
conformally privileged matter equations, and to conformally non-covariant
matter equations. We present recent results on the
Einstein-Lambda-perfect-fluid equations with a non-linear asymptotic dust or
asymptotic radiation equation of state.
|
Helmut Friedrich
|
2023-06-02T14:21:49Z
|
http://arxiv.org/abs/2306.01564v2
|
# Cosmologies with positive \(\lambda\): Hierarchies of future behaviour.
###### Abstract
Smooth Cauchy data on \(\mathbb{S}^{3}\) for the Einstein-\(\lambda\)-vacuum field equations with cosmological constant \(\lambda>0\) that are sufficiently close to de Sitter data develop into a solution that admits a smooth conformal boundary \(\mathcal{J}^{+}\) in its future. The _conformal Einstein equations_ determine a smooth conformal extension across \(\mathcal{J}^{+}\) that defines on 'the other side' again a \(\lambda\)-vacuum solution. In this article we discuss to what extent these properties generalize to the _future asymptotic behaviour of solutions to the Einstein-\(\lambda\) equations with matter_. We study FLRW solutions and the Einstein-\(\lambda\) equations coupled to conformally covariant matter transport equations, to conformally privileged matter equations, and to conformally non-covariant matter equations. We present recent results on the Einstein-\(\lambda\)-perfect-fluid equations with a non-linear _asymptotic dust_ or _asymptotic radiation_ equation of state.
## 1 Introduction
Roger Penrose suggested to discuss the asymptotic behaviour of space-times in terms of extensions of their conformal structure [24], [25]. The idea is that a _physical space-time_\((\hat{M},\hat{g}_{\mu\nu})\) may admit a smooth extension \((M,g_{\mu\nu},\Omega)\) where \(M\) is a smooth manifold
with boundary \({\cal J}\), \(g_{\mu\nu}\) a smooth Lorentz metric and \(\Omega\) a smooth function on \(M\), so that \(M=\hat{M}\cup{\cal J}\), \(\Omega>0\) and \(g_{\mu\nu}=\Omega^{2}\,\hat{g}_{\mu\nu}\) on \(\hat{M}\), while \(\Omega=0\) on the set \({\cal J}\) (referred to as _Scri_). It may not be easy to find for a given space-time \((\hat{M},\hat{g}_{\mu\nu})\) a suitable conformal factor \(\Omega\) and manifold extension Scri, but if it can be done, the construction provides precise and complete information about the asymptotic behaviour of the space-time in the neighbourhood of Scri.
In the first 20 years following the introduction of the concept, some related general observations became available, along with various discussions of fields and physical quantities on and near conformal boundaries that were assumed to be smooth, and detailed verifications of conformal boundaries for a number of important exact solutions with symmetries. After becoming acquainted with the idea I tried to understand to what extent the concept applied to general solutions of Einstein's equation
\[R_{\mu\nu}[\hat{g}]-\frac{1}{2}\,R[\hat{g}]\,\hat{g}_{\mu\nu}+\lambda\,\hat{g} _{\mu\nu}=\hat{T}_{\mu\nu}, \tag{1.1}\]
with cosmological constant \(\lambda>0\) and energy momentum tensor \(\hat{T}_{\mu\nu}\).
The conformal extension idea will be illustrated here by a discussion of FLRW models because some of their features will be important for us in the following. By these we understand space-times with manifold \(\mathbb{R}\times S\) and metric \(\hat{g}=-dt^{2}+a^{2}\,\hat{h}\), where \(a=a(t)\) is a scalar, \(S\) is a 3-dimensional manifold, and \(\hat{h}\) a t-independent Riemannian metric of constant curvature on \(S\) with Ricci scalar \(R[\hat{h}]=const.\geq 0\). The space-times are required to satisfy the Einstein-\(\lambda\)-perfect-fluid equations with flow field \(U=\partial_{t}\), total energy density \(\hat{\rho}(t)\), pressure \(\hat{p}(t)\), and an equation of state \(\hat{p}=w(\hat{\rho})\).
To discuss the solutions in terms of a conformal representation we use the rescalings
\[\hat{g}_{\mu\nu}\to g_{\mu\nu}=\Omega^{2}\,\hat{g}_{\mu\nu},\qquad\hat{U}^{ \mu}\to U^{\mu}=\Omega^{-1}\,\hat{U}^{\mu},\qquad\hat{\rho}\to\rho=\Omega^{- e}\,\hat{\rho},\]
with a conformal factor \(\Omega=\Omega(t)\) whose evolution is fixed by the requirements that the Ricci scalars of \(g\) and \(\hat{h}\) satisfy \(R[g]=R[\hat{h}]\) throughout the solution and \(a\,\Omega=1\), \(d(a\,\Omega)/dt=0\) on a given slice \(\{t=t_{*}\}\). The constant \(e\) will be determined later.
Then \(\Omega=a^{-1}\), and in terms of the coordinate \(\tau(t)=\tau_{*}+\int_{t_{*}}^{t}\,a^{-1}\,dt\) it follows that
\[g=-d\tau^{2}+\hat{h}.\]
With a linear equation of state \(w(\hat{\rho})=w_{*}\,\hat{\rho}\) where \(0\leq w_{*}=const.\leq 1/3\), the conformal analogues of the well known Friedmann and energy conservation equations that represent the content of (1.1) then read, with the dot denoting \(d/d\tau\),
\[(\dot{\Omega})^{2}=\frac{\lambda}{3}-\frac{R[\hat{h}]}{6}\,\Omega^{2}+\Omega^ {e}\,\frac{\rho}{3},\qquad\dot{\rho}=\Omega^{-1}\,(3+3\,w_{*}-e)\,\rho\,\dot{ \Omega}.\]
With initial data \(\Omega=\Omega_{*}>0\) and \(\rho=\rho_{*}\geq 0\) at \(\tau=\tau_{*}\) this implies
\[\rho=\rho_{*}\left(\frac{\Omega}{\Omega_{*}}\right)^{3+3\,w_{*}-e},\]
and thus, _independent of the choice of \(e\)_, we obtain the conformal Friedmann equation in the form
\[(\dot{\Omega})^{2}=\frac{\lambda}{3}-\frac{R[\hat{h}]}{6}\,\Omega^{2}+\frac{\rho_ {*}}{3}\left(\frac{\Omega}{\Omega_{*}}\right)^{3+3\,w_{*}}.\]
Let \(\Omega_{*}\) be small (that is \(a(t_{*})\) be large) so that the right hand side is positive for \(\Omega\leq\Omega_{*}\) and choose the sign of the square root and the parameter \(\tau\) so that \(\dot{\Omega}\) is decreasing while \(\tau\) is increasing. The equation can then be integrated until \(\Omega\to 0\) at some finite value \(\tau_{**}\) of the parameter. Since \(a\to\infty\) and \(t\to\infty\) as \(\Omega\to 0\), the hypersurface \(\mathrm{Scri}=\{\Omega=0\}\) defines a boundary of the physical space-time that represents future time-like infinity with respect to the physical metric \(\hat{g}_{\mu\nu}=\Omega^{-2}\,g_{\mu\nu}\) and that is space-like with respect to \(g_{\mu\nu}\).
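For a concrete picture of this integration, the ODE for \(\Omega(\tau)\) can be solved numerically until the event \(\Omega=0\) is reached at the finite value \(\tau_{**}\). The following is a minimal sketch; the parameter values \(\lambda=3\), \(R[\hat{h}]=6\), \(\rho_{*}=0.3\), \(w_{*}=1/3\), \(\Omega_{*}=0.1\) are illustrative assumptions, not prescribed by the text.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Assumed illustrative parameters (not fixed by the text).
lam, R_h, rho_star, w_star, Omega_star = 3.0, 6.0, 0.3, 1.0 / 3.0, 0.1

def rhs(tau, y):
    Omega = y[0]
    P = lam / 3 - R_h / 6 * Omega**2 + rho_star / 3 * (Omega / Omega_star) ** (3 + 3 * w_star)
    return [-np.sqrt(P)]            # decreasing branch of the square root

def hits_scri(tau, y):              # event: Omega = 0, the solution reaches Scri
    return y[0]
hits_scri.terminal = True

sol = solve_ivp(rhs, (0.0, 10.0), [Omega_star], events=hits_scri, rtol=1e-10, atol=1e-12)
print("Omega = 0 is reached at the finite conformal time tau_** =", sol.t_events[0][0])
```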
That the zero of the function \(\Omega\) is given here by a finite value of the conformal coordinate \(\tau\) helps to recognize subtle matter dependent differences in the asymptotic behaviour of \(a(t)\) as \(t\to\infty\). In the vacuum case \(\rho_{*}=0\) we get with \(t_{*}=\tau_{*}=0\), \(\lambda=3\), \(S=\mathbb{S}^{3}\), and \(\hat{h}=h_{\mathbb{S}^{3}}\) whence \(R[\hat{h}]=6\), the solution \(\Omega=\cos\tau\). Then \(t=\log\tan(\tau/2+\pi/4)\) and \(a=\Omega^{-1}=\cosh t\), which gives the maximally symmetric, geodesically complete, conformally flat _de Sitter solution_
\[\hat{M}=\mathbb{R}\times\mathbb{S}^{3},\qquad\hat{g}=-dt^{2}+\cosh^{2}t\,\hat{ h}. \tag{1.2}\]
It is here not so much important for us that the solution can be given explicitly but that we get precise information on the asymptotic behaviour of \(\Omega(\tau)\) near Scri. The solution of the ODE above extends smoothly to Scri and in fact beyond. While the 'physical' de Sitter metric is defined for \(t\in\mathbb{R}\), which is covered by \(\tau\) with \(-\pi/2<\tau<\pi/2\), the cyclic function \(\Omega(\tau)\) and the metric \(g\) are defined and smooth for \(\tau\in\mathbb{R}\), defining a sequence of (isometric) vacuum solutions which are separated by Scri's.
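These relations are easy to check numerically; the following minimal sketch (using only the formulas displayed above, with the vacuum values \(\lambda=3\), \(R[\hat{h}]=6\)) confirms that \(\Omega=\cos\tau\) solves the vacuum conformal Friedmann equation and that the coordinate change reproduces the de Sitter scale factor \(a=\cosh t=\Omega^{-1}\).

```python
import numpy as np

tau = np.linspace(-1.4, 1.4, 2001)           # a range inside (-pi/2, pi/2)
Omega = np.cos(tau)

# Vacuum conformal Friedmann equation with lambda = 3, R[h] = 6:
# (dOmega/dtau)^2 = 1 - Omega^2, satisfied by Omega = cos(tau).
residual = np.gradient(Omega, tau) ** 2 - (1.0 - Omega**2)
print("max ODE residual:", np.max(np.abs(residual)))     # small; limited by finite differences

# Coordinate change t = log tan(tau/2 + pi/4) gives a = cosh(t) = 1/Omega.
t = np.log(np.tan(tau / 2 + np.pi / 4))
print("max |cosh(t) - 1/Omega|:", np.max(np.abs(np.cosh(t) - 1.0 / Omega)))
```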
When \(\rho_{*}>0\) and \(0\leq w_{*}\leq 1/3\) different cases occur. If \(0<w_{*}<1/3\) there arises a smoothness and an extension problem. The solutions do approach the value \(\Omega=0\) but only a few derivatives of the function \((\Omega/\Omega_{*})^{3+3\,w_{*}}\) have a finite limit as \(\Omega\to 0\). Some solutions, such as the one obtained with \(w_{*}=1/9\) for instance, admit a unique extension into a range where \(\Omega<0\). It is smooth where \(\Omega\neq 0\) and drops smoothness as \(\Omega\to 0\). Others, like the one obtained for \(w_{*}=1/6\), do not admit an extension beyond \(\Omega=0\) as solution to the equation above.
However, in the case of _pure dust, where \(w_{*}=0\)_, or in the case of _(incoherent) pure radiation, where \(w_{*}=1/3\)_, the solutions extend smoothly to \(\Omega=0\) and beyond. (The unusual word _pure_ is added here to distinguish these cases clearly from related ones considered later). We set for convenience \(\Omega_{*}=1\), \(e=3+3\,w_{*}\) and consider the _case of pure dust with \(e=3\)_ and the _case of pure radiation with \(e=4\)_. We have then \(\rho=\rho_{*}=const.>0\) and get the conformal Friedmann equation in the form
\[(\dot{\Omega})^{2}=\frac{\lambda}{3}-\frac{R[\hat{h}]}{6}\,\Omega^{2}+\frac{ \rho_{*}}{3}\,\Omega^{e}.\]
The global solutions and the solution manifold look in these two cases as follows.
_Pure dust solutions_. Depending on the real roots of the polynomial
\[P(\Omega)=\frac{\lambda}{3}-\frac{R[\hat{h}]}{6}\,\Omega^{2}+\frac{\rho_{*}} {3}\,\Omega^{3}, \tag{1.3}\]
three cases occur (assuming in the following discussions suitable parameters \(\tau\) and signs of the square root of \(P\)).
(i) \(R[\hat{h}]^{3}>54\,\lambda\,\rho_{*}^{2}\): \(P\) has roots \(\Omega_{-}<0<\Omega_{1}<\Omega_{2}\). There are solutions that take values in \([\Omega_{2},\infty[\): they start from a Big Bang (where \(\Omega\to\infty\), resp. \(a\to 0\), at a finite value of the parameter), decrease, reach their minimum \(\Omega_{2}\) (i.e. \(a\) reaches a maximum), increase again and approach a Big Crunch where \(\Omega\to\infty\) at a finite value of the parameter.
The solutions that take values in \([\Omega_{-},\Omega_{1}[\) are cyclic, oscillating between \(\Omega_{1}\) and \(\Omega_{-}\) and passing through Scri's on the way. These approach the de Sitter solution as \(\rho_{*}\to 0\).
(ii) \(R[\hat{h}]^{3}=54\,\lambda\,\rho_{*}^{2}\): \(P\) has a root \(\Omega_{-}<0\) and a double root \(\Omega_{+}>0\). The solutions that take values in the domain \(]\Omega_{+},\infty[\) start from a Big Bang, decrease monotonically and approach the value \(\Omega_{+}\) without ever assuming it. Similarly, starting from their minimum value \(\Omega_{-}\), the solutions that take values in \([\Omega_{-},\Omega_{+}[\) are increasing, pass Scri's, and approach asymptotically the value \(\Omega_{+}\) in both directions.
(iii) \(0\leq R[\hat{h}]^{3}<54\,\lambda\,\rho_{*}^{2}\): \(P\) has one real root \(\Omega=\Omega_{-}<0\). The solutions start with \(\Omega>0\) from a Big Bang, decrease monotonically, pass through a Scri at a finite value of the parameter, become negative, assume their minimum \(\Omega_{-}\), increase again, pass through another Scri and approach a Big Crunch where \(\Omega\to\infty\).
This solution admits a smooth cyclic extension in the following sense. Before the limit \(\Omega\to\infty\) is achieved the function \(\omega=\Omega^{-1/2}\) is defined and satisfies
\[4\,(\dot{\omega})^{2}=\frac{\rho_{*}}{3}-\frac{R[\hat{h}]}{6}\,\omega^{2}+ \frac{\lambda}{3}\,\omega^{6}. \tag{1.4}\]
With a redefinition of the constants this becomes the conformal Friedmann equation above with \(w_{*}=1\) (_stiff matter equation of state_) and \(\lambda\) and \(\rho_{*}\) interchanged. The condition on \(R[\hat{h}]\) above ensures that the polynomial in \(\omega\) on the right hand side is positive everywhere and the equation can be integrated across \(\omega=0\). Where \(\omega<0\) the transformation \(\Omega=\omega^{-2}\) connects to a second copy of the solution above and the process can be repeated. If the parameter \(\tau\) is chosen so that \(\Omega(0)=\Omega_{-}\), whence \(\Omega(\tau)=\Omega(-\tau)\), the solution is given in terms of Jacobi's elliptic function \(cn(u,k)\)[19] by
\[\Omega(\tau)=\Omega_{-}+\Sigma\frac{1-cn(u(\tau),k)}{1+cn(u(\tau),k)}, \tag{1.5}\]
where \(u(\tau)=(\rho_{*}\,\Sigma/3)^{1/2}\,\tau\), the modulus is \(k=(R[\hat{h}]+2\,\rho_{*}\,\Sigma-4\,\rho_{*}\,\Omega_{-})^{1/2}\,(8\,\rho_{*} \,\Sigma)^{-1/2}\), \(\Sigma=(3\,\Omega_{-}^{2}-\Omega_{-}\,R[\hat{h}]/\rho_{*})^{1/2}\), and the root \(\Omega_{-}\) of \(P\), that satisfies \(\Omega_{-}<-R[\hat{h}]/6\,\rho_{*}\), is related to \(\lambda\) by \(\lambda-R[\hat{h}]\,\Omega_{-}^{2}/2+\rho_{*}\,\Omega_{-}^{3}=0\).
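The trichotomy above is governed entirely by the real roots of the cubic (1.3), and it can be checked directly; the following is a minimal numerical sketch with illustrative (assumed) parameter values.

```python
import numpy as np

def real_roots_of_P(lam, R_h, rho_star):
    # P(Omega) = lam/3 - R_h/6 * Omega^2 + rho_star/3 * Omega^3, cf. (1.3)
    coeffs = [rho_star / 3, -R_h / 6, 0.0, lam / 3]       # descending powers of Omega
    r = np.roots(coeffs)
    return np.sort(r[np.abs(r.imag) < 1e-9].real)

lam, R_h = 3.0, 6.0
# Case (i): R[h]^3 > 54*lam*rho_*^2 -> three real roots Omega_- < 0 < Omega_1 < Omega_2.
print("case (i):  ", real_roots_of_P(lam, R_h, rho_star=0.2))
# Case (iii): R[h]^3 < 54*lam*rho_*^2 -> a single real root Omega_- < 0.
print("case (iii):", real_roots_of_P(lam, R_h, rho_star=2.0))
# The borderline case (ii) is the transition where the two positive roots merge.
```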
_Pure radiation solutions_. Depending on the real roots of the polynomial
\[Q(\Omega)=\frac{\lambda}{3}-\frac{R[\hat{h}]}{6}\,\Omega^{2}+\frac{\rho_{*}}{ 3}\,\Omega^{4}, \tag{1.6}\]
the following cases occur.
(i) \(R[\hat{h}]^{2}>16\,\rho_{*}\,\lambda\): \(Q\) has four simple roots \(\Omega_{1}<\Omega_{2}<0<\Omega_{3}<\Omega_{4}\). There are solutions that take values in \([\Omega_{4},\infty[\), start from a Big Bang, achieve their minimum \(\Omega_{4}\), increase and end in a Big Crunch. There are similar solutions that take values in \(]-\infty,\Omega_{1}]\).
There are cyclic solutions that take values in \([\Omega_{2},\Omega_{3}]\), oscillate between \(\Omega_{2}\) and \(\Omega_{3}\), and pass through Scri's on the way. These solutions approach the de Sitter solution as \(\rho_{*}\to 0\).
(ii) \(R[\hat{h}]^{2}=16\,\rho_{*}\,\lambda\): \(Q\) has the two double roots \(\Omega_{-}<0<\Omega_{+}\). There is a strictly monotonic solution which takes values in \(]\Omega_{-},\Omega_{+}[\), passes through a Scri, and approaches the values \(\Omega_{-}\) and \(\Omega_{+}\) asymptotically. There are two strictly monotonic solutions that take values in \(]\Omega_{+},\infty[\) and in \(]-\infty,\Omega_{-}[\) respectively. The first one approaches at one end the value \(\Omega_{+}\) asymptotically and at the other end a Big Bang. The second solution is similar.
(iii) \(0\leq R[\hat{h}]^{2}<16\,\rho_{*}\,\lambda\): \(Q\) has no real root. The solution takes values in \(]-\infty,\infty[\). It starts from a Big Bang, decreases monotonically with \(\Omega>0\), passes through a Scri, decreases further, and reaches a Big Crunch where \(\Omega\to-\infty\).
Again, this solution admits a smooth cyclic extension in the following sense. With \(\omega=-\Omega^{-1}\) close to the Big Crunch the equation for \(\Omega\) gives
\[(\dot{\omega})^{2}=\frac{\lambda}{3}\,\omega^{4}-\frac{R[\hat{h}]}{6}\,\omega^{2}+\frac{\rho_{*}}{3}, \tag{1.7}\]
which is the original equation with the roles of \(\lambda\) and \(\rho_{*}\) swapped. It can be integrated beyond \(\omega=0\) and with \(\Omega=\omega^{-1}\) be connected to a second copy of the solution for \(\Omega\).
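Spelling out the step (not in the original text, but immediate from the chain rule): with \(\omega=-\Omega^{-1}\) one has \(\dot{\omega}=\Omega^{-2}\,\dot{\Omega}\), hence

\[(\dot{\omega})^{2}=\Omega^{-4}\,(\dot{\Omega})^{2}=\Omega^{-4}\left(\frac{\lambda}{3}-\frac{R[\hat{h}]}{6}\,\Omega^{2}+\frac{\rho_{*}}{3}\,\Omega^{4}\right)=\frac{\lambda}{3}\,\omega^{4}-\frac{R[\hat{h}]}{6}\,\omega^{2}+\frac{\rho_{*}}{3},\]

which is (1.7): the original equation with the roles of \(\lambda\) and \(\rho_{*}\) interchanged.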
If the parameter \(\tau\) is chosen so that \(\Omega(0)=0\) and \(\Omega(\tau)\) is increasing with increasing \(\tau\) near \(\tau=0\), the solution is given in terms of Jacobi's elliptic functions by
\[\Omega(\tau)=\sqrt{f}\,\,\frac{sn(u(\tau),k)}{cn(u(\tau),k)}\,\,\frac{k^{\prime}+dn(u(\tau),k)}{1+dn(u(\tau),k)}, \tag{1.8}\]
where \(u(\tau)=\sqrt{\rho_{*}/3}\,(\sqrt{f}+e)\,\tau\), \(f=\sqrt{\lambda/\rho_{*}}\), \(e=\sqrt{f/2+R[\hat{h}]/8\,\rho_{*}}\), \(k^{\prime}=\sqrt{1-k^{2}}\), \(k=2\,(e+\sqrt{f})^{-1}\,\sqrt{e\,\sqrt{f}}\).
The 'fine tuned' cases (ii) will not be of interest to us in the following. In the cases (iii) a Big Bang in the finite past (in terms of physical time) is connected with a space-like Scri in the infinite future. These cases admit the limit \(R[\hat{h}]\to 0\), which seems to be favoured by certain models which are en vogue at present. We shall be interested in the cases where \(R[\hat{h}]>0\) and focus on the ends where the solutions approach Scri.
Of course, in the standard interpretation of GR only a maximal connected set where \(\Omega>0\) will be considered as a physical solution. Its conformal extension may just be considered a fun game which works because the solutions are conformally flat and extremely simple. Heeding Tolman's admonition [34], which reads (with a slight variation) '\(\ldots\) we study FLRW models primarily in order to secure definite and relatively simple mathematical problems, rather than to secure a correspondence with known reality \(\ldots\)' we wonder: Do any of the observations above extend to more general situations? We are in particular interested in examples where the fluid flow is not forced to be geodesic and which are not
conformally flat so as to admit gravitational radiation, a concept that FLRW models allow to talk about only in terms of approximations.
Answers of any generality to the question above need global or semi-global results on suitable Cauchy problems for Einstein's field equations. Moreover, as shown by the subtleties discussed above, they require sharp control on the asymptotic behaviour of the solutions at least at future time-like infinity. At the time when Penrose put forward his proposal no such results were available. Until the early 1980's the understanding of the Cauchy problem for Einstein's equations was limited to existence results local in time.
## 2 Conformal field equations, stability results.
Finding initial data which develop into solutions to the Einstein equations that admit smooth conformal boundaries can pose subtle problems when the cosmological constant \(\lambda\) vanishes or is negative. Because the initial slices are not compact, choices have to be made about the fall-off behaviour of the data (see [10], [12]).
In the same article where Einstein introduced the cosmological constant, he came to the conclusion that cosmological solutions should be spatially compact [2]. In fact, in contrast to the FLRW solutions, which are defined by ODE's, there can be no other choice when the full field equations are involved. There are no natural boundary conditions for the evolution equations if \(\lambda>0\). If the initial slice is compact, which will be assumed in the following, only smoothness and smallness conditions can be imposed on 'general' Cauchy data. The following global non-linear stability result holds [4], [5].
_On a slice \(S=\{t=const.\}\sim\mathbb{S}^{3}\) of the de Sitter solution to Einstein's field equations (1.1) with \(\lambda=3\) and \(\hat{T}_{\mu\nu}=0\) consider smooth Cauchy data for these equations. If these data are (in terms of suitable Sobolev norms) sufficiently close to the de Sitter data on \(S\), they develop into solutions to (1.1) that are time-like and null geodesically complete and admit smooth space-like conformal boundaries at past and future time-like infinity._
Because the cosmological constant can be given any positive value by a conformal rescaling with a constant conformal factor, the precise value of \(\lambda\) is irrelevant here.
The technical basis of this result is a remarkable feature of the Einstein equations. While they are designed to determine a metric, they can be represented in terms of a conformal factor \(\Omega\), the conformal metric \(g_{\mu\nu}=\Omega^{2}\,\hat{g}_{\mu\nu}\) and certain tensor fields derived from them so that they imply with suitable gauge conditions equations that can be hyperbolic even where the conformal factor vanishes or becomes negative, i.e. beyond the domain where \(\hat{g}_{\mu\nu}\) is defined [3]. We refer to these equations as _conformal Einstein equations_. It should be noted that this name has subsequently also been used for conformal representations of the Einstein equations which were derived for other purposes and do not share the properties used below.
Conformal de Sitter space, given above by \(M=]-\pi/2,\pi/2[\times\mathbb{S}^{3},\ g=-d\tau^{2}+\hat{h}\), \(\Omega=\cos\tau\), is a solution to the conformal Einstein equations that extends smoothly, as a solution to the equations and with the same expressions for the metric and conformal factor, beyond the boundaries \(\{\tau=\pm\pi/2\}\) to all of \(\mathbb{R}\times\mathbb{S}^{3}\). Then \(\Omega<0\) on the slices \(\{\tau=\pm\pi\}\). Consider Cauchy data \(\bar{g}\), \(\bar{\Omega}\), \(\ldots\) for the conformal field equations on a slice \(\{\tau=\tau_{*}\}\) with \(|\tau_{*}|<\pi/2\) which are 'general' in the sense that symmetries are not necessarily
imposed. If these data are sufficiently close to the conformal de Sitter data induced on \(\{\tau=\tau_{*}\}\), general properties of hyperbolic equations [18] guarantee that the solution \(\bar{g}\), \(\bar{\Omega}\), \(\ldots\) to the conformal field equations which develops from the general data also exists on the domain \(\{|\tau|\leq\pi\}\) and the conformal factor \(\bar{\Omega}\) is negative on \(\{\tau=\pm\pi\}\). The conformal field equations then ensure that there exist two hypersurfaces \({\cal J}^{\pm}\subset\{|\tau|\leq\pi\}\) with \(\bar{\Omega}|_{{\cal J}^{\pm}}=0\), \(d\bar{\Omega}|_{{\cal J}^{\pm}}\neq 0\) that are space-like with respect to \(\bar{g}\) and sandwich a domain \(\hat{M}\subset\{|\tau|\leq\pi\}\) on which \(\bar{\Omega}>0\). Then \((\hat{M},\hat{g}=\bar{\Omega}^{-2}\,\bar{g})\) is the desired solution to the Einstein equations.
The fact that the set of all Cauchy data on \(\mathbb{S}^{3}\) for the Einstein vacuum equations with positive cosmological constant contains an open subset (in terms of suitable Sobolev norms) of data which develop into solutions that are conformally well behaved at future and past time-like infinity shows that the existence of smooth conformal boundaries can be a fairly general feature of solutions to Einstein's field equations. Besides a smallness condition there are no restrictions on the conformal Weyl tensor.
Recovering from the technical struggles that led to this insight, I began to wonder:
_If the field equations ensure a smooth future evolution of \(\Omega\) and \(g\) beyond \({\cal J}^{+}=\{\Omega=0\}\), so that they define in the future of \({\cal J}^{+}\) another 'physical' solution to Einstein's field equations with metric \(\hat{g}=\Omega^{-2}\,g\), and any gravitational radiation, represented by nonlinear perturbations of the conformal Weyl tensor, travels unimpeded across \({\cal J}^{+}\) into that domain, why should physics come to an end at the future conformal boundary \({\cal J}^{+}\)?_
This behaviour may be considered as just another quirk of the field equations, that should not be taken too seriously. The history of General Relativity shows, however, that Einstein's equations were often wiser than their solvers. Physicists made sense of the more exotic features of the solutions found by Schwarzschild and Friedmann only years after their discovery.
Though it can all be found in the article referred to above, I never explicitly speculated about this in public. Being a beginner, it would hardly have been taken seriously, in particular, because I had no answers to the questions:
\(-\)_What happens if there is matter around_?
\(-\)_How will matter behave in the far future_?
Our discussion of the FLRW solutions above has shown that different matter models, exemplified there by \(\rho_{*}\) and \(w_{*}\), may have quite diverse consequences. Moreover,
\(-\)_What could be the meaning of the solution 'on the other side' of \({\cal J}^{+}\)_?
In the stability result above the space-time defined by the metric \(\hat{g}=\Omega^{-2}\,g\)_on the other side of_\({\cal J}^{+}\) looks like a time reversed version of the space-time end 'on this side': from being infinitely extended at \({\cal J}^{+}\) its space sections begin to shrink. This is certainly quite different from our present idea about the beginning of a cosmological space-time. So I kept returning to the first two questions over the following years.
The first stability result generalizing the one above concerns the (in four space-time dimensions) conformally invariant Maxwell- or Yang-Mills-equations [6]. It shows:
_The nonlinear vacuum stability result outlined above generalizes to the coupled Einstein-\(\lambda\)-Maxwell-Yang-Mills equations. The perturbed solutions admit smooth conformal boundaries in the future and the past. The conformal field equations determine smooth conformal
extensions of the solutions beyond these boundaries_.
This result establishes a pattern for analyzing various other situations in which conformally covariant matter transport equations are coupled to the Einstein-\(\lambda\) equation. Christian Lubbe and Juan Valiente Kroon studied the Einstein-\(\lambda\)-perfect-fluid equations with the equation of state \(\hat{p}=1/3\,\hat{\rho}\) for pure (incoherent) radiation. These matter equations have in common with the Maxwell equations that the energy-momentum tensor is trace-free and the conformal matter equations have the same form as the 'physical' version. They show [21]:
_The FLRW-solutions with the equations of state of pure radiation and a smooth conformal boundary in the future are non-linearly future stable in the class of all Einstein-\(\lambda\)-perfect-fluid solutions with this equation of state. The perturbed solutions admit a smooth conformal boundary in the future and a smooth conformal extension beyond_.
A further example leading to a similar result is given by the Einstein equations coupled to the massless Vlasov matter equations [17].
In the FLRW models given by (1.5), (1.8) and the generalizations discussed above, forward Scri's and backward Scri's as well as Big Bangs and Big Crunches stand back to back in the conformal extensions. Motivated by the observations above, results on Paul Tod's ideas about isotropic singularities [22], [31], [32], [33], where the initial singularity is represented after a suitable conformal rescaling by a finite space-like set similar to a \({\cal J}^{+}\), and by his thoughts about the nature of entropy near the big bang, Roger Penrose proposed a cosmological model, referred to as _Conformal Cyclic Cosmology_ (CCC) [26]. It considers a chain of universes where a given universe, say \(U_{n}\), develops in its future a well defined \({\cal J}^{+}\), referred to now as _the crossover surface_, which is followed by another universe, \(U_{n+1}\), for which the crossover surface represents an isotropic singularity. The solutions (1.5), (1.8) can be used to create such situations.
This again gives rise to complicated questions. Strongly simplifying assumptions on \(U_{n}\) and \(U_{n+1}\) may provide situations where some kind of identification or glueing of the different ends leads to a picture as outlined above. It is not clear, however, that anything similar can be done with any degree of control if more general, conformally curved, solutions to Einstein's equations are considered. The precise nature of the transition from the infinite future of \(U_{n}\) to the beginning of \(U_{n+1}\) is unresolved so far. It should be brokered by a mechanism which guarantees a unique extension without any interference from outside, but not necessarily preserving the time reflection invariance of hyperbolic equations. This requires a closer look at the equations and the matter models from both sides of the crossover surface, generalizing perhaps the work initiated by Alain Bachelot [1].
## 3 Conformally non-covariant matter fields.
Finding such a mechanism, if something like it exists at all, requires among other things a sufficiently general and deep understanding of the behaviour of matter and the field equations at future time-like infinity. A number of authors analysed the future asymptotic behaviour of solutions to the Einstein-\(\lambda\)-perfect fluid equations with an homentropic flow, where the entropy is constant in space and time and the equation of state can be given in
the form \(\hat{p}=w(\hat{\rho})\) with some suitable function \(w\). Often they assume a linear equation of state \(\hat{p}=w_{*}\,\hat{\rho}\), \(w_{*}=const.\), and the additional condition \(0<w_{*}<1/3\), see [16], [23], [28], [29], [30]. The articles [20] and [28] study the future stability of FLRW space-times for more general classes of equations of state \(\hat{p}=w(\hat{\rho})\). All of this work was done in more conventional representations of the field equations in terms of which the questions of interest here may indeed be difficult to analyse. None of the authors above looked at these things in the way indicated above.
In the following I tried to generalize the kind of analysis begun above, hoping to answer the following question:
_Can there be achieved \(C^{\infty}\), \(C^{k}\), \(\ldots\), or at least in some sense uniquely extendible conformal structures at future time-like infinity for solutions to Einstein-\(\lambda\)-matter equations with conformally non-covariant matter field equations?_
### 3.1 Scalar fields.
There are two results pointing in that direction. In [27] H. Ringstrom studied the future stability of a very general class of Einstein-non-linear scalar field systems with a scalar field equation of the form
\[\hat{\nabla}_{\mu}\hat{\nabla}^{\mu}\phi-\left(m^{2}\,\phi+V^{\prime}(\phi) \right)=0, \tag{3.1}\]
where \({}^{\prime}=\partial/\partial\phi\), and an energy momentum tensor
\[\hat{T}_{\mu\nu}=\hat{\nabla}_{\mu}\phi\,\hat{\nabla}_{\nu}\phi-\left(\frac{ 1}{2}(\hat{\nabla}_{\rho}\phi\,\hat{\nabla}^{\rho}\phi+m^{2}\,\phi^{2})+V(\phi )\right)\hat{g}_{\mu\nu}, \tag{3.2}\]
with a potential of the form \(V(\phi)=\phi^{3}\,\mu+\phi^{4}\,U(\phi)\), where \(\mu\) is a constant and \(U\) a smooth real-valued function. In [9] a special case has been considered, with the following result.
_If \(\mu=0\) and \(3\,m^{2}=2\,\lambda\), the coupled Einstein-\(\lambda\)-scalar-field equations (1.1), (3.1), (3.2) imply a reduced system of conformal field equations for the unknowns \(\Omega\), \(g_{\mu\nu}=\Omega^{2}\,\hat{g}_{\mu\nu}\), \(\psi=\Omega^{-1}\,\phi\) and some tensor fields derived from them. In a suitable gauge this is hyperbolic for any sign of \(\Omega\). Smooth Cauchy data for this system can be prescribed with \(\Omega=0\) on a compact \(g\)-space-like 3-manifold \({\cal J}^{+}\). The development of these data backwards in time is smooth and induces on a space-like slice \(S\) (in the corresponding physical space-time) smooth standard Cauchy data \(\Delta_{0}\) for the coupled Einstein-\(\lambda\)-scalar-field system. For \(\Delta_{0}\) there exists (in terms of Sobolev norms) an open neighbourhood of Cauchy data on \(S\) for this system so that the future development of any smooth data in this neighbourhood admits a smooth conformal extension beyond the respective future time-like infinity._
We recall that the conformally covariant scalar operator \(\phi\to L_{\hat{g}}\phi=(\square_{\hat{g}}-\frac{1}{6}\,R[\hat{g}])\phi\) with \(\square_{\hat{g}}=\hat{\nabla}_{\mu}\hat{\nabla}^{\mu}\) satisfies \(L_{\Omega^{2}\hat{g}}(\Omega^{-1}\phi)=\Omega^{-3}\,L_{\hat{g}}\phi\). With the trace of the energy momentum tensor (3.2) given by
\[\hat{T}=-\hat{\nabla}_{\mu}\phi\,\hat{\nabla}^{\mu}\phi-2\,m^{2}\,\phi^{2}-4\, V(\phi)=-R[\hat{g}]+4\,\lambda,\]
equation (3.1) can be expressed in terms of the conformally covariant wave operator. In terms of the conformal fields \(\psi=\Omega^{-1}\,\phi\) and \(g_{\mu\nu}=\Omega^{2}\,\hat{g}_{\mu\nu}\) this gives the conformal version
of the equation in the form
\[L_{g}\psi=\Omega^{-1}\left(m^{2}-\frac{2}{3}\,\lambda\right)\psi+\Omega^{-2}\,V^ {\prime}(\Omega\,\psi)-\frac{2}{3}\,\Omega^{-1}\,\psi\,V(\Omega\,\psi)\]
\[-\frac{1}{6}\,\Omega\,\psi\,\nabla_{\mu}(\Omega\,\psi)\,\nabla^{\mu}(\Omega\, \psi)-\frac{1}{3}\,m^{2}\,\Omega\,\psi^{3}.\]
Thus (3.1) is not conformally covariant with our conditions but _conformally regular_ in the sense that there will remain no \(\Omega^{-1}\) terms on the right hand side if the conditions of the theorem above are taken into account. This is a property it shares with the Einstein-\(\lambda\) vacuum field equations. The energy momentum tensor is not trace free but satisfies
\[\hat{T}=-\Omega^{2}\left(\nabla_{\mu}(\Omega\,\psi)\,\nabla^{\mu}(\Omega\, \psi)+\frac{4}{3}\lambda\,\psi^{2}+4\,\Omega^{-2}\,V(\Omega\,\psi)\right)\to 0 \,\,\,\mbox{as}\,\,\,\Omega\to 0,\]
_if \(g_{\mu\nu}\) and \(\psi\) extend smoothly_. It should be mentioned that the hyperbolic reduced conformal field equations considered above as well as those considered in the following preserve the constraints implied by the conformal field equations. In the case of the present system, the proof of this fact is far from immediate.
### 3.2 Pure dust cosmologies.
The flow fields \(\hat{U}\) of Einstein-\(\lambda\)-perfect-fluid solutions with pure dust equation of state
\[\hat{p}=0,\]
arising from data on a compact 3-manifold \(S\) where \(\hat{\rho}>0\) are often interpreted as representing the cosmic flow of galaxies. The field equations imply that the trace of the corresponding energy momentum tensor satisfies
\[\hat{T}=-\hat{\rho}\neq 0.\]
In the case of the conformally flat FLRW models we have seen that such solutions can develop smooth conformal boundaries. Whether this is also possible if the conformal Weyl tensor does not vanish is not obvious (see [16]). In [11] it has been shown:
_Consider a FLRW solution to the Einstein-\(\lambda\)-perfect-fluid equation with pure dust equation of state arising from data on a Cauchy hypersurface \(S\sim\mathbb{S}^{3}\). If it admits a smooth conformal boundary at future time-like infinity, then any smooth general set of Cauchy data on \(S\) for the same equation which is sufficiently close (in terms of Sobolev norms) to the FLRW-data develops into a solution that is time-like and null geodesically future complete, admits a smooth conformal boundary in its infinite future, where \(\Omega\to 0\), and extends as a smooth solution to the conformal field equations into a domain where \(\Omega<0\)._
The unknowns \(g_{\mu\nu}=\Omega^{2}\,\hat{g}_{\mu\nu}\) and \(\rho=\Omega^{-3}\,\hat{\rho}\) in the conformal field equations then remain bounded as \(\Omega\to 0\) and it follows that
\[\hat{T}=-\Omega^{3}\rho\to 0\quad\mbox{as}\quad\ \Omega\to 0.\]
Assuming initial data so that \(\hat{\rho}>0\), the equation for the flow vector field \(\hat{U}^{\mu}\) reduces in the case of pure dust to the equation
\[\hat{U}^{\mu}\,\hat{\nabla}_{\mu}\hat{U}^{\nu}=0,\]
and thus to an ODE. In terms of the conformal fields it takes the form
\[0=U^{\mu}\,\nabla_{\mu}\,U_{\nu}-\Omega^{-1}\,(g^{\mu}\,_{\nu}+U^{\mu}\,U_{\nu })\nabla_{\mu}\Omega,\]
and the \(\Omega^{-1}\) term, which reflects the conformal non-covariance of the system, spreads into other equations of the conformal system.
_The fact that the flow is geodesic combined with the particular structure of the energy momentum tensor of a perfect fluid allows us to make in this particular case contact with and exploit some conformally invariant structure_.
Let \(\hat{g}\), \(\hat{U}\), \(\hat{\rho}>0\) satisfy the Einstein-\(\lambda\)-pure-dust equations so that \(\hat{U}^{\mu}\,\hat{\nabla}_{\mu}\hat{U}^{\nu}=0\). If the functions \(r\) and \(q\) satisfy
\[0\neq\hat{\nabla}_{\hat{U}}r=q\,r,\qquad 0\neq\hat{\nabla}_{\hat{U}}q=-\frac{ 1}{2}\,q^{2}+(\lambda/6-\hat{\rho}/3)\,r,\]
the curve \(x(s)\) with \(\frac{d}{ds}x=V=r\,\hat{U}\) and the 1-form \(b=\hat{g}(q\,\hat{U},\,\cdot\,)\) satisfy the _conformal geodesic equations_
\[\hat{\nabla}_{V}V+2\,V<b,V>-\,\hat{g}(V,V)\,b^{\#}=0,\]
\[\hat{\nabla}_{V}b-<b,V>b+\frac{1}{2}\,\hat{g}(V,\,\cdot\,)\,\hat{g}^{\#}(b,b) =\hat{L}(V,\,\cdot\,),\]
where \(\hat{L}\) denotes the Schouten-tensor of \(\hat{g}\). The point of this observation is that the conformal geodesic equations are conformally invariant and the conformal geodesics are invariants of the conformal structure [8]. This was used in [11] to regularize the conformal Einstein-\(\lambda\)-pure-dust system near future time-like infinity.
The pure dust model is thus still _conformally privileged_.
### 3.3 Equations of state with prescribed asymptotic behaviour.
In the articles cited above that study the future behaviour of cosmological solutions to the Einstein-\(\lambda\)-perfect fluid equations on the basis of linear equations of state, \(\hat{p}=w_{*}\,\hat{\rho}\), \(w_{*}=const\), no arguments are given for this choice. It appears to be rather a matter of convenience instead of being motivated by a deep understanding of its physical role. While solutions to the Einstein-\(\lambda\)-perfect-fluid equations may provide good cosmological models on large scales, it seems fairly unlikely that linear equations of state represent natural requisites from the Big Bang to future time-like infinity. Models of the universe that behave at late times like a pure dust or a pure radiation FLRW model or one of their conformally curved generalizations may require transitions of the form
\[\hat{p}=w^{**}(\hat{\rho})\,\hat{\rho},\]
with a function \(w^{**}(\hat{\rho})\) that satisfies, consistent with our earlier requirements, \(0\leq w^{**}(\hat{\rho})\leq 1/3\) and assumes the value \(w^{**}(\hat{\rho})=0\) or \(w^{**}(\hat{\rho})=1/3\) at late times. There is nothing, however, which would fix a notion of 'late time'. The only meaningful requirement would be that these values are approximated in the limit when the space-time approaches future time-like infinity. The equation of state would then still need to recognize, however, where and when this limit will be achieved.
In the cases \((iii)\) of the FLRW models discussed above the physical density \(\hat{\rho}\) and the conformal density \(\rho\) satisfy a relation of the form
\[\hat{\rho}=\Omega^{e}\rho. \tag{3.3}\]
In those cases we had \(\rho=\rho_{*}=const.>0\), \(e=const.>0\) whence \(\hat{\rho}\to 0\) as \(\Omega\to 0\) and \(\hat{\rho}\to\infty\) as \(\Omega\to\infty\). The behaviour of \(\hat{\rho}\) can thus be understood as an indicator for the approach to the infinite future or the Big Bang. In generalizing the situation we shall keep (3.3), hoping that it will serve the indicator function at least in the far future where \(\Omega\to 0\), but we may have to give up the relation \(\rho=\rho_{*}=const.\) We could think of generalizing (3.3) by assuming \(e\) to be a function. In this article we will only consider the situations near the end where \(\Omega\to 0\) and try to keep (3.3) with \(e=const.>0\). This will work well as long as \(\rho\) can be guaranteed to stay positive and bounded as \(\Omega\to 0\).
The pure dust and the pure radiation equations of state are now generalized as follows. An _asymptotic dust equation of state_ is given by a function of the form
\[\hat{p}=w(\hat{\rho})=\left(\hat{\rho}^{k}\;w^{*}(\hat{\rho})\right)\hat{\rho }\quad\mbox{with some}\quad k\in\mathbb{N}, \tag{3.4}\]
combined with (3.3) where \(e=3\). It implies with the notation \({}^{\prime}=\partial/\partial\hat{\rho}\)
\[w^{\prime}(\hat{\rho})=\hat{\rho}^{k}\;\left\{(1+k)\,w^{*}+\hat{\rho}\;(w^{* })^{\prime}\right\}. \tag{3.5}\]
An _asymptotic radiation equation of state_ is given by a function of the form
\[\hat{p}=w(\hat{\rho})=\left(\frac{1}{3}-\hat{\rho}^{k}\;w^{*}(\hat{\rho}) \right)\hat{\rho}\quad\mbox{with some}\quad k\in\mathbb{N}, \tag{3.6}\]
combined with (3.3) where \(e=4\). It implies
\[w^{\prime}(\hat{\rho})=\frac{1}{3}-\hat{\rho}^{k}\left\{(1+k)\,w^{*}+\hat{ \rho}\,(w^{*})^{\prime}\right\}. \tag{3.7}\]
In both cases \(w^{*}(\hat{\rho})\) is assumed to be a smooth function defined for all values of \(\hat{\rho}\) that satisfies
\[0<w^{*}(\hat{\rho})<c=const.\]
The positivity is required to clearly distinguish the asymptotic from the pure cases. Limits \(w^{*}(\hat{\rho})\to 0\) give back the pure dust and the pure radiation equations of state.
The factors \(\hat{\rho}^{k}\) with positive \(k\) have been included in the definitions as a simple means to control the speed at which the pure dust or the pure radiation situation is approximated as \(\hat{\rho}\to 0\).
The conditions on \(w^{*}\) may appear crude but they suffice for analysing the effect of the intended modifications of the equations of state in domains where \(\hat{\rho}\) becomes small. For \(\hat{\rho}\) positive but sufficiently close to zero, the range we are interested in, the terms in curly brackets in (3.5) and (3.7) are positive. It follows that in the case of asymptotic dust the speed of sound \(w^{\prime}(\hat{\rho})\) is positive for \(\hat{\rho}>0\) and \(w^{\prime}(\hat{\rho})\to 0\) as \(\hat{\rho}\to 0\). In the case of asymptotic radiation \(w^{\prime}(\hat{\rho})>0\) holds for \(\hat{\rho}\geq 0\), and \(w^{\prime}(\hat{\rho})\) will remain positive if the solution can be smoothly extended into a domain where \(\Omega<0\).
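The derivative formulas (3.5) and (3.7) themselves can be confirmed symbolically; a minimal sketch (sympy, with \(w^{*}\) left as an arbitrary smooth function) is the following.

```python
import sympy as sp

rho, k = sp.symbols("rho k", positive=True)
wstar = sp.Function("wstar")

# Asymptotic dust, cf. (3.4): w(rho) = rho^k * wstar(rho) * rho
w_dust = rho**k * wstar(rho) * rho
claim_dust = rho**k * ((1 + k) * wstar(rho) + rho * sp.diff(wstar(rho), rho))
print(sp.simplify(sp.diff(w_dust, rho) - claim_dust))     # 0, confirming (3.5)

# Asymptotic radiation, cf. (3.6): w(rho) = (1/3 - rho^k * wstar(rho)) * rho
w_rad = (sp.Rational(1, 3) - rho**k * wstar(rho)) * rho
claim_rad = sp.Rational(1, 3) - rho**k * ((1 + k) * wstar(rho) + rho * sp.diff(wstar(rho), rho))
print(sp.simplify(sp.diff(w_rad, rho) - claim_rad))       # 0, confirming (3.7)
```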
We note that _the Cauchy problems local in time for Einstein-\(\lambda\)-perfect fluids with asymptotic dust or radiation equations of state pose no problems where \(\hat{\rho}\) is sufficiently small_. This follows from the results of [7], [15] where only weak conditions on the equation of state are assumed.
The principal parts of the matter equations are affected by the equations of state above with the consequence that _any conformal covariance or privilege is lost_. Definition (3.4) implies
\[\hat{T}=\hat{g}^{\mu\nu}\,T_{\mu\nu}=3\,w(\hat{\rho})-\hat{\rho}=-\Omega^{3} \,\rho+3\,\Omega^{3+3\,k}\,\rho^{1+k}\,w^{*}(\Omega^{3}\,\rho),\]
while definition (3.6) gives
\[\hat{T}=-3\,\Omega^{\,4+4\,k}\,\rho^{1+k}\,w^{*}(\Omega^{4}\,\rho).\]
In both cases \(\hat{T}\neq 0\) if \(\rho>0\), and \(\hat{T}\to 0\) as \(\Omega\to 0\) if \(\rho\)_remains bounded in this limit._ As seen below, this last condition will be met in the case of an asymptotic radiation equation of state while it is not clear whether this can be guaranteed also in the case of an asymptotic dust equation of state.
The conditions on the admissible values of \(k\) required above can be weakened if the equations of state above are considered in the conformal analogues of the Friedmann and energy conservation equation. In the case of asymptotic dust the system reads
\[(\dot{\Omega})^{2}=\frac{\lambda}{3}-\frac{R[\hat{h}]}{6}\,\Omega^{2}+\Omega^{3}\,\frac{\rho}{3},\qquad\dot{\rho}=3\,\Omega^{3\,k-1}\,\rho^{1+k}\,w^{*}(\Omega^{3}\,\rho)\,\dot{\Omega},\]
in the case of asymptotic radiation the system reads
\[(\dot{\Omega})^{2}=\frac{\lambda}{3}-\frac{R[\hat{h}]}{6}\,\Omega^{2}+\Omega^{4}\,\frac{\rho}{3},\qquad\dot{\rho}=-3\,\Omega^{4\,k-1}\,\rho^{1+k}\,w^{*}(\Omega^{4}\,\rho)\,\dot{\Omega}.\]
For suitable \(k\) and initial data these equations can be smoothly integrated across \(\Omega=0\) with \(\dot{\Omega}<0\) and \(\rho\) bounded and positive.
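For illustration, the radiation system can be integrated numerically through \(\Omega=0\); the sketch below assumes \(k=1\), a constant \(w^{*}=0.1\), and the illustrative values \(\lambda=3\), \(R[\hat{h}]=6\), \(\Omega(0)=0.5\), \(\rho(0)=1\) (none of these are prescribed by the text).

```python
import numpy as np
from scipy.integrate import solve_ivp

lam, R_h, k, w_star_const = 3.0, 6.0, 1, 0.1       # assumed illustrative values

def rhs(tau, y):
    Omega, rho = y
    dOmega = -np.sqrt(lam / 3 - R_h / 6 * Omega**2 + Omega**4 * rho / 3)
    drho = -3 * Omega ** (4 * k - 1) * rho ** (1 + k) * w_star_const * dOmega
    return [dOmega, drho]

sol = solve_ivp(rhs, (0.0, 1.2), [0.5, 1.0], dense_output=True, rtol=1e-10, atol=1e-12)
for tau in (0.0, 0.4, 0.8, 1.2):
    Omega, rho = sol.sol(tau)
    print(f"tau={tau:.1f}  Omega={Omega:+.4f}  rho={rho:.4f}")
# Omega decreases smoothly through 0 at a finite tau while rho stays positive and bounded.
```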
#### On the equations with an asymptotic dust equation of state.
To illustrate the kind of difficulties which arise in the case of an asymptotic dust equation of state we consider one of the equations contained in the complete system of conformal field equations. If the latter are written in terms of an orthonormal frame \(e_{k}\) with \(e_{0}=U\), the connection coefficient \(f_{a}=U^{\mu}\,e^{\nu}\,_{a}\,\nabla_{\mu}U_{\nu}\) is subject to the evolution equation
\[e_{0}(f_{a})-w^{\prime}\,e_{a}(\chi_{c}\,^{c})+\chi_{ac}\,f^{c}+(1-3\,w^{ \prime})\,L_{a0}-w^{\prime}\,\chi_{c}\,^{c}\,f_{a}\]
\[=(1-3\,w^{\prime})\,(\Omega^{-1}\,\nabla_{0}\Omega\,f_{a}+\Omega^{-1}\,\chi_{ab}\, \nabla^{b}\Omega-\Omega^{-2}\,\nabla_{0}\Omega\,\nabla_{a}\Omega)\]
\[-w^{\prime\prime}\,\frac{\hat{\rho}+w}{w^{\prime}}\,(\chi_{c}{}^{c}\,f_{a}-3\, \Omega^{-1}\,\nabla_{0}\,\Omega\,f_{a}-\Omega^{-1}\,\chi_{c}{}^{c}\,\nabla_{a} \Omega+\Omega^{-2}\,\nabla_{0}\Omega\,\nabla_{a}\Omega),\]
where \(\Omega\), the connection coefficients \(\chi_{ab}=\nabla_{a}U_{b}\) and \(\chi_{c}{}^{c}=\nabla_{k}U^{k}\), and the Schouten tensor \(L_{ik}\) obey further evolution equations. In the case of pure dust the third line is not present. In that case \(w^{\prime}=0\) and the second line contains factors \(\Omega^{-1}\). Even if the third line is ignored, the arguments indicated above in the case of pure dust would not apply in the present case, because for \(w^{*}\neq 0\) the flow equation remains a partial differential equation that cannot be related to the equations of conformal geodesics. The most complicated term in the equation is, however, the first factor in the third line. This is not even defined in the case of pure dust. The \(\Omega^{-1}\) terms cannot be compensated in this equation by suitable choices of \(k\). We leave this case open.
#### Solutions with an asymptotic radiation equation of state.
In the case of an asymptotic radiation equation of state the situation is quite different. Consider the equation above again. All the coefficients introduced by the radiation equation of state are well defined and approach in the limit \(w^{*}\to 0\) the corresponding value in the case of pure radiation. Moreover, the terms in the equation above that contain factors \(\Omega^{-1}\) come with the coefficients
\[1-3\,w^{\prime}=3\,(\Omega^{4}\,\rho)^{k}\,\bigl{(}(k+1)\,w^{*}+\Omega^{4}\, \rho\,(w^{*})^{\prime}\bigr{)}\quad\mbox{ and }\]
\[w^{\prime\prime}\,\frac{\hat{\rho}+w}{w^{\prime}}=-(\Omega^{4}\,\rho)^{k}\, \bigl{(}4-3\,\hat{\rho}^{k}\,w^{*}\bigr{)}\,\frac{k\,(1+k)\,w^{*}+2\,(1+k)\, \hat{\rho}\,(w^{*})^{\prime}+\hat{\rho}^{2}\,(w^{*})^{\prime\prime}}{1-3\,(1+ k)\,\hat{\rho}^{k}\,w^{*}+3\,\hat{\rho}^{1+k}\,(w^{*})^{\prime}}.\]
By a suitable choice of \(k\) the factors \(\Omega^{4\,k}\) thus allow us to make up for any negative power of \(\Omega\).
The situation is similar with the other equations of the conformal system. Moreover, the asymptotic radiation equation of state affects the principal part of the system in such a way that one can still extract from the complete system, in a suitable gauge, a reduced system that is symmetric hyperbolic irrespective of the sign of \(\Omega\).
Solutions that admit smooth conformal extensions at future time-like infinity can now be constructed from data for the conformal field equations which are given on a space-like hypersurface \(S\) in the 'physical domain', where \(\Omega>0\), or from data on the hypersurface \(S=\{\Omega=0\}\) that represents future time-like infinity. To check that the data satisfy the constraints induced on \(S\), and possibly the additional requirements implied by the assumption that \(\Omega=0\) on \(S\), these conditions are best expressed in terms of the unit normal to \(S\) and then transformed into a frame \(e_{k}\) with \(e_{0}=U\), whereby the evolution equations need to be used as well. Unless the field \(U\) is assumed to be orthogonal to \(S\) this involves some fairly tedious calculations (see [14], where the presence of a boundary requires them). Since the latter give limited insight they are skipped here. After the Cauchy problem has been solved for the given data it follows by standard arguments that the constraints and thus the complete system of conformal field equations will be solved as well (see [11], [15] for detailed discussions). We can now state the following results [13].
_The Einstein-\(\lambda\)-perfect-fluid equations with an asymptotic radiation equation of state where \(k\geq 1\) induce in a suitable gauge a reduced system of the conformal Einstein-\(\lambda\)-perfect-fluid equations that is symmetric hyperbolic irrespective of the sign of \(\Omega\)._
_On a compact 3-dimensional manifold \({\cal J}\) one can construct smooth Cauchy data for the reduced conformal equations with \(\Omega=0\), \(U\) time-like future directed orthogonal to \({\cal J}\), and \(<U,d\Omega><0\) that satisfy the constraints induced by the conformal field equations and the special requirements on a space-like hypersurface on which \(\Omega=0\)._
_These data determine a smooth solution to the reduced equations with \(U\) hypersurface orthogonal, \(\Omega<0\) in the future of \({\cal J}\) and \(\Omega>0\) in the past of \({\cal J}\). In the latter domain the solution defines a unique solution to the Einstein-\(\lambda\)-perfect-fluid equations with an asymptotic radiation equation of state that is time-like geodesically future complete and for which \({\cal J}\) represents a conformal boundary at the infinite time-like future._
_Let \(S\) be a Cauchy hypersurface for this solution in the past of \({\cal J}\) and denote by \(\Delta\) the Cauchy data induced by that solution on \(S\). Any Cauchy data \(\Delta^{\prime}\) on \(S\) for the same equations which are sufficiently close to \(\Delta\) develop into a solution that is also time-like geodesically future complete, admits a smooth conformal boundary in the future, and a smooth conformal extension beyond._
We note that the Cauchy hypersurface \(S\) above is not required to be orthogonal to \(U\) and that the flow vector field comprised by the data \(\Delta^{\prime}\) on \(S\) is not required to satisfy the condition of hypersurface orthogonality.
### 3.4 Concluding remarks.
If \(k\) is large, the terms involving \(w^{*}\) in (3.4) and (3.6) may, when \(\Omega\to 0\), look like minor perturbations of the pure dust or pure radiation equations of state. We have seen, however, that the effects of these terms are quite different in the two cases.
When \(\Omega\) becomes small the term involving \(w^{*}\) may in the case of the asymptotic radiation equations of state indeed be considered already for \(k=1\) as a minor perturbation relative to the dominating first term on the right hand side of (3.6). This is apparently sufficient to preserve asymptotically the effects corresponding to the conformal covariance of the pure radiation equation of state.
In contrast, the transition from the pure to the asymptotic dust equation of state represented by the term involving \(w^{*}\), comes with a drastic change of the principal part of the matter equations by which the conformal privilege of the pure dust case is lost completely even if \(k\) is large.
One could hope to simplify the analysis of this case by disentangling the two problems that are possibly interfering in the asymptotic dust case at future time-like infinity. Let the function \(w^{*}\) be modified so that \(w^{*}(\hat{\rho})=0\) precisely if \(\hat{\rho}=\Omega^{3}\,\rho\) falls below a certain positive threshold \(\hat{\rho}_{*}\). For the sake of discussion assume that the set \(\{\hat{\rho}=\hat{\rho}_{*}\}\) defines a space-like Cauchy hypersurface with the set \(\{\hat{\rho}>\hat{\rho}_{*}\}\), on which we have an asymptotic dust equation of state, lying in its past and the set \(\{\hat{\rho}<\hat{\rho}_{*}\}\), on which we have a pure dust equation of state, lying in its future. For this picture to make sense one has to decide whether the space-time evolution extends with a sufficient degree of smoothness across the set \(\{\hat{\rho}=\hat{\rho}_{*}\}\) where the change of principal part takes place which reduces the PDE
for the flow field to an ODE. Since this concerns the physical domain, one could expect the answer to follow from the analysis of the fluid equations in [7], [15]. If the answer is positive, the problem of asymptotic smoothness at time-like infinity only concerns the fields on \(\{\hat{\rho}<\hat{\rho}_{*}\}\). This is the situation considered in [11]. The question of interest now is whether the analyses at the set \(\{\hat{\rho}=\hat{\rho}_{*}\}\) and of the behaviour at time-like infinity can be combined to clarify what happens if the value \(\hat{\rho}_{*}\) of the threshold is lowered to eventually perform the limit \(\hat{\rho}_{*}\to 0\).
In the light of the preceding discussions I consider this question particularly interesting. While it may not inform us about the final nature of the matter fields at the end of our present universe it may give rise to a more subtle and appropriate definition of an asymptotic dust equation of state and will certainly give insights into the freedom allowed by the field equations and the matter equations to model possible ends.
|
2307.16331
|
Theoretically Principled Trade-off for Stateful Defenses against
Query-Based Black-Box Attacks
|
Adversarial examples threaten the integrity of machine learning systems with
alarming success rates even under constrained black-box conditions. Stateful
defenses have emerged as an effective countermeasure, detecting potential
attacks by maintaining a buffer of recent queries and detecting new queries
that are too similar. However, these defenses fundamentally pose a trade-off
between attack detection and false positive rates, and this trade-off is
typically optimized by hand-picking feature extractors and similarity
thresholds that empirically work well. There is little current understanding as
to the formal limits of this trade-off and the exact properties of the feature
extractors/underlying problem domain that influence it. This work aims to
address this gap by offering a theoretical characterization of the trade-off
between detection and false positive rates for stateful defenses. We provide
upper bounds for detection rates of a general class of feature extractors and
analyze the impact of this trade-off on the convergence of black-box attacks.
We then support our theoretical findings with empirical evaluations across
multiple datasets and stateful defenses.
|
Ashish Hooda, Neal Mangaokar, Ryan Feng, Kassem Fawaz, Somesh Jha, Atul Prakash
|
2023-07-30T22:31:01Z
|
http://arxiv.org/abs/2307.16331v1
|
# Theoretically Principled Trade-off for Stateful Defenses against Query-Based Black-Box Attacks
###### Abstract
Adversarial examples threaten the integrity of machine learning systems with alarming success rates even under constrained black-box conditions. Stateful defenses have emerged as an effective countermeasure, detecting potential attacks by maintaining a buffer of recent queries and detecting new queries that are too similar. However, these defenses fundamentally pose a trade-off between attack detection and false positive rates, and this trade-off is typically optimized by hand-picking feature extractors and similarity thresholds that empirically work well. There is little current understanding as to the formal limits of this trade-off and the exact properties of the feature extractors/underlying problem domain that influence it. This work aims to address this gap by offering a theoretical characterization of the trade-off between detection and false positive rates for stateful defenses. We provide upper bounds for detection rates of a general class of feature extractors and analyze the impact of this trade-off on the convergence of black-box attacks. We then support our theoretical findings with empirical evaluations across multiple datasets and stateful defenses.
## 2 Background
### Black-box Attacks
Adversarial Examples are perturbed inputs that intentionally mislead or deceive machine learning models. Specifically, given an image \(\mathbf{x}\) with label \(y\) and a classifier \(f\), such attacks aim to construct an adversarial example \(\mathbf{x}_{adv}\) such that:
\[f(\mathbf{x}_{adv})\neq y\;\;\text{and}\;\;||\mathbf{x}_{adv}-\mathbf{x}||_{p}\leq\epsilon \tag{1}\]
where \(\epsilon\) is the perturbation budget per some \(\ell_{p}\) norm. In the black-box setting, these attacks have only query access to the model. One common characteristic of black-box attacks is the use of similar queries to gather information about the model's behavior. Specifically, by making queries with slight perturbations to the input and observing the corresponding model outputs, attackers can gain insights into the model's decision-making process.
Consider the initial stage of many black-box adversarial attacks, which involves estimating the direction to move the input to achieve the desired adversarial effect. For example, the NES (Ilyas et al., 2018), HSJA (Chen et al., 2020), and QEBA (Li et al., 2020) attacks estimate the gradient by sampling nearby points from a Gaussian (or similar) probability distribution and computing finite differences over these points. Other attacks such as SurFree (Maho et al., 2021) and Square (Andriushchenko et al., 2020) also sample nearby points to estimate a "random search" direction (not a gradient) in which to move the input. We will often refer to the interplay between the queries made during the direction-estimation stage and a stateful defense, particularly because the attack's overall convergence properties are often directly influenced by the choice of direction.
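To make the direction-estimation stage concrete, the following minimal sketch estimates a gradient by finite differences over Gaussian-sampled perturbations, in the spirit of NES; `query_loss` is a hypothetical stand-in for the black-box model's loss output, not part of any cited implementation.

```python
import numpy as np

def nes_gradient_estimate(x, query_loss, sigma=0.01, n_samples=50):
    """Estimate a gradient via finite differences over Gaussian perturbations.

    Each antithetic pair (x + sigma*u, x - sigma*u) differs from x only by a
    small perturbation -- exactly the kind of query similarity that stateful
    defenses try to detect.
    """
    grad = np.zeros_like(x, dtype=float)
    for _ in range(n_samples):
        u = np.random.randn(*x.shape)                  # Gaussian search direction
        delta = sigma * u
        grad += (query_loss(x + delta) - query_loss(x - delta)) * u
    return grad / (2.0 * sigma * n_samples)

# Toy usage with a quadratic "loss" standing in for the black-box model:
if __name__ == "__main__":
    f = lambda z: float(np.sum(z ** 2))
    x0 = np.ones(10)
    print(nes_gradient_estimate(x0, f))                # roughly 2 * x0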
### Stateful Defenses
The overall intuition behind stateful defenses is that black-box attackers often submit highly similar queries as part of the optimization procedure for their chosen adversarial task. These highly similar queries can then be detected. Defenses such as Blacklight (Li et al., 2022) have reduced attack success rate (ASR) of state-of-the-art black-box attacks to as low as 0%.
A stateful defense typically comprises a classifier \(f\), feature extractor \(H\) (with some associated distance metric), query store \(q\), and threshold \(\tau\). The defense then compares an incoming query against all queries stored in \(q\). If similarity with any example in \(q\) exceeds \(\tau\), the defense deploys preventive measures such as query rejection or account banning.
Different stateful defenses primarily vary in their choices of \(H\). Specifically, some defenses such as Blacklight and PIHA (Li et al., 2022; Choi et al., 2023) leverage discrete-valued metrics such as hamming distance over hashes, e.g., SHA-256 hashes of quantized pixels. Others, such as Stateful Defense (SD) (Chen et al., 2020), employ real-valued metrics, e.g., \(\ell_{2}\) distance between embeddings from neural similarity encoders. In this work, we evaluate Blacklight and PIHA since they are available for both the CIFAR-10 and ImageNet datasets.
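A minimal sketch of this pipeline, assuming an \(\ell_2\) comparison in feature space; the class and method names are illustrative rather than taken from any specific defense.

```python
import numpy as np

class StatefulDefense:
    """Minimal stateful wrapper around a classifier f: reject any query whose
    features fall within tau of a previously seen query's features."""

    def __init__(self, classifier, feature_extractor, tau):
        self.f = classifier          # underlying model f
        self.H = feature_extractor   # perceptual feature extractor H
        self.tau = tau               # detection threshold
        self.store = []              # query store q (features of past queries)

    def query(self, x):
        feat = self.H(x)
        for past in self.store:
            if np.linalg.norm(feat - past) <= self.tau:
                return None          # preventive measure, e.g. query rejection
        self.store.append(feat)
        return self.f(x)
```

Hash-based defenses replace the \(\ell_2\) comparison with a hash-set distance, but the overall control flow is the same.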
**Model Stealing** Recent work has also proposed stateful defenses against model-stealing attacks. Such attacks aim to steal a local "clone" model \(f^{c}\) such that the behavior of \(f^{c}\) is similar to that of \(f\). Defenses such as SEAT (Zhang et al., 2021) have also been successful here and can force the attacker to create as many as 65 accounts to steal a single model. This success can be similarly explained by the submission of highly similar queries. For example, at iteration \(t\) of a Jacobian-based Augmentation (JBA) attack (Papernot et al., 2017), the adversary constructs a "useful" but highly similar query \(\mathbf{x}_{t+1}\) by perturbing previous query \(\mathbf{x}_{t}\) so that it maximizes the loss \(\mathcal{L}\) of \(f^{c}_{t}\):
\[\mathbf{x}_{t+1}=\mathbf{x}_{t}+\eta*sgn(\nabla_{\mathbf{x}_{t}}\mathcal{L}( \mathbf{x}_{t},f(\mathbf{x}_{t}))) \tag{2}\]
where \(\eta\) is some step size.
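A minimal sketch of this augmentation step, assuming the attacker can compute the gradient of the clone's loss; `clone_loss_grad` is a hypothetical helper, not part of any cited implementation.

```python
import numpy as np

def jba_next_query(x_t, victim_label, clone_loss_grad, eta=0.1):
    """One Jacobian-based augmentation step (Eq. 2): perturb the previous
    query along the sign of the clone's loss gradient, yielding a new query
    that is highly similar to x_t."""
    g = clone_loss_grad(x_t, victim_label)   # gradient of L(x_t, f(x_t)) w.r.t. x_t
    return x_t + eta * np.sign(g)
```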
## 3 Trade-offs between Detection and False Positives
In this section, we demonstrate that there exists an implicit trade-off between detecting attack queries and avoiding false positives in the context of stateful defenses. We begin with a constructive model through which we provide explicit characterizations of the feature extractor and data distributions. We use this toy model to highlight the trade-off, and then relax the assumptions to provide a more general bound that highlights the direct influence of the feature extractor and the problem domain.
### Toy Model
**Feature extractor.** We begin by considering an explicit class of feature extractors based on simple quantization. The feature extractor is given by \(H:\mathbb{R}^{d}\rightarrow\mathbb{Z}^{d}\) with a discrete output space. Specifically,
\[H(\mathbf{x})=\lfloor\mathbf{x}+\mathbf{0.5}\rfloor \tag{3}\]
where the \(\lfloor.\rfloor\) operation is element-wise. Many defenses employ quantization to provide perceptual similarity (Li et al., 2022; Choi et al., 2023). In this model, we consider a query to be an attack query if and only if it produces the exact same features as that of a prior query. Later, in Section 3.2 we expand beyond the toy model to consider the case where \(H\) is a generic feature extractor, and queries are considered attack queries when their features are within some distance \(\tau\) of a prior query.
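A direct sketch of this toy feature extractor and the exact-collision detection rule it induces.

```python
import numpy as np

def H(x):
    """Toy feature extractor of Eq. 3: element-wise rounding to the nearest integer."""
    return np.floor(np.asarray(x) + 0.5).astype(int)

def is_attack_query(x, seen_features):
    """In the toy model, a query is flagged iff its features exactly match
    those of some prior query."""
    return any(np.array_equal(H(x), f) for f in seen_features)

# A small Gaussian perturbation usually stays in the same quantization bin,
# so the perturbed query collides with the original and is flagged:
x = np.random.randn(5)
print(is_attack_query(x + 0.01 * np.random.randn(5), [H(x)]))
```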
**Natural Query Distribution.** Stateful defenses assume that natural images are sufficiently "spread out", or dissimilar enough such that they can be distinguished. Therefore, for our model we assume that natural images originate from one of several Gaussian distributions, which are uniformly dispersed across input space \(\mathbb{R}^{d}\)1. Each natural image is obtained from a distinct Gaussian distribution. This may be viewed as a "best case" situation for the defense, where natural images are sufficiently spread out across the input space to avoid false positives. For simplicity, we assume isotropic Gaussian distributions: \(\mathcal{N}(\mathbf{p},\mathbf{I_{d}}\sigma^{2})\) where \(\mathbf{p}\in\mathbb{Z}^{d}\). Intuitively, when applying \(H\) to a natural image \(\mathbf{x}\sim\mathcal{N}(\mathbf{p},\mathbf{I_{d}}\sigma^{2})\), it should output the discrete feature vector \(\mathbf{p}\) with high probability.
Footnote 1: For the case where the input space is constrained, for instance to [0,255], the natural images can instead be sampled from truncated Gaussian distributions.
**Attack Query Distribution.** To estimate the gradient at input \(\mathbf{x}\), a Monte Carlo simulation approach would require sampling a total of \(q\) perturbations \(\{\mathbf{x},\mathbf{x}+\boldsymbol{\delta_{1}},...,\mathbf{x}+\boldsymbol{ \delta_{q}}\}\). For our model, we consider the distribution of perturbations for \(\mathbf{x}\) to be \(\mathcal{N}(0,\mathbf{I_{d}}\beta^{2})\), i.e., the adversary is estimating a gradient using finite differences on a Gaussian basis (Ilyas et al., 2018).
Given the setting described above (also illustrated in Figure 1), we now present the following result, which bounds the detection rate with the false positive rate:
**Theorem 3.1**.: _Let the adversary sample a natural image \(\mathbf{x}\) from one of the above distributions \(\mathcal{N}(\mathbf{p},\mathbf{I_{d}}\sigma^{2})\), and perturb it with \(\boldsymbol{\delta}\sim\mathcal{N}(0,\mathbf{I_{d}}\beta^{2})\) to estimate a gradient. Given that the stateful defense incurs a false positive rate \(\alpha^{fp}\), the detection rate \(\alpha^{det}\) for the perturbed query \(\mathbf{x}+\boldsymbol{\delta}\) is then bounded as follows:_
\[\alpha^{det}\leq 1-\left(2-2\Phi\left(0.5\beta^{-1}\right)\right)^{d}(1- \alpha^{fp}) \tag{4}\]
Proof.: \(H\) fails to detect the attack query \(\mathbf{x}+\boldsymbol{\delta}\) if and only if \(H(\mathbf{x}+\boldsymbol{\delta})\neq H(\mathbf{x})\). Therefore,
\[\alpha^{det} =1-\mathbb{P}[H(\mathbf{x})\neq H(\mathbf{x}+\boldsymbol{\delta})] \tag{5}\] \[\leq 1-\mathbb{P}[H(\mathbf{x}+\boldsymbol{\delta})\neq\mathbf{p}, H(\mathbf{x})=\mathbf{p}]\] (6) \[=1-\mathbb{P}[H(\mathbf{x}+\boldsymbol{\delta})\neq\mathbf{p}\mid H (\mathbf{x})=\mathbf{p}]\mathbb{P}[H(\mathbf{x})=\mathbf{p}]\] (7) \[\leq 1-\mathbb{P}[H\left(\mathbf{p}+\boldsymbol{\delta}\right) \neq\mathbf{p}]\mathbb{P}[H(\mathbf{x})=\mathbf{p}]\] (8) \[=1-\mathbb{P}[||\boldsymbol{\delta}||_{\infty}>0.5]\mathbb{P}[H( \mathbf{x})=\mathbf{p}]\] (9) \[=1-\left(2-2\Phi\left(0.5\beta^{-1}\right)\right)^{d}(1-\alpha^{fp}) \tag{10}\]
where \(\Phi\) is the cumulative distribution function of \(\mathcal{N}(0,1)\). Note that to go from (7) to the inequality in (8), we assign a specific value \(\mathbf{x}=\mathbf{p}\), i.e., placing \(\mathbf{x}\) at the center of the quantization bin for \(H\) (see Equation 3). By placing it at the center, the probability of evasion when adding \(\boldsymbol{\delta}\) is minimized, and the resulting event is also independent of event \(H(\mathbf{x})=\mathbf{p}\). Finally, going from (9) to (10) uses standard results for the CDF of a multivariate Gaussian.
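The bound in Equation 4 is easy to evaluate numerically; the sketch below does so for illustrative values of \(d\), \(\beta\), and \(\alpha^{fp}\) (not the paper's experimental settings).

```python
from scipy.stats import norm

def det_rate_upper_bound(beta, d, alpha_fp):
    """Right-hand side of Eq. 4: 1 - (2 - 2*Phi(0.5/beta))**d * (1 - alpha_fp)."""
    term = 2.0 - 2.0 * norm.cdf(0.5 / beta)
    return 1.0 - term ** d * (1.0 - alpha_fp)

# Illustrative toy dimension d = 3: the upper bound on the detection rate
# shrinks as the perturbation scale beta grows.
for beta in (0.1, 0.5, 1.0, 2.0):
    print(beta, det_rate_upper_bound(beta, d=3, alpha_fp=0.01))
```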
**Takeaway.** There exists a trade-off between the detection rate \(\alpha^{det}\) and the false positive rate \(\alpha^{fp}\), i.e., decreasing \(\alpha^{fp}\) also decreases the upper bound for \(\alpha^{det}\). Furthermore, this trade-off also depends on the standard deviation \(\beta\) of the perturbation distribution, i.e., high values of \(\beta\) lead to a lower detection rate.
### General Analysis
Recall that our toy model assumed a quantization-based feature extractor and a uniform natural image distribution. We now extend our results to a more generic perceptual feature extractor and image distribution. Specifically, consider \(H:\mathbb{R}^{d}\rightarrow\mathbb{R}^{y}\) where \(y\) is the dimensionality of the output feature space. We assume \(H\) to be Lipschitz continuous with constants \(K_{L}\) and \(K_{U}\) :
\[K_{L}||\mathbf{x_{1}}-\mathbf{x_{2}}||\leq||H(\mathbf{x_{1}})-H(\mathbf{x_{2} })||\leq K_{U}||\mathbf{x_{1}}-\mathbf{x_{2}}||, \tag{11}\]
\(\forall\)\((\mathbf{x_{1}},\mathbf{x_{2}})\in\mathbb{R}^{d}\). Note that we no longer assume the implementation of \(H\) as in the toy model; the continuity assumption here is only needed to ensure that \(H\) captures perceptual similarity, i.e., similar images should indeed have similar features. Furthermore, since \(H\) is now continuous, we extend to a threshold based detection setting i.e. a query \(\mathbf{x}\) is considered an attack query if and only if
Figure 1: Illustration of the Toy Model in 1-D. We assume that any natural query \(\mathbf{x}\) is sampled from a distinct Gaussian distribution (green). Two such distributions are shown, centered at \(K-1\) and \(K\). For a given natural query \(\mathbf{x}\), the attack queries \(\mathbf{x}+\boldsymbol{\delta}\) are sampled from another Gaussian distribution (orange) centered around a natural query. The feature extractor is designed to map each natural query to a unique output. Therefore, \(H\) maps all values within each quantization bin to the same output. This means that the green shaded area represents \(\alpha^{fp}\), and the orange shaded area represents \(\alpha^{det}\) for the attack queries.
\(||H(\mathbf{x})-H(\mathbf{x_{h}})||\leq\tau\) where \(\mathbf{x_{h}}\) is any historical query. Given these changes, we can now re-analyze the detection \(\alpha^{det}\) for a perturbed query \(\mathbf{x}+\mathbf{\delta}\):
**Theorem 3.2**.: _Let the adversary sample natural image \(\mathbf{x}\), and perturb it with \(\mathbf{\delta}\sim\mathcal{N}(0,\mathbf{I_{d}}\beta^{2})\) to estimate a gradient. For a false positive rate \(\alpha^{fp}\), the detection rate \(\alpha^{det}\) for perturbed query \(\mathbf{x}+\mathbf{\delta}\) is then bounded as follows:_
\[\alpha^{det}\leq\frac{1}{\Gamma(\frac{d}{2})}\gamma\left(\frac{d}{2},\frac{1} {2}\left(\frac{K_{U}}{K_{L}}\frac{M_{\mathcal{D}}}{\beta}\frac{1}{1-\alpha^{ fp}}\right)^{2}\right) \tag{12}\]
_where \(M_{\mathcal{D}}=\mathbb{E}[||\mathbf{x_{1}}-\mathbf{x_{2}}||]\), i.e., the expected spread of natural queries, and \(\gamma\) and \(\Gamma\) are the monotonic lower incomplete and complete Gamma functions respectively._
Proof.: \(H\) fails to detect the attack query \(\mathbf{x}+\mathbf{\delta}\) if and only if \(||H(\mathbf{x})-H(\mathbf{x}+\mathbf{\delta})||>\tau\). Therefore,
\[\alpha^{det}=\mathbb{P}\left[||H(\mathbf{x})-H(\mathbf{x}+\mathbf{\delta})||\leq\tau\right] \tag{13}\]
Similarly, \(H\) produces a false positive for two natural images \(\mathbf{x_{1}}\) and \(\mathbf{x_{2}}\) if and only if \(||H(\mathbf{x_{1}})-H(\mathbf{x_{2}})||\leq\tau\). Therefore,
\[\alpha^{fp}=\mathbb{P}\left[||H(\mathbf{x_{1}})-H(\mathbf{x_{2}})||\leq\tau\right] \tag{14}\]
Using Equation 11 with 13 and 14:
\[\alpha^{det}\leq\mathbb{P}\left[||\mathbf{\delta}||\leq\frac{\tau}{K_{L}}\right] \tag{15}\]
\[\alpha^{fp}\geq\mathbb{P}\left[||\mathbf{x_{1}}-\mathbf{x_{2}}||\leq\frac{ \tau}{K_{U}}\right] \tag{16}\]
Finally, using a CDF for the norm of a Gaussian, i.e., a chi-distribution in Equation 15 and Markov's inequality in Equation 16, we get:
\[\alpha^{det}\leq\frac{1}{\Gamma(\frac{d}{2})}\gamma\left(\frac{d}{2},\frac{1} {2}\left(\frac{K_{U}}{K_{L}}\frac{M_{\mathcal{D}}}{\beta}\frac{1}{1-\alpha^{ fp}}\right)^{2}\right) \tag{17}\]
where:
\[\gamma(s,x)=\int_{0}^{x}t^{s-1}e^{-t}dt \tag{18}\]
\[\Gamma(s)=\int_{0}^{\infty}t^{s-1}e^{-t}dt \tag{19}\]
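The right-hand side of Equation 12 can likewise be evaluated with the regularized incomplete gamma function; the values of \(K_U/K_L\) and \(M_{\mathcal{D}}\) below are placeholders, not measured quantities.

```python
from scipy.special import gammainc   # regularized lower incomplete gamma

def det_rate_upper_bound_general(beta, d, alpha_fp, K_ratio, M_D):
    """Right-hand side of Eq. 12, with K_ratio = K_U / K_L and
    M_D = E[||x1 - x2||], the expected spread of natural queries."""
    arg = 0.5 * (K_ratio * M_D / (beta * (1.0 - alpha_fp))) ** 2
    return gammainc(d / 2.0, arg)     # gamma(d/2, arg) / Gamma(d/2)

# Placeholder values: larger beta lowers the bound, larger alpha_fp raises it.
for beta in (0.01, 0.1, 1.0):
    print(beta, det_rate_upper_bound_general(beta, d=3072, alpha_fp=0.01,
                                             K_ratio=2.0, M_D=10.0))
```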
## 4 Experiments
Motivated by our analysis in Section 3, we conduct experiments to validate our findings empirically, and thus answer the following questions:
**Q1. How does the trade-off empirically depend upon the spread, i.e., variance \(\beta\) of the attack queries?**
**Q2. How does the trade-off empirically depend upon the Lipschitz constant ratio \(K_{U}/K_{L}\) of the feature extractor?**
**Q3. What are the implications of the trade-off for the convergence of black-box attacks?**
### Experimental Setup
**Feature extractors.** We focus our evaluation on feature extractors from two state-of-the-art stateful defenses: Blacklight (Li et al., 2022) and PIHA (Choi et al., 2023). Below we provide detailed descriptions and hyper-parameters for both.
Blacklight operates on an input image with pixel values in the range of [0, 255]. First, it discretizes the pixels into bins of size 50. Second, a sliding window technique is applied to the discretized image, utilizing a window size of 20 for TinyImages (Torralba et al., 2008) and 50 for ImageNet (Russakovsky et al., 2015). During this process, each window is hashed using the SHA-256 algorithm. Finally, the resulting set of hashes obtained from all the windows is considered as the "feature" for the image. For efficiency purposes, Blacklight utilizes only the top 50 hashes. To quantify the distance between two hash sets, Blacklight computes the number of non-common hashes, which can be interpreted as an \(\ell_{1}\) distance.
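A simplified sketch of a Blacklight-style fingerprint, assuming a stride-1 sliding window over the flattened quantized image and a lexicographic top-\(k\) hash selection; the reference implementation may differ in these details.

```python
import hashlib
import numpy as np

def blacklight_like_fingerprint(img, bin_size=50, window=20, top_k=50):
    """Simplified Blacklight-style fingerprint: quantize pixels into bins,
    slide a window over the flattened image, hash each window with SHA-256,
    and keep a fixed subset (here, the lexicographically smallest top_k)."""
    q = (np.asarray(img, dtype=np.int64) // bin_size).flatten()
    hashes = [hashlib.sha256(q[i:i + window].tobytes()).hexdigest()
              for i in range(len(q) - window + 1)]
    return set(sorted(hashes)[:top_k])

def fingerprint_distance(f1, f2):
    """Number of non-common hashes between two fingerprints (an l1-like distance)."""
    return len(f1.symmetric_difference(f2))
```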
PIHA also operates on input images with the same pixel range. First, it runs a 3x3 low-pass Gaussian filter with standard deviation 1 over the image. Second, the image is converted to the HSV color space with the S and V components discarded. Finally, PIHA runs a sum-pooling operation over 7x7 image blocks, and the "feature" is computed as the output of the local binary pattern algorithm (Ojala et al., 1994) on the sum-pooled image.
**Datasets**. We evaluate Blacklight and PIHA using two datasets, TinyImages and ImageNet. The TinyImages dataset is a collection of 32x32 images and is the super-set collection from which the popular CIFAR-10 dataset is sampled (providing nearly 80 million images as opposed to only 60,000). The ImageNet dataset comprises over 1 million 256x256 images. We sample a random subset of 1 million images from both datasets for our experiments.
### Q1. Variance of Attack Queries
Theorem 3.2 suggests a clear inverse relationship between the (\(\alpha^{det}\), \(\alpha^{fp}\)) trade-off and \(\beta\). We now empirically validate this relationship, i.e., for any given feature extractor and dataset, we plot \(\alpha^{fp}\) against \(\alpha^{det}\) for a variety of thresholds \(\tau\). We compute \(\alpha^{fp}\) over 1 million images for all settings except PIHA on ImageNet, for which we compute over \(100\)k images and extrapolate due to computational complexity. We compute \(\alpha^{det}\) over 100 images by sampling perturbations from Gaussians with different standard deviations \(\beta\).
Results are presented in Figure 2. Notably, we first observe that for any \(\beta\), the trade-off between \(\alpha^{det}\) and \(\alpha^{fp}\) indeed exists across all thresholds. More specifically, to obtain a larger \(\alpha^{det}\) always requires an increase in \(\alpha^{fp}\) as well. This validates the takeaways from Theorems 3.1 and 3.2. Furthermore, the inverse relationship with \(\beta\) also exists, i.e., achieving the same \(\alpha^{det}\) requires a larger \(\alpha^{fp}\) when \(\beta\) is increased. Interestingly, PIHA can achieve higher \(\alpha^{det}\) on the low-dimensional TinyImages compared to Blacklight, but both suffer on ImageNet when \(\beta\) increases beyond \(\beta=0.01\).
### Q2. Lipschitz Constants of the Feature Extractor
Theorem 3.2 also suggests that the (\(\alpha^{det}\), \(\alpha^{fp}\)) trade-off is influenced by Lipschitz constants \(K_{U}\) and \(K_{L}\) of the feature extractor. However, this assumes a continuous feature extractor -- although the feature extractors from Blacklight and PIHA are not continuous, they are still designed to approximate the perceptual likeness of images (yielding closer features for similar queries and further features for dissimilar ones). Given the lack of closed-form expressions, we resort to an empirical estimation of \(K_{U}\) and \(K_{L}\).
We create image pairs \(\mathbf{x}\) and \(\mathbf{x}+\mathbf{\delta}\) where \(\mathbf{x}\) is sampled from the dataset (TinyImages/ImageNet), and \(\mathbf{\delta}\sim\mathcal{N}(0,\mathbf{I_{d}}\beta^{2});\beta=0.01\). For each pair, we then calculate the ratio between the \(\ell_{2}\) distance in the feature space and the input space, i.e., \(\frac{||H(\mathbf{x})-H(\mathbf{x}+\mathbf{\delta})||}{||\mathbf{\delta}||}\). We construct \(10000\) such pairs and plot the distribution of these distance ratios.
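A sketch of this estimation procedure, assuming the feature extractor returns a real-valued vector so that \(\ell_2\) distances are defined; for hash-based extractors the feature distance would instead be the non-common hash count.

```python
import numpy as np

def lipschitz_ratio_samples(images, H, beta=0.01, n_pairs=10000, seed=0):
    """Sample ||H(x) - H(x + delta)|| / ||delta|| over random image/perturbation
    pairs; the spread of the resulting distribution hints at the ratio K_U / K_L."""
    rng = np.random.default_rng(seed)
    ratios = []
    for _ in range(n_pairs):
        x = np.asarray(images[rng.integers(len(images))], dtype=float)
        delta = rng.normal(0.0, beta, size=x.shape)
        num = np.linalg.norm(np.asarray(H(x + delta), dtype=float)
                             - np.asarray(H(x), dtype=float))
        ratios.append(num / np.linalg.norm(delta))
    return np.array(ratios)
```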
Figure 3(a) plots these distributions for ImageNet images processed by both Blacklight and PIHA feature extractors. We note a larger distribution spread in the histogram for PIHA compared to Blacklight, hinting at a greater value for \(\frac{K_{U}}{K_{L}}\) for PIHA. As per Theorem 3.2, this suggests that PIHA possesses the potential for superior detection rates compared to Blacklight. We corroborate this empirically by plotting \(\alpha^{fp}\) against \(\alpha^{det}\) in a manner akin to that in Section 4.2. As presented in Figure 3(b), PIHA indeed manifests higher detection rates when compared with Blacklight.
### Q3. The Trade-off and Attack Convergence
Given that increasing \(\beta\) worsened the trade-off of the defense (Q1 in Section 4.2), we now question the impact of increasing \(\beta\) on the attack convergence itself. We specifically consider the adversary goal of gradient estimation via finite differences. Formally, it can be shown through the following result that increasing \(\beta\) should worsen the quality of the estimated gradient:
**Theorem 4.1**.: _Let \(\nabla_{x}\) be the true gradient of \(\mathbf{x}\) for the
Figure 3: Lipschitz constant ratio of the feature extractors is directly proportional to the quality of the trade-off. On the left, we present the distribution of ratios between pairwise distance in the feature space and pairwise distance in the input space — a larger distribution spread implies a larger Lipschitz ratio for that feature extractor. On the right, we present the corresponding (\(\alpha^{det}\), \(\alpha^{fp}\)) trade-off.
Figure 2: There exists a trade-off between detection rate \(\alpha^{det}\) and false positive rate \(\alpha^{fp}\) for stateful defenses. This trade-off is worsened for larger \(\beta\) values. Each curve is computed by varying threshold \(\tau\) for the chosen feature extractor, and each setting presents four curves corresponding to different \(\beta\) values.
classifier's loss, and \(G\) be a matrix of rows \(g_{1},\cdots,g_{k}\sim\mathcal{N}(0,\mathbf{I_{d}}\beta^{2})\). Then, the norm of estimated gradient \(G\cdot\nabla_{\mathbf{x}}\) is bounded in probability by:_
\[\mathbb{P}[(1-\epsilon)\|\nabla_{\mathbf{x}}\|\leq\|G\cdot\nabla_{ \mathbf{x}}\|\leq(1+\epsilon)\|\nabla_{\mathbf{x}}\|]\geq\\ 1-2\cdot exp\bigg{(}-k-\frac{1+\epsilon}{2\beta^{2}}\bigg{)}\]
_where \(0\leq\epsilon\leq 1\) is the estimation error._
A detailed proof of this result can be found in Appendix A.0.1. The left-hand side represents the probability that our estimated gradient is "good", i.e., produces the same increase-in-loss as the true gradient. As \(\beta\) increases, the lower bound on this probability decreases (right-hand side), suggesting that the estimate is less likely to produce the same increase-in-loss.
We empirically validate this impact of increasing \(\beta\) in Figure 4, which plots the increase in loss when following gradients estimated with different \(\beta\). These figures present a clearer overall picture -- for any given \(\alpha^{fp}\), even though larger \(\beta\) decreases the detection rate, a gradient estimated with larger \(\beta\) is also strictly worse for the adversary, i.e., does not increase the loss as much (see the gradation from red to blue). In other words, these findings suggest that the worsening of the (\(\alpha^{det}\), \(\alpha^{fp}\)) trade-off at larger \(\beta\) is not without a negative impact on the adversary.
## 5 Conclusion
In conclusion, our work offers a more formal understanding of how stateful defenses prevent black-box adversarial attacks. We outlined a crucial trade-off between attack detection and false positives, and highlighted its dependence upon the distribution of attack and natural queries, and the properties of the defense's feature extractor. Our analysis can help illuminate why certain defenses perform better against black-box attacks, which can help to refine current strategies and potentially guide the design of future defenses. As the landscape of adversarial attacks and defenses evolves, our findings contribute to the development of more robust and resilient machine learning models under the realistic black-box threat model.
## 6 Acknowledgements
This material is based upon work supported by DARPA under agreement number 885000, National Science Foundation Grant No. 2039445, and National Science Foundation Graduate Research Fellowship Grant No. DGE 1841052. Any opinion, findings, and conclusions or recommendations expressed in this material are those of the authors(s) and do not necessarily reflect the views of our research sponsors.
|
2305.07445
|
QVoice: Arabic Speech Pronunciation Learning Application
|
This paper introduces a novel Arabic pronunciation learning application
QVoice, powered with end-to-end mispronunciation detection and feedback
generator module. The application is designed to support non-native Arabic
speakers in enhancing their pronunciation skills, while also helping native
speakers mitigate any potential influence from regional dialects on their
Modern Standard Arabic (MSA) pronunciation. QVoice employs various learning
cues to aid learners in comprehending meaning, drawing connections with their
existing knowledge of English language, and offers detailed feedback for
pronunciation correction, along with contextual examples showcasing word usage.
The learning cues featured in QVoice encompass a wide range of meaningful
information, such as visualizations of phrases/words and their translations, as
well as phonetic transcriptions and transliterations. QVoice provides
pronunciation feedback at the character level and assesses performance at the
word level.
|
Yassine El Kheir, Fouad Khnaisser, Shammur Absar Chowdhury, Hamdy Mubarak, Shazia Afzal, Ahmed Ali
|
2023-05-09T07:21:46Z
|
http://arxiv.org/abs/2305.07445v1
|
# QVoice: Arabic Speech Pronunciation Learning Application
###### Abstract
This paper introduces a novel Arabic pronunciation learning application **QVoice**, powered with end-to-end mispronunciation detection and feedback generator module. The application is designed to support non-native Arabic speakers in enhancing their pronunciation skills, while also helping native speakers mitigate any potential influence from regional dialects on their Modern Standard Arabic (MSA) pronunciation. QVoice employs various learning cues to aid learners in comprehending meaning, drawing connections with their existing knowledge of English language, and offers detailed feedback for pronunciation correction, along with contextual examples showcasing word usage. The learning cues featured in QVoice encompass a wide range of meaningful information, such as visualizations of phrases/words and their translations, as well as phonetic transcriptions and transliterations. QVoice provides pronunciation feedback at the character level and assesses performance at the word level.
Yassine El Kheir, Fouad Khnaisser, Shammur Absar Chowdhury, Hamdy Mubarak, Shazia Afzal, Ahmed Ali
Qatar Computing Research Institute, HBKU, Doha, Qatar
[email protected]
**Index Terms**: Arabic pronunciation learning, Mispronunciation detection model, automatic scoring.
## 1 Introduction
Historically, acquiring a new language or improving pronunciation skills often demanded substantial effort and resources, typically in a classroom setting. Nevertheless, advancements in technology have paved the way for more personalized and self-paced learning experiences for language learners. In recent years, the use of self-assessment tools has gained popularity as they provide learners with a means to track their progress and identify areas for improvement. Such tools are particularly useful in language learning: children require extensive training to develop reading and pronunciation skills, non-native speakers often struggle with differences from their native language, and even native speakers may face challenges with dialectal variations.
This paper presents the **QVoice**1 application, which aims to provide learners with a comprehensive tool for training themselves to pronounce Arabic accurately and effectively, and demonstrates the potential of self-assessment tools in language learning.
Footnote 1: [https://apps.apple.com/gb/app/qvoice-apl/id6444646358](https://apps.apple.com/gb/app/qvoice-apl/id6444646358)
## 2 QVoice Application
The **QVoice** mobile application architecture comprises of a front-end and a back-end modules, as depicted in Figure 1. The front-end serves as the primary interface for learners, allowing to initialize their microphone, record their audio input and get feedback from the system. We initialize the front-end by fetching the right word along with the additional learning cues: text, audio, and images, from the database. These words are then displayed to the user for practice. Once the user records their audio input, the data is sent to the back-end built using **Flask API**2 for processing and generating feedback. The generated feedback and supportive learning materials are then displayed to the learner in the front-end.
Footnote 2: [https://flask-api.github.io/flask-api/](https://flask-api.github.io/flask-api/)
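A minimal sketch of what such a Flask back-end endpoint could look like; the route, payload fields, and `score_pronunciation` helper are hypothetical and not taken from the QVoice implementation.

```python
# Hypothetical back-end sketch; not the actual QVoice API.
from flask import Flask, request, jsonify

app = Flask(__name__)

def score_pronunciation(audio_bytes, target_word):
    """Placeholder for the end-to-end mispronunciation detection model:
    returns per-character scores and an overall utterance score."""
    char_scores = [1.0 for _ in target_word]          # dummy scores
    return {"char_scores": char_scores, "utterance_score": 1.0}

@app.route("/score", methods=["POST"])
def score():
    audio = request.files["audio"].read()             # recorded user audio
    word = request.form["word"]                       # practiced word/phrase
    return jsonify(score_pronunciation(audio, word))

if __name__ == "__main__":
    app.run()
```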
### End-to-End Mispronunciation Detection Model
The QVoice application uses an end-to-end mispronunciation detection model to predict pronunciation correctness. The system takes the input audio and estimates the score at the character level, alongside an overall utterance score to quantify the degree of mispronunciation. The model's scores enable the identification of insertions, deletions, substitutions, and mispronounced characters. Moreover, the end-to-end model incorporates an additional attention mechanism to detect mispronunciations due to dialectal influence. The proposed model has shown promising results in identifying various mispronunciation mistakes and can be useful for language learners and instructors.
### Database
The current database contains more than \(400\) words/phrases selected from multiple educational sources such as Aljazeera ([https://learning.aljazeera.net](https://learning.aljazeera.net)), BTEC (Basic Travel Expression Corpus), etc. Each of these words/phrases is associated with a unique identifier, along with its corresponding transliteration, translation, reference audio exhibiting good pronunciation, an Arabic sentence example, the corresponding sentence translation, and additional information generated using an internal generative model that has been verified for each utterance. Additionally, each utterance was vowelized using **Farasa**[1], and has an audio clip associated with it that has been generated using a text-to-speech (**TTS**)3 engine [2]. When a fetch request is received, a random word/phrase is selected and transmitted within less than one second.
Footnote 3: [https://tts.qcri.org/reference/](https://tts.qcri.org/reference/)
### User Interface
The user interface of QVoice includes two primary pages, designed using **Flutter**4. The first page (Practice View) displays the targeted word and additional cues to the learners and allows them to record the pronunciation of the given word/phrase. Once recorded, the application transitions to the second page (Feedback View) which provides generated feedback to the
learner. The details are given below:
Practice View: In this task view, the selected Arabic word is displayed to the learner, along with an image that depicts the meaning of the word, its English transliteration, and its English translation. The recording can be initiated by pressing and holding the record button, which begins pulsating to indicate that the recording has started. Upon completion, the button is released, and the feedback view displays the results fetched from our back-end end-to-end model in less than \(1.5\) seconds.
Feedback View: The feedback page is designed to give fine-grained information about the pronunciation and puts it in context with the L1 and L2 languages. At the top of the page, an overall score at the utterance level is presented using a star-based rating system. Following this, the practiced example is shown, color-coded to signal the pronunciation accuracy at the character level. The colored bar portrays the significance of each color, with red meaning very poor and green representing excellent pronunciation. The feedback view also presents the English transliteration along with a predicted sequence from the model analysing what was said by the user, highlighting any characters or sounds that were omitted, added, or mispronounced. To aid the learner with audio feedback, the app provides a reference pronunciation at different speeds (normal and slowed). The app also supports an assistant view putting the practiced word in context with further information.
Assistant View: In the assistant view, the learner has access to a grammatically correct example containing the practiced word highlighted in blue. Along with the Arabic example, the assistant also provides its English translation and an audio rendering of the Arabic sentence generated with the TTS engine. This audio feedback showcases how the practiced word fits in a larger sentence/context. The learner can play, pause, and replay the audio as much as needed. Finally, a detailed graphophonic explanation is provided along with reference pronunciations and English examples to aid in further improving the user's pronunciation skills.
## 3 Conclusion
The proposed application is a fully functional Arabic pronunciation learning framework. The application allows learners to practice Arabic pronunciation at their own time and pace. The fine-grained pronunciation feedback locates the position of the mispronunciation, and a combination of auditory, visual, and textual learning cues enables the learner to practice efficiently while exploiting prior L1 language knowledge. In the future, we plan to extend the demo to incorporate articulatory features among other cues. The practice words are selected from multiple scenarios. The presented system is modular and allows easy adaptation to the desired content. The back-end systems have high accuracy for detecting mispronunciations, and the audio feedback/learning cues are generated using a state-of-the-art TTS, allowing natural responses.
|
2304.10210
|
Bifurcations of mode-locked periodic orbits in three-dimensional maps
|
In this paper, we report the bifurcations of mode-locked periodic orbits
occurring in maps of three or higher dimensions. The `torus' is represented by
a closed loop in discrete time, which contains stable and unstable cycles of
the same periodicity, and the unstable manifolds of the saddle. We investigate
two types of `doubling' of such loops: in (a) two disjoint loops are created
and the iterates toggle between them, and in (b) the length of the closed
invariant curve is doubled. Our work supports the conjecture of Gardini and
Sushko, which says that the type of bifurcation depends on the sign of the
third eigenvalue. We also report the situation arising out of Neimark-Sacker
bifurcation of the stable and saddle cycles, which creates cyclic closed
invariant curves. We show interesting types of saddle-node connection
structures, which emerge for parameter values where the stable fixed point has
bifurcated but the saddle has not, and vice versa.
|
Sishu Shankar Muni, Soumitro Banerjee
|
2023-04-20T10:59:38Z
|
http://arxiv.org/abs/2304.10210v1
|
# Bifurcations of mode-locked periodic orbits in three-dimensional maps
###### Abstract
In this paper, we report the bifurcations of mode-locked periodic orbits occurring in maps of three or higher dimensions. The 'torus' is represented by a closed loop in discrete time, which contains stable and unstable cycles of the same periodicity, and the unstable manifolds of the saddle. We investigate two types of 'doubling' of such loops: in (a) two disjoint loops are created and the iterates toggle between them, and in (b) the length of the closed invariant curve is doubled. Our work supports the conjecture of Gardini and Sushko, which says that the type of bifurcation depends on the sign of the third eigenvalue. We also report the situation arising out of Neimark-Sacker bifurcation of the stable and saddle cycles, which creates cyclic closed invariant curves. We show interesting types of saddle-node connection structures, which emerge for parameter values where the stable fixed point has bifurcated but the saddle has not, and vice versa.
## I Introduction
Mode-locked periodic orbits occur when a dynamical system has two commensurate frequencies that are not harmonically related to each other. Such cycles always occur as stable-unstable pairs, and are located on a closed invariant curve formed by the unstable manifolds of the saddle. Such closed invariant curves occurring in maps represent tori in continuous time. In this paper we investigate bifurcations of such 'resonant tori' in three-dimensional maps.
Much of the earlier work on the bifurcation of tori focused on ergodic tori or quasiperiodic orbits. The doubling of quasiperiodic orbits was first reported by Kaneko [1] and Arneodo et al. [2] in the dynamics of some three- and four-dimensional maps. Since then, such bifurcations have been reported in maps resulting from the discretization of continuous-time systems such as impulsive Goodwin's oscillator [3], 3-dimensional Lotka-Volterra model [4; 5], vibro-impacting systems [6], four coupled oscillators [7], radiophysical systems [8], etc. Sekikawa et al. demonstrated a sequence of length-doublings [9; 10] finally resulting in chaos.
Torus doubling critical point is a special point in the parameter space where the regions of occurrence of torus, doubled torus, and strange non-chaotic attractor meet. Such critical points were found in a nonlinear electronic circuit with quasiperiodic drive [11]. Scaling laws applicable in the vicinity of a critical point were obtained for a forced logistic map [12].
For a better understanding of the bifurcations of invariant tori, iterative schemes for computation and continuation of two-dimensional stable and unstable manifolds of mode-locked orbits were proposed in [13; 14]. In piecewise-smooth maps, the birth of a bilayered torus via border-collision bifurcation was reported in [15] and bifurcation of such a torus via heteroclinic tangencies was discussed. Recently, a review on the dynamical consequences of torus doublings resulting in the formation of Shilnikov attractors were discussed in [16].
After Grebogi, Ott and Yorke [17] showed that three-frequency quasiperiodicity can be stable over a parameter range, its occurrence, observed in the form of a torus in the discrete-time phase space, has been reported in many systems including symmetrical rigid tops [18], vibro-impacting systems [19], four coupled Chua's circuits [20], a simple autonomous system [21], and a ladder of four coupled van der Pol oscillators [22].
The merger and disappearance of a stable and an unstable closed invariant curve was found to be responsible for the creation of strange nonchaotic attractors [23; 24; 25]. Behaviors close to tori-collision terminal points were studied in [26].
These observations have been summarized in [4], by showing that a stable quasiperiodic orbit can bifurcate in four possible ways. These are illustrated in Fig. 1. Fig. 1(a) and (b) show two ways of 'doubling': in (a) two disjoint loops are created and the iterates toggle between them, and in (b) the length of the closed invariant curve is doubled. The bifurcation shown in Fig. 1(c) leads to a torus in discrete time which represents three-frequency quasiperiodicity. Fig. 1(d) shows a situation where a stable and an unstable torus merge and disappear (or are created if the parameter is varied in the opposite direction).
Figure 1: The four types of bifurcations of a quasiperiodic orbit [4]. Unstable objects are shown with red dashed lines and stable objects are marked in blue.
Banerjee et al. explained these bifurcations based on the method of 'second Poincaré section' [4].
The above lines of work concern bifurcations of quasiperiodic orbits. However, mode-locked periodic orbits represent a generic case. Such orbits possess rational rotational numbers and appear as shrimp shaped regions in two-dimensional parameter space [27] that cover a larger range of parameters than the quasiperiodic orbit. Therefore, in this paper we explore the ways in which mode-locked periodic orbits may bifurcate.
Although much research has been done on the doubling of quasiperiodic orbits, much less research attention has been devoted to the doubling of resonant tori or mode-locked periodic orbits. Gardini and Sushko [28] proposed a conjecture regarding the conditions for different types of doubling bifurcations of closed invariant curves related to mode-locked periodic orbits. However, to date these have not been tested using systems that exhibit these bifurcations. In this paper we fill that gap and validate their conjecture.
We also explore the birth of a third frequency in a mode-locked periodic orbit, which results in the occurrence of more than two closed invariant curves in the phase space and the iterates cyclically move among them.
Both types of torus doubling are related to the occurrence of period doubling bifurcation and the birth of multiple loops is related to the occurrence of a Neimark-Sacker bifurcation in a fixed point. Since a saddle cycle as well as a stable cycle occur on the closed invariant curve, an interesting situation emerges: it is possible that one of them undergoes a bifurcation but the other does not. Such situations lead to atypical structures of the closed invariant curve, which are also reported in this paper.
The paper is organized as follows: In §II, we recapitulate the conjecture and discuss the mechanism behind the doubling of mode-locked orbits in three-dimensional maps. In §III, we give an example of the doubling phenomenon of a mode-locked periodic orbit in a three-dimensional smooth map where two disjoint loops are formed. In §IV, we provide an example of the case where the length of the manifold doubles and it lies on a non-orientable Möbius strip. In §VII, the collision of two closed invariant curves formed out of a saddle-node connection and a saddle-saddle connection is discussed. §VIII presents the conclusions and future research questions arising out of this paper.
## II Qualitative theory of doubling of mode-locked orbit
Closed invariant curves are born through Neimark-Sacker bifurcation of a stable fixed point. This can lead to a few scenarios:
1. A supercritical Neimark-Sacker bifurcation leading to the formation of a stable closed invariant curve. 1. There are a dense set of points on the curve, which implies an irrational frequency ratio. This leads to the formation of a quasiperiodic orbit. 2. There are a finite number of stable and saddle points and the invariant closed curve is composed of a union of these points and the unstable manifolds of the saddle points. This leads to the formation of a mode-locked periodic orbit.
2. A subcritical Neimark-Sacker bifurcation leading to an unstable closed invariant curve. 1. The unstable closed invariant curve may be a quasiperiodic orbit; 2. The unstable closed invariant curve may be a mode-locked periodic orbit.
In this paper we consider the bifurcations of a closed invariant curve related to Case 1(b).
Let us consider a three-dimensional map with a stable cycle (say \(C^{N}\)) and saddle cycle (say \(C^{S}\)). The union of these points and the unstable manifolds of the saddle cycles form the closed invariant curve (see Fig. 2). Let us denote the eigenvalues of the stable periodic orbit as \(\lambda_{i}^{N}\), \(i=1,2,3\) and the eigenvalues of the saddle periodic orbit as \(\lambda_{i}^{S}\), \(i=1,2,3\).
In order for the closed invariant curve to exist, one of the three eigenvalues must be positive. Let the positive eigenvalues be denoted by \(\lambda_{1}^{N},\lambda_{1}^{S}\) for the node and saddle periodic points, respectively. The corresponding stable manifolds of the nodes and the unstable manifolds of the saddles form the saddle-node connections on the closed invariant curve.
Following [28], let us first consider the case of doubling of the closed invariant curve. In order for a doubling to occur, there must be a flip bifurcation, i.e., one of the eigenvalues should pass through \(-1\). Let this eigenvalue be denoted as \(\lambda_{2}^{N}\) for the node and as \(\lambda_{2}^{S}\) for the saddle. These eigenvalues, associated with the saddle and the node, need not pass through \(-1\) at the same parameter value. Indeed, the saddle and the node can undergo flip bifurcation at different parameter values. We will see this in two explicit examples considered later in this paper.
Consider a situation where both the saddle \(C^{S}\) and the node \(C^{N}\) have undergone flip bifurcation (see Fig. 3). A doubled stable orbit (say \(C^{2N}\)) has emanated from \(C^{N}\), which has turned into a flip-saddle with one unstable direction. A similar scenario occurs with the saddle periodic point \(C^{S}\), which
Figure 2: Schematic structure of a closed invariant curve for a mode-locked periodic orbit. The stable cycle is marked by blue triangles and the saddles by red rectangles. The directions associated with eigenvalues 1, 2 and 3 of the stable cycle are shown for one of the points.
doubles by making another of its branches unstable (two independent unstable directions). Two saddle points (say \(C^{2S}\)) emanate on both sides of the previous saddle (\(C^{S}\)). Two distinctly different stable behaviors can result from this bifurcation. Gardini and Sushko [28] conjectured that the shape of the resulting manifold is determined by the third eigenvalue \(\lambda_{3}^{N},\lambda_{3}^{S}\). The sign of the third eigenvalue classifies the bifurcation into two types.
### Positive third eigenvalue (\(\lambda_{3}^{N}>0,\lambda_{3}^{S}>0\))
If the associated third eigenvalue is positive, then the trajectories converge on one side of the manifold, giving a geometric structure like Fig. 3(a). This leads to the formation of two disjoint cycles \(\Gamma_{2a},\Gamma_{2b}\). We note that the two disjoint cycles are cyclically invariant, i.e., the trajectory toggles between the upper cycle \(\Gamma_{2a}\) and the lower cycle \(\Gamma_{2b}\). The two disjoint closed loops are invariant in the sense that they act as a single loop for the second iterate of the map.
The two attracting disjoint cycles bound a strip or a manifold whose shape is topologically a cylinder. The manifold (topological cylinder) consists of the unstable closed cycle \(\Gamma\) and the two stable disjoint cycles \(\Gamma_{2a}\), and \(\Gamma_{2b}\). The cylinder manifold is orientable. Each of the node and saddle periodic points \((C^{N},C^{S},C^{2N},C^{2S})\) have three independent eigen-directions, which are represented in Fig. 3 with different colours.
### Negative third eigenvalue (\(\lambda_{3}^{N}<0,\lambda_{3}^{S}<0\))
If the third eigenvalue is negative, the shape of the invariant set is a Möbius strip (a non-orientable manifold). The boundary of the Möbius strip constitutes the saddle-node connection of the doubled saddle and node mode-locked periodic orbits. This is qualitatively shown in Fig. 3(b).
We now illustrate the above bifurcations with a few examples.
## III Example: disjoint two-loop mode-locked orbit
In this section we consider the Mira map studied in [29], which is a three-dimensional map given by
\[\begin{split}& x^{\prime}=y,\\ & y^{\prime}=z,\\ & z^{\prime}=Bx+Cy+Az-y^{2},\end{split} \tag{1}\]
The Jacobian of the map (1) is
\[J=\begin{bmatrix}0&1&0\\ 0&0&1\\ B&C-2y&A\end{bmatrix}, \tag{2}\]
whose determinant is \(\det(J)=B\). For \(B<0\), the map is orientation-reversing everywhere, and for \(B>0\), the map is orientation-preserving everywhere. We have observed a disjoint two-loop mode-locked periodic orbit in the orientation-reversing regime. In [22], the authors mention that such disjoint doublings of quasiperiodic orbits can be found in orientation-preserving systems as well, but in systems of dimension greater than or equal to four.
From the one-parameter bifurcation diagram with the variation of parameter \(B\), see Fig. 4(a), we observe that a fixed point undergoes a Hopf bifurcation to a quasiperiodic orbit and then through a saddle-node bifurcation, a mode-locked period-five orbit is formed. It then doubles to a period-10 mode-locked orbit. With further increase in the parameter \(B\), we observe subsequent doublings of mode-locked orbit, after which it transits to chaos.
The doubling of the mode-locked orbit can be better understood when we continue the saddle and stable mode-locked periodic orbits with respect to the parameter \(B\). We have used the multidimensional Newton-Raphson method to locate the saddle cycle. Fig. 4(b) shows that the saddle doubles first, followed by the doubling of the node. The computation of the eigenvalues over the same parameter range of \(B\) (see Fig. 5) reveals that, as the parameter \(B\) increases, the second eigenvalue of the saddle reaches \(-1\) at \(B=-0.5627\). In Fig. 5(b), we see that the second eigenvalue of the node reaches \(-1\) at \(B=-0.55\) and hence the stable period-five orbit bifurcates to a period-ten orbit.
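A sketch of the multidimensional Newton–Raphson search for period-\(n\) points of map (1), solving \(f^{n}(v)-v=0\) with the Jacobian of \(f^{n}\) accumulated from Eq. (2); the initial guess and call shown are illustrative and may need tuning to converge.

```python
import numpy as np

def mira(v, A, B, C):
    x, y, z = v
    return np.array([y, z, B * x + C * y + A * z - y ** 2])

def mira_jac(v, A, B, C):
    _, y, _ = v
    return np.array([[0.0, 1.0, 0.0],
                     [0.0, 0.0, 1.0],
                     [B,   C - 2.0 * y, A]])

def find_period_n_point(v0, n, A, B, C, tol=1e-12, max_iter=100):
    """Newton-Raphson on F(v) = f^n(v) - v; converges to saddle or stable
    period-n points depending on the initial guess v0."""
    v = np.array(v0, dtype=float)
    for _ in range(max_iter):
        w, J = v.copy(), np.eye(3)
        for _ in range(n):                       # Jacobian of f^n via the chain rule
            J = mira_jac(w, A, B, C) @ J
            w = mira(w, A, B, C)
        F = w - v
        if np.linalg.norm(F) < tol:
            break
        v = v - np.linalg.solve(J - np.eye(3), F)
    return v, np.linalg.eigvals(J)               # periodic point and its multipliers

# Illustrative call near the period-5 window of Fig. 4 (may require a better guess):
# v, eigs = find_period_n_point([0.5, 0.5, 0.5], 5, A=-2.269, B=-0.58, C=-2.1)
```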
At \(B=-0.58\), a stable period-5 mode-locked orbit exists along with a period-5 saddle. We compute the unstable manifolds of the saddle cycle using the method of fundamental domains [30], which form a saddle-node connection. The resulting invariant closed curve is shown in Fig. 6(a).
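The one-dimensional unstable manifolds shown in Fig. 6 can be approximated by seeding a short segment along the unstable eigenvector of a saddle point and iterating it forward, a simplified stand-in for the method of fundamental domains of [30]; `fmap` and `jac` are assumed to be the (possibly iterated) map and its Jacobian, e.g. the fifth iterate of map (1) for the period-5 saddle.

```python
import numpy as np

def unstable_manifold(saddle, fmap, jac, eps=1e-5, n_seed=200, n_iter=8):
    """Approximate the 1D unstable manifold of a saddle point: seed points on
    a small segment along the unstable eigenvector (both branches) and push
    them forward with the map. Assumes the leading eigenvalue is real."""
    vals, vecs = np.linalg.eig(jac(saddle))
    k = int(np.argmax(np.abs(vals)))             # unstable direction, |lambda| > 1
    v = np.real(vecs[:, k])
    points = []
    for s in np.linspace(-1.0, 1.0, n_seed):     # both branches of the manifold
        q = np.asarray(saddle, dtype=float) + s * eps * v
        for _ in range(n_iter):
            q = fmap(q)
            points.append(q.copy())
    return np.array(points)
```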
As we increase the parameter \(B\), a stage is reached where the saddle has doubled but the stable periodic orbit has not. At \(B=-0.555\), the saddle period-10 orbit coexists with a stable period-five orbit. The manifolds of the saddle point are shown in Fig. 6(b). We notice a complex structure in which two loops have formed, but these are joined by branches that connect with the stable period-5 orbit.
Next, we consider a parameter where both the saddle and stable mode-locked periodic orbit have doubled. At \(B=-0.54\), we see a mode locked period-10 orbit and the unstable manifolds of the period-doubled saddle cycle show two disjoint loops (Fig. 6(c)).
Note that this disjoint mode-locked periodic orbit does not imply bistability, rather the periodic points of the disjoint loops are cyclically visited. The second iterate of the map shows a single loop.
The eigenvalues of the saddle and stable periodic points before the bifurcation are shown in Table 1. We note that before the torus doubling bifurcation, \(\lambda_{1}\) is positive, \(\lambda_{2}\) is negative (which subsequently crosses \(-1\)), and \(\lambda_{3}\) is positive. Thus, our observation supports the conjecture by Gardini and Sushko.
## IV Example: length doubled mode-locked orbit
We consider the three-dimensional generalised Henon map [31] given by
\[\begin{split}& x^{\prime}=a-y^{2}-bz,\\ & y^{\prime}=x,\\ & z^{\prime}=y,\end{split} \tag{3}\]
Figure 4: (a) One-parameter bifurcation diagram of \(x\) vs \(B\). The parameters are \(A=-2.269,C=-2.1\). At \(B=-0.58\), we observe a period-5 mode-locked orbit. At \(B=-0.54\), we observe a period-10 mode-locked orbit. When \(B\) is further increased, it undergoes subsequent period-doubling bifurcations leading to chaos. (b) One-parameter bifurcation diagram showing both the saddle (black) and stable (blue) periodic points.
Figure 5: Continuation of the eigenvalues as the parameter \(B\) is varied: (a) for the saddle cycle and (b) for the stable cycle. Three eigenvalues are differentiated by different colours: eigenvalue \(\lambda_{1}\) is denoted by magenta, \(\lambda_{2}\) by red, and \(\lambda_{3}\) by black.
Figure 3: In (a), the doubling bifurcation related to the positive third eigenvalue leading to disjoint loops. In (b), the doubling bifurcation related to negative third eigenvalue and the mode locked periodic orbit lies on the surface of a Möbius strip.
Figure 6: (a) The closed invariant curve formed by the unstable manifolds (in red) of the saddle period-5 periodic orbit at \(B=-0.58\). The stable mode-locked period-5 orbit is denoted by blue triangles and the saddle period-5 orbit is denoted by black squares. (b) The structure of the closed invariant curve for \(B=-0.555\), when the saddle has doubled but the stable cycle has not yet doubled. The zoomed portion shows the connection through the stable fixed point. (c) At \(B=-0.54\), two disjoint loops are formed by the unstable manifolds of two disjoint yet cyclic saddle period-10 mode-locked orbits. The other parameters are \(A=-2.269\) and \(C=-2.1\).
where \(a,b\) are the parameters. Fig. 7(a) shows a one-parameter bifurcation diagram for this system considering \(a\) as the parameter, with \(b\) fixed at \(0.1\). A period-1 orbit bifurcates to a quasiperiodic orbit at \(a\approx 0.78\) as evidenced by the maximal Lyapunov exponent being zero (Fig. 7(b)). A stable period-4 orbit emerges at \(a\approx 1.08\).
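A sketch of how the maximal Lyapunov exponent of map (3) (Fig. 7(b)) can be computed by acting on a tangent vector with the Jacobian and renormalizing at each step; the initial condition and iteration counts are illustrative.

```python
import numpy as np

def henon3(v, a, b):
    x, y, z = v
    return np.array([a - y ** 2 - b * z, x, y])

def henon3_jac(v, a, b):
    _, y, _ = v
    return np.array([[0.0, -2.0 * y, -b],
                     [1.0,  0.0,      0.0],
                     [0.0,  1.0,      0.0]])

def max_lyapunov(a, b, n_transient=1000, n_steps=20000):
    """Maximal Lyapunov exponent via repeated Jacobian action on a unit
    tangent vector, with renormalization at every step."""
    v = np.array([0.1, 0.1, 0.1])
    for _ in range(n_transient):                 # discard transients
        v = henon3(v, a, b)
    u = np.array([1.0, 0.0, 0.0])
    total = 0.0
    for _ in range(n_steps):
        u = henon3_jac(v, a, b) @ u
        norm = np.linalg.norm(u)
        total += np.log(norm)
        u /= norm
        v = henon3(v, a, b)
    return total / n_steps

# e.g. max_lyapunov(1.0, 0.1) should be close to zero on the quasiperiodic branch.
```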
Fig. 8 shows the saddle period-4 orbit with black squares and the stable period-4 orbit with blue triangles. We observe that they indeed form a saddle-node connection forming a single loop. Thus, the orbit existing for \(a=1.2\) is a mode-locked period-4 cycle.
We now investigate what happens to this cycle as the parameter \(a\) is varied. A one-parameter bifurcation diagram obtained using a continuation algorithm for both the saddle and stable cycles, is shown in Fig. 9. It shows that the two orbits bifurcate at different parameter values. There is a range, approximately [1.205, 1.29], where the stable orbit has bifurcated but the saddle has not.
Continuation of the eigenvalues of both the saddle and stable mode-locked periodic orbits as a parameter varies, is presented in Fig. 10. Fig. 10(a) and (b) show that with increase of the parameter, at \(a=1.204\), \(\lambda_{2}\) reaches \(-1\), which leads to the period doubling.
When the stable cycle has already doubled but the saddle one has not (Fig. 11(a)), the unstable manifolds of the saddle points (red line) form a single closed loop structure connecting all the points. Branches emerge from the saddle period-4 cycle (which was earlier the node) to connect the stable period-8 points.
We next consider a parameter value \(a=1.3\), \(b=0.1\), at which the saddle and stable period-eight orbits coexist. The period-eight orbit cyclically visits each of its points. Considering both branches of the unstable manifold, we observe that the closed invariant curve winds around twice and its length has doubled, see Fig. 11(b).
For \(a=1.2,b=0.1\), the eigenvalues of the stable and saddle period-four orbits are given in Table 2. It shows that before the bifurcation, \(\lambda_{1}^{N}\) and \(\lambda_{1}^{S}\) are positive and \(\lambda_{2}^{N}\) and \(\lambda_{2}^{S}\) are negative. As the parameter is varied, \(\lambda_{2}^{N}\) reaches \(-1\) first, followed by \(\lambda_{2}^{S}\). The third eigenvalue is negative. Under this condition, [28] conjectured that the shape of the manifold should be a Möbius strip and a length-doubling bifurcation would occur. Our numerical results support the conjecture.
## V From mode-locked orbits to cyclic invariant closed curves
It is known that when a third frequency is born from a quasiperiodic orbit, it creates a torus in discrete time. In this section we present the mechanisms of generation of a third frequency from a mode-locked periodic orbit. We find that it
\begin{table}
\begin{tabular}{|l|c|c|c|} \hline Type & \(\lambda_{1}\) & \(\lambda_{2}\) & \(\lambda_{3}\) \\ \hline Stable & 0.3550 & -0.7131 & 0.2593 \\ \hline Saddle & 1.3963 & -0.7878 & 0.0597 \\ \hline \end{tabular}
\end{table}
Table 1: The eigenvalues of stable and saddle period-5 cycles for \(B=-0.58\).
Figure 8: Saddle-node connection of stable and saddle mode locked period-4 orbit for \(a=1.2,b=0.1\).
Figure 10: Variation of the eigenvalue of the saddle cycle (a) and the stable cycle (b) with parameter \(a\). The parameter \(b\) is fixed at \(0.1\). The three eigenvalues are shown with different colours: \(\lambda_{1}\) is denoted by magenta, \(\lambda_{2}\) by red, and \(\lambda_{3}\) by black.
Figure 7: In (a), a one-parameter bifurcation diagram of the generalised Hénon map upon varying parameter \(a\). In (b), plot of the maximal Lyapunov exponent in the same parameter range. The other parameter is fixed as \(b=0.1\).
Figure 9: Variation of the saddle (black) and stable (blue) periodic orbits with parameter \(a\), where \(b\) is fixed at \(0.1\).
can result in two types of orbits. The resulting orbit can be a collection of quasiperiodic loops located on a torus in the discrete-time phase space (schematically shown in Fig. 12), and the iterates visit them cyclically. The resulting orbit can also be a mode-locked periodic orbit where the points are connected by cyclic closed invariant curves.
How can such structures be created? Since the birth of a third frequency is caused by a Neimark-Sacker bifurcation, i.e., a pair of complex conjugate eigenvalues exiting the unit circle, the starting point of such a transition has to be a closed invariant curve created out of a saddle-focus connection. If a closed invariant curve is composed of a saddle-node connection as shown in Fig. 2, then, as a parameter changes, a pair of real eigenvalues has to turn complex conjugate, thus creating a saddle-focus connection.
The saddle-focus connection can be of two types. The unstable manifold of the saddle cycle can either connect with the one-dimensional stable manifold associated with the real eigenvalue or with the two-dimensional stable manifold associated with the complex conjugate eigenvalues of the focus. It depends on the rate of convergence along the manifolds. If the 1D manifold associated with the real eigenvalue is the slow one, the unstable manifold connects with it (Fig. 13(a)). An example of this situation was presented in [32]. If the 2D manifold is the slow one, the unstable manifold of the saddle spirals into the node along this manifold (Fig. 13(c)).
The Neimark-Sacker bifurcations occurring in these two types of saddle-focus connection are shown in Fig. 13. In both cases, if there are \(n\) points in the stable cycle, \(n\) cyclic closed invariant curves are created.
In the first case, the repelling focus creates a loop around it and the saddle-focus connection through the 1D manifold remains intact (see Fig. 13(b)). In the second case, the unstable manifolds of the saddle connect with the closed loops (see Fig. 13(d)). Note that in bifurcations of saddle-focus loops also, the saddle and the focus may bifurcate at different parameter values. Fig. 13 shows situations where the focus has bifurcated but the saddle has not.
We now give a few examples of the birth of cyclic closed invariant curves from saddle-focus connections in physical system models.
### 3D Lotka-Volterra model
In this section, we provide an explicit example of the situation schematically depicted in Fig. 13(c) and (d) using the three-dimensional Lotka-Volterra map [5] given by
\[\begin{split} x^{\prime}&=x+Rx(1-x-\alpha y-\beta z ),\\ y^{\prime}&=y+Ry(1-\beta x-y-\alpha z),\\ z^{\prime}&=z+Rz(1-\alpha x-\beta y-z),\end{split} \tag{4}\]
where \(R,\alpha,\beta\) are the parameters of the system.
For \(R=1,\alpha=1,\beta=-0.59\), we observe a stable period-six orbit and a saddle period-six orbit. Considering the one-dimensional unstable manifold of each saddle periodic point, we observe a saddle-node connection in Fig. 14 (a). As \(\beta\) is decreased to \(-0.91\), two eigenvalues of the stable periodic point become complex conjugate, while the third one remains real, all of which have modulus less than one. The saddle cycle has two complex conjugate eigenvalues with modulus less than one and a real eigenvalue greater than one. Computation of the one-dimensional unstable manifolds of each saddle periodic point reveals formation of a saddle-focus connection (see Fig. 14(b)).
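This observation is easy to reproduce by iterating the map (4) directly. The following is a minimal sketch (not the authors' code) using the parameter values from the text (\(R=1,\alpha=1,\beta=-0.59\)); it estimates the period of the attractor by near-recurrence, under the assumption that the arbitrary initial condition lies in the basin of the stable period-six orbit.

```python
import numpy as np

def lv_map(v, R=1.0, alpha=1.0, beta=-0.59):
    """One step of the 3D Lotka-Volterra map (4)."""
    x, y, z = v
    return np.array([
        x + R * x * (1 - x - alpha * y - beta * z),
        y + R * y * (1 - beta * x - y - alpha * z),
        z + R * z * (1 - alpha * x - beta * y - z),
    ])

v = np.array([0.7, 0.72, 0.68])   # arbitrary guess near the interior fixed point
for _ in range(5000):             # discard the transient
    v = lv_map(v)

orbit = [v.copy()]
for _ in range(50):               # record the post-transient orbit
    v = lv_map(v)
    orbit.append(v.copy())

# the period is the smallest k with f^k(v) close to v
for k in range(1, len(orbit)):
    if np.linalg.norm(orbit[k] - orbit[0]) < 1e-6:
        print("estimated period:", k)   # 6 is expected at these parameter values
        break
else:
    print("no recurrence found within tolerance")
```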
As the parameter \(\beta\) is further decreased to \(-1\), we observe that the period-6 stable cycle has undergone a Neimark-Sacker bifurcation and we can observe six cyclic closed invariant curves, see Fig. 14(c).
The variation of the eigenvalues of both the saddle and the stable periodic point with respect to the parameter \(\beta\) is shown in Fig. 15. It shows that two eigenvalues of the stable cycle become complex at \(\beta=-0.646\), and their modulus crosses \(1\) at \(\beta=-0.968\). Subsequently, we see the onset of the six-piece cyclic closed invariant curve.
\begin{table}
\begin{tabular}{|l|c|c|c|} \hline Type & \(\lambda_{1}\) & \(\lambda_{2}\) & \(\lambda_{3}\) \\ \hline Stable & 0.1795 & -0.9813 & -0.0006 \\ \hline Saddle & 1.6217 & -0.6890 & -0.0001 \\ \hline \end{tabular}
\end{table}
Table 2: The eigenvalues of stable and saddle cycles for \(a=1.2,b=0.1\), i.e., before the bifurcation
Figure 11: (a) The situation at \(a=1.25\) and \(b=0.1\), where the stable period-4 cycle has already doubled but the saddle cycle has not. The stable period-8 cycle (shown as blue triangles) has emerged through a flip bifurcation of the period-4 cycle (marked by red triangles, now a saddle), and the saddle periodic orbit is marked by black squares. (b) Saddle-node connection of the stable and saddle mode-locked period-eight orbits for \(a=1.3,b=0.1\). The red curve denotes the unstable manifold, which makes a loop of double length.
Figure 12: Multiple loops located on a torus
### Three-dimensional border collision normal form
Piecewise smooth maps occur in many physical and engineering systems. It has been shown that, in the neighborhood of a border-crossing fixed point, such systems are aptly represented by a piecewise linear 'normal form' map [33]. For three-dimensional piecewise smooth systems, the 3D border collision normal form is given by
\[X^{\prime}=F(X)=\begin{cases}A_{L}X+b\mu,&x\leq 0\\ A_{R}X+b\mu,&x>0,\end{cases} \tag{5}\]
Figure 14: In (a), a saddle-node connection constituted by a saddle period-six orbit (marked by black squares) and a stable period-six orbit (marked by blue triangles) at \(\beta=-0.59\). In (b), a saddle-focus connection constituted by a saddle period-six orbit (marked by black squares) and a stable period-six orbit (marked by blue triangles) at \(\beta=-0.91\). The unstable manifold of each saddle periodic point is shown in red. In (c), six cyclic invariant closed curves are seen at \(\beta=-1\) after the period-6 stable cycle has undergone a Neimark-Sacker bifurcation. The other parameters are \(R=1,\alpha=1\).
Figure 13: Schematic representations of the two mechanisms of creation of multiple loops through Neimark-Sacker bifurcation. The blue triangles represent the foci, the black squares represent the saddles.
Figure 15: The variation of the eigenvalues with respect to the parameter \(\beta\). (a) the modulus of the eigenvalues of the saddle, (b) the imaginary part of the eigenvalues of the saddle, (c) the modulus of the eigenvalues of the node, and (d) imaginary part of the eigenvalues of the node. The three eigenvalues are differentiated by three different colours (magenta, red, black). The other parameters are \(R=1,\alpha=1\).
where \(F:\mathbb{R}^{3}\rightarrow\mathbb{R}^{3},X=[x,y,z]^{T},b=[1,0,0]^{T}\) is a column vector, and \(\mu\in\mathbb{R}\). The matrices \(A_{L},A_{R}\) are defined by
\[A_{L}=\begin{bmatrix}\tau_{L}&1&0\\ -\sigma_{L}&0&1\\ \delta_{L}&0&0\end{bmatrix},\ \ \ \text{and}\ \ A_{R}=\begin{bmatrix}\tau_{R}&1&0\\ -\sigma_{R}&0&1\\ \delta_{R}&0&0\end{bmatrix}. \tag{6}\]
In Fig. 16(a), we observe a period-7 stable orbit and a period-7 saddle orbit, marked by blue triangles and black squares respectively, for \(\delta_{R}=1.4\). A saddle-focus connection is observed. After increasing \(\delta_{R}\) to 1.5, seven cyclic closed invariant curves are observed. Continuation of the eigenvalues with variation of the parameter \(\delta_{R}\) (Fig. 17) shows that, for \(\delta_{R}=1.4\), both the saddle cycle and the stable cycle have two complex conjugate eigenvalues and a real eigenvalue. At \(\delta_{R}=1.455\), the complex conjugate eigenvalues cross the unit circle, and a 7-piece cyclic closed invariant curve is born.
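A minimal sketch of iterating the normal form (5)-(6) is given below. It is not the authors' code: the parameter values are those quoted in the caption of Fig. 16, with \(\delta_{R}=1.5\), at which the text reports seven cyclic closed invariant curves, and the initial condition is an arbitrary guess assumed to lie in the basin of that attractor.

```python
import numpy as np

tau_L, sigma_L, delta_L = 0.74, 0.5, 0.73
tau_R, sigma_R, delta_R = -0.5, 1.1, 1.5
mu = 0.01
b = np.array([1.0, 0.0, 0.0])

A_L = np.array([[tau_L, 1, 0], [-sigma_L, 0, 1], [delta_L, 0, 0]])
A_R = np.array([[tau_R, 1, 0], [-sigma_R, 0, 1], [delta_R, 0, 0]])

def F(X):
    """One step of the 3D border collision normal form (5)."""
    A = A_L if X[0] <= 0 else A_R      # branch selected by the sign of x
    return A @ X + b * mu

X = np.array([0.01, 0.0, 0.0])
for _ in range(2000):                  # discard the transient
    X = F(X)
points = []
for _ in range(20000):                 # points expected to fill the seven closed curves
    X = F(X)
    points.append(X.copy())
points = np.array(points)              # e.g. plot points[:, 0] against points[:, 1]
```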
## VI Birth of three-frequency resonant torus
Let us consider the globally coupled three-dimensional map (7). A similar two-dimensional map has been studied in [34]. The dynamical equations of the map are given by
\[\begin{split} x^{\prime}&=f(x)+p\epsilon\Big{(}f(y)-f( x)\Big{)},\\ y^{\prime}&=f(y)+(1-p)\epsilon\Big{(}f(x)-f(y)\Big{)},\\ z^{\prime}&=x,\end{split} \tag{7}\]
where
\[f(x)=x(1-x)(ax^{2}+(b^{2}-da)x+c),\]
and \(p,\epsilon,a,b,c,d\) are parameters. A one-parameter bifurcation diagram with respect to parameter \(a\) is presented in Fig. 18. For \(a<27.1107\), two disjoint cyclic quasiperiodic closed invariant curves exist. At \(a=27.1107\), a period-10 orbit is born via a saddle-node bifurcation. Fig. 19(a) shows that the period-10 cycle occurs on two disjoint closed invariant curves connected by the one-dimensional unstable manifolds of a period-10 saddle cycle. When \(a=27.408\), we observe that each of the stable periodic points undergoes a supercritical Neimark-Sacker bifurcation, resulting in the formation of ten disjoint cyclic closed loops, see Fig. 19(b).
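For completeness, the coupled map (7) can be iterated directly, as in the following minimal sketch. The fixed parameters are those of Fig. 18 (\(p=0.5,\epsilon=-1.4,b=1.688,c=3.5,d=0.85\)), \(a=27.408\) is the value at which the text reports ten disjoint cyclic closed loops, and the initial condition is an arbitrary guess assumed to lie in the basin of that attractor.

```python
import numpy as np

p, eps, a, b, c, d = 0.5, -1.4, 27.408, 1.688, 3.5, 0.85

def f(x):
    """The local map used in (7)."""
    return x * (1 - x) * (a * x**2 + (b**2 - d * a) * x + c)

def coupled(v):
    """One step of the globally coupled three-dimensional map (7)."""
    x, y, z = v
    return np.array([f(x) + p * eps * (f(y) - f(x)),
                     f(y) + (1 - p) * eps * (f(x) - f(y)),
                     x])

v = np.array([0.3, 0.4, 0.3])
for _ in range(3000):              # discard the transient
    v = coupled(v)
points = []
for _ in range(20000):
    v = coupled(v)
    points.append(v.copy())
points = np.array(points)          # e.g. plot points[:, 0] against points[:, 1]
```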
The system develops two coexisting stable period-20 orbits for \(a\in(27.461,27.6)\) (this is not shown to avoid cluttering the figure). At \(a=27.46\), ten disjoint cyclic closed loops are formed via a saddle-node bifurcation of a pair of stable and saddle period-20 orbits. Fig. 20 shows the
Figure 16: In (a), a saddle-focus connection at \(\delta_{R}=1.4\) constituted by period-seven saddle orbit (marked by black squares) and stable period-seven orbit (marked by blue triangles). The unstable manifold of each saddle periodic point is shown in red. In (b), seven cyclic invariant closed curves are seen at \(\delta_{R}=1.5\) after each stable periodic point has undergone a Neimark-Sacker bifurcation. The other parameters are \(\tau_{R}=-0.5,\tau_{L}=0.74,\delta_{L}=0.73,\mu=0.01,\sigma_{L}=0.5,\sigma_{R} =1.1\).
Figure 17: The variation of the eigenvalues with respect to the parameter \(\delta_{R}\). In (a), the modulus of the eigenvalues of the saddle, (b) the imaginary part of the eigenvalues of the saddle, (c) the modulus of the eigenvalues of the node, and (d) imaginary part of the eigenvalues of the node. The three eigenvalues are differentiated by three different colours (magenta, red, black). The other parameters are set as \(\tau_{R}=-0.5,\tau_{L}=0.74,\delta_{L}=0.73,\mu=0.01,\sigma_{L}=0.5,\sigma_{R} =1.1\).
Figure 18: A one-parameter bifurcation diagram of (7) with respect to parameter \(a\). The other parameters are fixed as \(p=0.5,\epsilon=-1.4,b=1.688,c=3.5,d=0.85\). Three different parameter values are chosen as shown by dotted lines.
bifurcation diagram in this parameter range. For clarity, we show the occurrence of saddle-node bifurcation for the cycles lying on a single loop (two stable period-2 and two saddle period-2 cycles). The concerned loop is shown in Fig. 19(d).
Similar saddle-node bifurcations occur in all 10 loops. At \(a=27.5210\), there are 10 closed invariant curves formed through saddle-node connections. These are, in turn, located on two loops. The iterates toggle between the two bigger loops and move cyclically among the smaller loops. The resulting phase space is shown in Fig. 19(c). Note that the saddle and stable periodic orbits form cyclic disjoint loops connected via their one-dimensional unstable manifold. A zoomed-in version of one of the cyclic closed loops is shown in Fig. 19(d).
## VII Collision of saddle-node and saddle-saddle connections
It has been reported in [4] that there can be situations where a stable torus and an unstable torus collide and disappear (see Fig. 1(d)). If the parameter is varied in the opposite direction, one can see the birth of a torus out of nothing. Such bifurcations involving ergodic tori have been reported in power electronic systems.
The question is, can such a bifurcation involve resonant tori, i.e., mode-locked periodic orbits? In Fig. 21, we schematically show how such a collision between saddle-node connection \(\Gamma_{1}\) and saddle-saddle connection \(\Gamma_{2}\) can take place. The three eigendirections are shown in different colours and arrows.
As the parameter varies, \(\Gamma_{1}\) and \(\Gamma_{2}\) approach each other and may collide and annihilate each other. If the parameter varies in the opposite direction, there is a sudden appearance of two closed invariant curves, one with a saddle-node connection and the other with a saddle-saddle connection. This is a topologically feasible scenario, but we have not yet found any physical system that exhibits this phenomenon.
## VIII Conclusions
In this work, we have explored four scenarios related to the bifurcations of mode-locked periodic orbits and their associated closed invariant curves:
1. The birth of two disjoint closed invariant curves
2. The doubling of the length of the closed invariant curve
3. The birth of a few pieces of cyclic closed invariant curves
Figure 21: Collision of saddle-node connection \(\Gamma_{1}\) with the saddle-saddle connection \(\Gamma_{2}\). The saddle periodic orbits are denoted by black squares, another saddle periodic orbit with black diamond, and the stable periodic orbits are denoted by blue triangles.
Figure 19: In (a), two disjoint cyclic closed invariant loops are observed comprising a stable period-10 orbit and a saddle period-10 orbit at \(a=27.1107\). In (b), ten disjoint pieces of cyclic closed invariant curves are observed at \(a=27.408\). In (c), ten disjoint pieces of closed invariant loops are observed comprising two stable period-20 orbits and two saddle period-20 orbits at \(a=27.5210\). In (d), a magnified version of a single closed loop is shown.
Figure 20: Continuation of periodic points lying on a single cyclic closed loop with respect to parameter \(a\). Saddle-node bifurcation of a pair of two stable and two saddle periodic points lying on a single loop at \(a=27.46\). Two stable and saddle points are marked in blue and red respectively. The other two coexisting stable and saddle points are marked in magenta and black respectively. The other parameters are fixed as \(p=0.5,\varepsilon=-1.4,b=1.688,c=3.5,d=0.85\).
4. Merger and disappearance of two closed invariant curves, one involving a saddle-node connection and the other involving a saddle-saddle connection.
We have provided prototypical examples of three-dimensional maps in which bifurcation phenomena 1, 2, and 3 above can be observed. Through numerical simulations, we have validated the conjectures proposed in [28]. In addition, we have shown the interesting structures of the invariant manifolds when the node has bifurcated and the saddle has not, and vice versa.
We have explored the transitions from mode-locked periodic orbits to the formation of cyclic closed invariant curves. There can be two mechanisms of the creation of an \(n\)-piece closed invariant curve out of a period-\(n\) mode locked orbit. We have presented examples of such bifurcations from a saddle-focus connection.
The situation depicted in Fig. 21 is mathematically possible, where two closed invariant curves--one with a saddle-node connection and the other with a saddle-saddle connection--merge and disappear. We have not yet found physical examples of such bifurcations, which remains an open problem.
## Acknowledgements
SSM expresses his thanks to Dr. David J.W. Simpson and Prof. Hil G. E. Meijer for many insightful suggestions in the course of this work. SSM also acknowledges the IISER Kolkata post-doctoral fellowship for financial support. SB acknowledges the J C Bose National Fellowship provided by SERB, Government of India, Grant No. JBR/2020/000049.
## Data availability
The data that support the findings of this study are available from the corresponding author upon request.
# Exploiting Uncertainty for Querying Inconsistent Description Logics Knowledge Bases
###### Abstract.
The necessity to manage inconsistency in Description Logics Knowledge Bases (KBs) has come to the fore with the increasing importance gained by the Semantic Web, where information comes from different sources that constantly change their content and may contain contradictory descriptions when considered either alone or together. Classical reasoning algorithms do not handle inconsistent KBs, forcing the debugging of the KB in order to remove the inconsistency. In this paper, we exploit an existing probabilistic semantics called DISPONTE to overcome this problem and allow queries also in case of inconsistent KBs. We implemented our approach in the reasoners TRILL and BUNDLE and empirically tested the validity of our proposal. Moreover, we formally compare the presented approach to that of the repair semantics, one of the most established semantics when considering DL reasoning tasks.
Key words and phrases: Inconsistent Knowledge Base, Probabilistic Reasoning, OWL Reasoner
## 1. Introduction
In the Semantic Web, one of the main goals is to create a Knowledge Base (KB) that is as comprehensive as possible, by connecting information published on the Web and exploiting Description Logic (DL) languages. This can be done by means of the linked data cloud, where KBs from different domains are linked together to create the data cloud. One possible problem with this idea is that all the KBs that are connected have been developed by different authors considering different points of view. Therefore, it is extremely easy to find contradictions.
A classic example is given by the _flying penguin_ problem, where a KB defines that all birds fly and that penguins are birds, but they are not able to fly. This is clearly a contradiction and so, if the KB asserts that _pingu_ is a penguin, the KB is inconsistent because _pingu_ belongs to both the concept "fly" and to its complement. With standard reasoning techniques, when the KB is inconsistent, inference is trivial, as anything is entailed by
an inconsistent KB. For this reason, systems implementing such techniques do not allow the execution of queries when the KB is inconsistent or allow only the identification of axioms causing the inconsistency. In non-monotonic reasoning, this problem has been solved by considering a unique KB where non-monotonic knowledge representation is adopted, often with negative literals for coping with exceptions or abnormalities, such as the penguins in the example above.
Nonetheless, when different KBs are merged, it is easy to obtain an inconsistent KB. Approaches to managing inconsistent pieces of information include the definition of a Four-Valued Logic, where classical implication is accompanied by two other types of implication of greater strength [15]; the definition of different types of negations [16]; or the use of the so-called _repairs_, consistent subsets of axioms of the KB built when the query is asked. Given the set of repairs of a KB, there are different semantics that define the conditions the query must fulfil to be true [17, 18], among them, the most used are the Brave [1], AR [17], and IAR [17] semantics.
Despite the number of works on managing inconsistent KBs, very few proposals consider the fact that information is usually uncertain in real world domains. To fill this gap, we exploit the DISPONTE semantics [14, 15], where the axioms of the KB are associated with probability values defining the degree of belief in the presence of the axiom in the KB. Query answering requires identifying (non-probabilistic) subsets of axioms (named _worlds_) where the query is true, in order to compute, via marginalization, the probability of the query. Defining the probability of the presence of an axiom may come naturally to a domain expert. Alternatively, one can associate a probability value with each axiom depending on how much one trusts the source of the information, giving more confidence, in the example above, to information coming from ornithologists than to that coming from, e.g., Wikipedia. Moreover, the process of computing the probability of a query is somewhat similar to the construction of the repairs, as it needs to collect consistent subsets of the axioms of the KB that make the query true. However, the results are different because we are able to return a value telling how much we can trust the truth of the query, while, with repairs, one only knows whether the query is true under a cautious/brave semantics.
One interesting feature of DISPONTE is that adding the probability does not change the syntax of the underlying logic, as it exploits annotations that are built into Semantic Web languages. This avoids limitations on the expressivity of the languages that are used to define the KBs and compatibility problems with other KBs. Moreover, differently from [15], no pre-processing step is needed, which may be expensive for large KBs. We also present an extension, implemented in the reasoners BUNDLE and TRILL, able to cope with inconsistent KBs. BUNDLE [14, 16] and TRILL [16, 17, 18] are two probabilistic reasoners that answer queries under DISPONTE; the first is implemented in Java, while the second in Prolog. In particular, BUNDLE encapsulates several reasoners, such as the non-probabilistic reasoners Pellet [20] and Hermit [15], or the probabilistic reasoner TRILL, which can be used either inside BUNDLE or stand-alone. As we will show, the presented extension can be easily implemented in other reasoners to allow them to answer queries w.r.t. inconsistent KBs.
The contributions of the paper are twofold. First, they show how an existing probabilistic semantics can be used to reason w.r.t. inconsistent DL KBs. The paper discusses the changes that must be implemented to allow a reasoner applying the tableau algorithm to cope with this semantics and answer queries w.r.t. possibly inconsistent KBs. Moreover, despite the number of proposals to cope with inconsistent KBs, there are few implemented
systems. Second, the paper compares the presented approach to that of the repair semantics, one of the most established semantics when considering DL reasoning tasks. We show that our approach can be easily adapted to answer queries under different repair semantics.
The paper is organized as follows. Section 2 introduces background information about Description Logics, their extension to probability by means of DISPONTE, and recalls the most common Repair semantics. Section 3 discusses the proposed probabilistic semantics and the extensions needed by a reasoner to cope with it. Section 4 illustrates the differences of our semantics with the repair semantics, considering the Brave, AR and IAR semantics. Section 5 discusses related work. Section 6 shows the results of some tests of two prototype reasoners implementing the extension discussed in Section 3.1. The implementation of two different prototypes is also intended to demonstrate the ease with which this semantics can be adopted. Finally, Section 7 concludes the paper.
## 2. Preliminaries
In this section we present the background necessary for the next sections. In the first sub-section, we briefly describe the \(\mathcal{ALC}\) DL. Then we move on to introduce the DISPONTE semantics, describing its syntax, semantics and how to use it to calculate the probability of queries. Finally, we provide the definitions of the repair semantics.
### Description Logics
DLs are a family of logic-based knowledge representation formalisms which are of particular interest for representing ontologies in the Semantic Web. For a good introduction to DLs we refer to [1].
The different DL languages represent the domain by means of individuals, concepts (sets of individuals of the domain), and roles (sets of pairs of individuals). They differ in how concepts and roles can be combined for defining complex concepts. We briefly review the \(\mathcal{ALC}\) DL, however, the results presented in this paper can be applied to any DL.
Let \(N_{I}\), \(N_{R}\), and \(N_{C}\) be countably infinite sets of individual, role and concept names. _Concepts_\(C\) are defined as \(C::=A\mid\bot\mid\top\mid(C\sqcap C)\mid(C\sqcup C)\mid\neg C\mid\exists R.C \mid\forall R.C\) where \(A\in N_{C}\), \(R\in N_{R}\).
Given \(C\) and \(D\) concepts, \(R\in N_{R}\), and \(a,b\in N_{I}\), a _knowledge base_ (KB) consists of a finite set of _general concept inclusion axioms_ (GCIs) \(C\sqsubseteq D\), called _TBox_, and a finite set of _concept assertions_\(a:C\) and _role assertions_\((a,b):R\), called _ABox_. Thus, given an ABox \(\mathcal{A}\) and a TBox \(\mathcal{T}\), a knowledge base is \(\mathcal{K}=(\mathcal{A},\mathcal{T})\).
The semantics of DLs is formally defined using interpretations \(\mathcal{I}=(\Delta^{\mathcal{I}},\cdot^{\mathcal{I}})\), where \(\Delta^{\mathcal{I}}\) is a non-empty _domain_, and \(\cdot^{\mathcal{I}}\) is an _interpretation function_ that maps each \(a\in N_{I}\) to an element of \(\Delta^{\mathcal{I}}\), each \(C\in N_{C}\) to a subset of \(\Delta^{\mathcal{I}}\), and each \(R\in N_{R}\) to a subset of \(\Delta^{\mathcal{I}}\times\Delta^{\mathcal{I}}\). The mapping \(\cdot^{\mathcal{I}}\) is extended to complex concepts as follows (where \(R^{\mathcal{I}}(x)=\{y|(x,y)\in R^{\mathcal{I}}\}\)):
\[\begin{array}{rcl}\top^{\mathcal{I}}&=&\Delta^{\mathcal{I}}\\ \bot^{\mathcal{I}}&=&\emptyset\\ (\neg C)^{\mathcal{I}}&=&\Delta^{\mathcal{I}}\setminus C^{\mathcal{I}}\\ (C_{1}\sqcup C_{2})^{\mathcal{I}}&=&C_{1}^{\mathcal{I}}\cup C_{2}^{\mathcal{I }}\\ (C_{1}\sqcap C_{2})^{\mathcal{I}}&=&C_{1}^{\mathcal{I}}\cap C_{2}^{\mathcal{I }}\\ (\exists R.C)^{\mathcal{I}}&=&\{x\in\Delta^{\mathcal{I}}|R^{\mathcal{I}}(x) \cap C^{\mathcal{I}}\neq\emptyset\}\\ (\forall R.C)^{\mathcal{I}}&=&\{x\in\Delta^{\mathcal{I}}|R^{\mathcal{I}}(x) \subseteq C^{\mathcal{I}}\}\end{array}\]
A query \(Q\) over a KB \(\mathcal{K}\) is an axiom for which we want to test the entailment from the KB, written as \(\mathcal{K}\models Q\). The entailment test may be reduced to checking the unsatisfiability of
a concept in the KB, i.e., the emptiness of the concept, or the inconsistency of a KB. For example, the entailment of the axiom \(C\sqsubseteq D\) may be tested by checking the unsatisfiability of the concept \(C\sqcap\neg D\) while the entailment of the axiom \(a:C\) may be tested by checking the inconsistency of the KB with the addition of \(a:\neg C\).
### Probabilistic Description Logics
DISPONTE [14, 15] is based on the distribution semantics [11] and allows the user to label some axioms with real values \(p\in[0,1]\) representing probabilities. In DISPONTE, a probabilistic knowledge base \(\mathcal{K}\) is a set of certain axioms and probabilistic axioms. A _certain axiom_ takes the form of a regular DL axiom. A _probabilistic axiom_ takes the form \(p::E\), where \(p\) is a probability and \(E\) a regular axiom. It means that we have degree of belief \(p\) in the presence of \(E\) in the KB.
Given a query \(Q\), DISPONTE computes its probability by constructing _worlds_, i.e., non-probabilistic KBs. To build a world, we first need to define an _atomic choice_, which is a couple \((E_{i},k)\) where \(E_{i}\) is the \(i\)th probabilistic axiom and \(k\) is either \(1\) or \(0\) and indicates whether \(E_{i}\) belongs to the world or not. A set of atomic choices \(\kappa\) is defined as consistent and called _composite choice_ when it does not contain two different atomic choices for the same \(E_{i}\). If a composite choice contains an atomic choice for every probabilistic axiom of the KB, it is called a _selection_. A selection \(\sigma\) identifies a _world_\(w_{\sigma}\) s.t. \(w_{\sigma}=\mathcal{C}\cup\{E_{i}|(E_{i},1)\in\sigma\}\), where \(\mathcal{C}\) is the set of certain axioms. Every world has a probability \(P(w_{\sigma})=P(\sigma)=\prod_{(E_{i},1)\in\sigma}p_{i}\times\prod_{(E_{i},0) \in\sigma}(1-p_{i})\) where \(p_{i}\) is the probability associated with axiom \(E_{i}\), because the presence of the axioms is considered pair-wise independent. \(P(w_{\sigma})\) is a probability distribution over worlds.
Given a query \(Q\) and the set of all worlds \(\mathcal{W}_{\mathcal{K}}\), the probability of \(Q\) is the sum of the probabilities of the worlds in which the query is true [14]:
\[P(Q)=\sum_{w\in\mathcal{W}_{\mathcal{K}}:w\models Q}P(w)\]
**Example 2.1** (Flying Penguins - 1).: Consider the following KB:
\[(1)\ 0.9::\textit{Penguin}\sqsubseteq\textit{Bird}\] \[(2)\ 0.9::\textit{Penguin}\sqsubseteq\neg\textit{Fly}\] \[(3)\ 0.6::\textit{pingu}:\textit{Penguin}\]
The first two axioms state that penguins are birds and that penguins do not fly, both with probability \(0.9\). The third states that _pingu_, an individual, is a penguin with probability \(0.6\). This KB is consistent and has \(8\) possible worlds because there are \(3\) probabilistic axioms, and each of them may be contained in a world or not. Thus, there are \(2^{3}\) possible worlds, which are all the possible combinations given by the probabilistic axioms. Let us consider the query \(Q=\textit{pingu}:\textit{Bird}\). This query is true in two worlds, those containing the first and third axioms. The probability is \(P(Q)=0.9\cdot 0.9\cdot 0.6+0.9\cdot 0.1\cdot 0.6=0.54\).
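The computation in Example 2.1 can be replayed by brute force. The following is a minimal sketch, not the TRILL or BUNDLE implementation: it enumerates the \(2^{3}\) worlds and sums the probabilities of those entailing the query. The entailment test is hard-coded for this toy KB (the query _pingu_ : _Bird_ holds exactly when axioms (1) and (3) are both present), whereas a real reasoner would call a DL engine.

```python
from itertools import product

prob = {1: 0.9, 2: 0.9, 3: 0.6}   # probabilistic axioms (1)-(3) of Example 2.1

def entails_query(world):
    # hard-coded for this toy KB: pingu:Bird holds iff (1) and (3) are present
    return 1 in world and 3 in world

p_q = 0.0
for bits in product([0, 1], repeat=3):
    world = {i + 1 for i, bit in enumerate(bits) if bit}
    p_world = 1.0
    for i in prob:
        p_world *= prob[i] if i in world else 1 - prob[i]
    if entails_query(world):
        p_q += p_world

print(p_q)   # 0.54, matching the value computed in Example 2.1
```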
However, computing the probability of queries by building all the worlds is infeasible because their number is exponential in the number of probabilistic axioms in the KB. To try to circumvent this problem, it is possible to resort to classical inference algorithms for collecting justifications for the query, which are usually fewer than the worlds in the case of large KBs, and compute the probability from them. The problem of finding justifications for a query has been investigated by various authors [1, 1, 2, 3, 4, 5].
A _justification_ for a query \(Q\) is a consistent inclusion-minimal subset \(\mathcal{E}\) of logical axioms of a KB \(\mathcal{K}\) such that \(\mathcal{E}\models Q\). On the other hand, a justification for the inconsistency of a KB is an inclusion-minimal subset \(\mathcal{E}\) of logical axioms of a KB \(\mathcal{K}\) such that \(\mathcal{E}\) is inconsistent. It can be seen as a justification for the query \(\top\sqsubseteq\perp\). all-just\((Q,\mathcal{K})\) indicates the set of all justifications for the query \(Q\) in the KB \(\mathcal{K}\).
Sometimes, the term incoherent KB is used when the KB contains at least one unsatisfiable concept, i.e., a concept that cannot have individuals. Incoherence is a type of inconsistency occurring in the TBox of a KB. To simplify the notation, in this paper we will use the term inconsistency also to indicate incoherence. Note that, if the KB is consistent, there is no justification for the inconsistency. On the other hand, if the KB is inconsistent there will be at least one justification \(\mathcal{E}\) for the inconsistency.
A set \(\mathcal{J}\) of justifications defines a set of worlds \(\mathcal{W}_{\mathcal{J}}=\{w\mid j\in\mathcal{J},j\subseteq w\}\). \(\mathcal{J}\) is called _covering_ for \(Q\) if \(\mathcal{W}_{\mathcal{J}}\) is equal to the set of all the worlds in which \(Q\) succeeds, i.e., if for each \(w\), \(w\models Q\leftrightarrow w\in\mathcal{W}_{\mathcal{J}}\); or _covering_ for the inconsistency of \(\mathcal{K}\) if it identifies all the inconsistent worlds.
**Example 2.2** (Flying Penguins - 2).: Let us consider the KB of Example 2.1 and the query \(Q=\mathit{pingu}:\mathit{Bird}\). This query has one justification, i.e., \(\{(1),(3)\}\), which is also covering because it defines the two worlds where the query holds.
An effective way of computing the probability of a query from a covering set of justifications consists in compiling it into a Binary Decision Diagram (BDD). A BDD is a rooted graph used to represent a function of Boolean variables, with one level for each variable. Each node of the graph has two children corresponding either to the 1 value or the 0 value of the variable associated with the node. Its leaves are either 0 or 1. BDDs are used to represent the Disjunctive Normal Form (DNF) Boolean formula \(f_{\textsc{all-JUST}(Q,\mathcal{K})}(\mathbf{X})\) built from all-just\((Q,\mathcal{K})\).
**Definition 2.3** (Boolean formula of a justification).: Given a justification \(\mathcal{E}\), and a set \(\mathbf{X}=\{X_{i}\mid p_{i}::E_{i}\in\mathcal{K}\}\) of independent Boolean random variables associated with probabilistic axioms with \(P(X_{i}=1)=p_{i}\), where \(p_{i}\) is the probability of axiom \(E_{i}\), the Boolean formula of a justification is \(f_{\mathcal{E}}(\mathbf{X})=\bigwedge_{(E_{i}\in\mathcal{E})}X_{i}\).
**Definition 2.4** (Boolean formula of a set of justifications).: Given a set of justifications \(\mathcal{J}\), and a set \(\mathbf{X}=\{X_{i}\mid p_{i}::E_{i}\in\mathcal{K}\}\) of independent Boolean random variables associated with probabilistic axioms with \(P(X_{i}=1)=p_{i}\), where \(p_{i}\) is the probability of axiom \(E_{i}\), the Boolean formula of the set of justifications is \(f_{\mathcal{J}}(\mathbf{X})=\bigvee_{\mathcal{E}\in\mathcal{J}}f_{\mathcal{E} }(\mathbf{X})=\bigvee_{\mathcal{E}\in\mathcal{J}}\bigwedge_{(E_{i}\in\mathcal{ E})}X_{i}\).
From Definition 2.4, given a query \(Q\) and the set all-just\((Q,\mathcal{K})\) of all the justifications for \(Q\) in the KB \(\mathcal{K}\), the formula \(f_{\textsc{all-JUST}(Q,\mathcal{K})}(\mathbf{X})=\bigvee_{\phi\in\textsc{all- JUST}(Q,\mathcal{K})}\bigwedge_{(E_{i}\in\phi)}X_{i}\). The probability that \(f_{\textsc{all-JUST}(Q,\mathcal{K})}(\mathbf{X})\) takes value 1 gives the probability of \(Q\)[12, 13]. The same applies also for inconsistency, where _Incons_ is the query \(\top\sqsubseteq\perp\), so \(f_{\textsc{all-JUST}(\mathit{Incons},\mathcal{K})}(\mathbf{X})\) is the DNF Boolean formula representing all-just\((\mathit{Incons},\mathcal{K})\). The probability that \(f_{\textsc{all-JUST}(\mathit{Incons},\mathcal{K})}(\mathbf{X})\) takes value 1 gives the probability of \(\mathcal{K}\) of being inconsistent. This can be seen as a measure of the inconsistency of the KB.
In principle, to compute the probability we could resort to different possible languages, such as Sentential Decision Diagrams (SDD) or Deterministic Decomposable Negation Normal Form (d-DNNF). We use BDDs because packages for compiling formulas into BDDs are extremely optimized and can manage BDDs of very large size (see e.g. [1, 1, 11, 12]) while performing sometimes better than SDD packages [16].
Given the BDD, we can use function Probability described by Kimmig et al. [KDD\({}^{+}\)11] to compute the probability.
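As a lightweight stand-in for the BDD step, the probability that the DNF formula of a covering set of justifications evaluates to 1 can also be computed directly by inclusion-exclusion over the justifications, since the variables are independent and occur only positively. The sketch below (an illustration, not the reasoners' implementation) applies this to the single covering justification of Example 2.2; for large sets of justifications this enumeration is impractical, which is why BDDs are used.

```python
from itertools import combinations

def prob_dnf(justifications, prob):
    """justifications: iterable of sets of probabilistic axiom ids;
    prob: axiom id -> probability.  Returns P(f_J = 1) by inclusion-exclusion."""
    justs = [frozenset(j) for j in justifications]
    total = 0.0
    for r in range(1, len(justs) + 1):
        for subset in combinations(justs, r):
            union = frozenset().union(*subset)
            term = 1.0
            for ax in union:
                term *= prob[ax]
            total += term if r % 2 == 1 else -term
    return total

# Example 2.2: the single covering justification {(1), (3)} for pingu:Bird
print(prob_dnf([{1, 3}], {1: 0.9, 3: 0.6}))   # 0.54
```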
### Repair Semantics
In this section we briefly recall the main definitions and semantics of repairs.
**Definition 2.5** (Repair).: A repair \(\mathcal{R}\) of a KB \(\mathcal{K}\) is an inclusion-maximal subset of the ABox that is consistent together with the TBox.
Given a query \(Q\) and a KB \(\mathcal{K}=(\mathcal{A},\mathcal{T})\), where \(\mathcal{A}\) is the ABox and \(\mathcal{T}\) is the TBox, we can define the three main semantics for repairs as follows.
**Definition 2.6** (Repair Semantics).:
**Brave:**: A query \(Q\) is true over a KB \(\mathcal{K}\) under the Brave semantics, written \(\mathcal{K}\models_{\mathit{Brave}}Q\), if \((\mathcal{R},\mathcal{T})\models Q\) for at least one repair \(\mathcal{R}\) of \(\mathcal{A}\)[1].
**AR:**: A query \(Q\) is true over a KB \(\mathcal{K}\) under the AR semantics, written \(\mathcal{K}\models_{\mathit{AR}}Q\), if \((\mathcal{R},\mathcal{T})\models Q\) for every repair \(\mathcal{R}\) of \(\mathcal{A}\)[1].
**IAR:**: A query \(Q\) is true over a KB \(\mathcal{K}\) under the IAR semantics, written \(\mathcal{K}\models_{\mathit{IAR}}Q\), if \((\mathcal{D},\mathcal{T})\models Q\) where \(\mathcal{D}=\bigcap_{\mathcal{R}\in Rep(\mathcal{K})}\mathcal{R}\) and \(Rep(\mathcal{K})\) is the set of all the repairs for \(\mathcal{K}\)[1].
**Example 2.7** (University employee positions).: Consider the following KB \(\mathcal{K}=(\mathcal{A},\mathcal{T})\), where \(\mathcal{T}\) is:
\[\begin{split}&(1)\ \mathit{Professor}\sqcap\mathit{Tutor}\sqsubseteq \mathit{Lecturer}\\ &(2)\ \mathit{Person}\sqcap\mathit{Professor}\sqsubseteq \mathit{PhD}\\ &(3)\ \mathit{Professor}\sqcup\mathit{Tutor}\sqsubseteq \mathit{UniversityEmployee}\\ &(4)\ \mathit{Professor}\sqsubseteq\neg\mathit{Tutor}\end{split}\]
and \(\mathcal{A}\) is:
\[\begin{split}&(5)\ \mathit{alice}:\mathit{Person}\\ &(6)\ \mathit{alice}:\mathit{Professor}\\ &(7)\ \mathit{alice}:\mathit{Tutor}\end{split}\]
The KB states that a professor which is also a tutor is a lecturer (1), that a person who is a professor is a PhD (2), and that professors and tutors are university employees (3). A professor cannot be a tutor (4). Finally, Alice is a person (5), a professor (6) and a tutor (7).
It is easy to see that the ABox of this KB is inconsistent w.r.t. the TBox. This KB has two repairs: (I) \(\mathcal{R}_{I}=\mathcal{A}\setminus\{(6)\}\) and (II) \(\mathcal{R}_{II}=\mathcal{A}\setminus\{(7)\}\). The query \(Q_{1}=\mathit{alice}:\mathit{Lecturer}\) is false under the three semantics because it is impossible to find a repair where Alice is both a professor and a tutor. The query \(Q_{2}=\mathit{alice}:\mathit{PhD}\) is true under the Brave semantics because it is true in \(\mathcal{R}_{II}\). The query \(Q_{3}=\mathit{alice}:\mathit{UniversityEmployee}\) is true under the AR semantics because it is true in every repair. Finally, the query \(Q_{4}=\mathit{alice}:\mathit{Person}\) is true under the IAR semantics because it is true in the intersection of \(\mathcal{R}_{I}\) and \(\mathcal{R}_{II}\), that contains only axiom (5).
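The repairs of Example 2.7 can be recovered mechanically from its conflicts. The sketch below hard-codes the single conflict \(\{(6),(7)\}\) (in general, conflicts are obtained from the ABox parts of the justifications for the inconsistency) and selects the inclusion-maximal subsets of the ABox that contain no conflict.

```python
from itertools import combinations

abox = {5, 6, 7}        # (5) alice:Person, (6) alice:Professor, (7) alice:Tutor
conflicts = [{6, 7}]    # Professor and Tutor are disjoint by TBox axiom (4)

def is_consistent(subset):
    # a subset of the ABox is TBox-consistent iff it contains no conflict entirely
    return not any(c <= subset for c in conflicts)

candidates = [set(s) for r in range(len(abox) + 1)
              for s in combinations(sorted(abox), r) if is_consistent(set(s))]
repairs = [s for s in candidates
           if not any(s < t for t in candidates)]   # keep the maximal ones only
print(repairs)          # [{5, 6}, {5, 7}], i.e. the repairs R_II and R_I
```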
## 3. Querying Inconsistent Knowledge Bases
We can use the definitions from Section 2.2 to define the probability of a query \(Q\) given a (probabilistic) possibly inconsistent KB \(\mathcal{K}\), with \(Q\) an axiom. In Section 4, we will compare our proposal with the repair semantics [1]. We use a probability value to mark which axioms may be incorrect. Under this semantics, given a KB \(\mathcal{K}\) and a query \(Q\), the probability of \(Q\) is:
\[P_{C}(Q)=P(Q|\mathit{Cons})=\frac{P(Q,\mathit{Cons})}{P(\mathit{Cons})}\]
Here, \(P(\mathit{Cons})\) is the probability that the KB is consistent, i.e., the probability of the formula \(\neg f_{\textsc{ALL-JUST}(\mathit{Incons},\mathcal{K})}\), while \(P(Q,\mathit{Cons})\) is the probability of the formula \(f_{\textsc{ALL-JUST}(Q,\mathcal{K})}\land\neg f_{\textsc{ALL-JUST}(\mathit{Incons},\mathcal{K})}\). This is equivalent to finding all the _consistent worlds_ and checking whether the query holds in each. The final probability is the probability of the query within the consistent worlds divided by the probability of the consistency of the KB.
**Example 3.1** (Flying Penguins - 3).: Consider the query \(Q=\mathit{pingu}:\neg\mathit{Fly}\) and the following KB (slightly different from that of Ex. 2.1):
\[(1)\,0.9 ::\mathit{Bird}\sqsubseteq\mathit{Fly} (2)\,\mathit{Penguin}\sqsubseteq\mathit{Bird}\] \[(3)\,0.9 ::\mathit{Penguin}\sqsubseteq\neg\mathit{Fly} (4)\,\mathit{pingu}:\mathit{Penguin}\]
In this case we have four different worlds:
\[w_{1}= \{(1),(2),(3),(4)\} w_{2}= \{(1),(2),(4)\}\] \[w_{3}= \{(2),(3),(4)\} w_{4}= \{(2),(4)\}\]
World \(w_{1}\) is inconsistent, therefore it does not contribute to the probability of \((Q,\mathit{Cons})\). Among the other worlds, the query \(Q\) is true only in \(w_{3}\), which has probability \(0.9\cdot 0.1=0.09\). The probability of the KB to be consistent is
\[P(w_{2})+P(w_{3})+P(w_{4}) =(P(1)\cdot(1-P(3)))\,+\] \[((1-P(1))\cdot P(3))\,+\] \[((1-P(1))\cdot(1-P(3)))\] \[=0.9\cdot 0.1+0.1\cdot 0.9+0.1\cdot 0.1\] \[=0.19\]
So, \(P_{C}(Q)=0.09/0.19=0.474\).
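The value \(P_{C}(Q)=0.474\) can be checked by enumerating the four worlds of Example 3.1, as in the following minimal sketch; the world-level consistency and entailment tests are hard-coded for this toy KB rather than delegated to a reasoner.

```python
from itertools import product

prob = {1: 0.9, 3: 0.9}            # only (1) and (3) are probabilistic

def consistent(world):
    # (1), (2), (3), (4) together force both pingu:Fly and pingu:not-Fly
    return not ({1, 3} <= world)

def entails_not_fly(world):
    # pingu:not-Fly follows from (4) and (3)
    return 3 in world

p_cons, p_q_and_cons = 0.0, 0.0
for bits in product([0, 1], repeat=2):
    world = {2, 4} | {ax for ax, bit in zip((1, 3), bits) if bit}
    p_w = (prob[1] if 1 in world else 1 - prob[1]) * \
          (prob[3] if 3 in world else 1 - prob[3])
    if consistent(world):
        p_cons += p_w
        if entails_not_fly(world):
            p_q_and_cons += p_w

print(p_q_and_cons / p_cons)       # 0.09 / 0.19 ~ 0.474
```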
An interesting result can be seen if axiom (3) is non-probabilistic. In this case, the KB has the two worlds \(\{(1),(2),(3),(4)\}\) and \(\{(2),(3),(4)\}\), having probability \(0.9\) and \(0.1\), respectively. The first world is inconsistent while \(Q\) holds in the second. The probability of consistency is \(P(\mathit{Cons})=0.1\) because there is a single world that is consistent. In this world the query is true, so the probability \(P(Q,\mathit{Cons})=0.1\). As result, we obtain that \(P_{C}(Q)=1\). On the other hand, the query \(\overline{Q}=\mathit{pingu}:\mathit{Fly}\) takes probability \(0\). The same results can be achieved with every value of probability of axiom (1) that is strictly lower than \(1\). This example shows how a correct design of the knowledge is important. Indeed, by associating a probability value to axiom (1), which is not always true in the domain, axioms that are certain acquire, in a way, more importance. The information that penguins do not fly is certain because there are no species of penguins that have the ability to fly, thus, irrespectively of the probability of axiom (1), the query \(Q\) is certainly true.
This process leaves the probability of queries w.r.t. a consistent KB unchanged. The proof of this statement is trivial: if the KB is consistent, all the worlds are consistent as well, and therefore the computation of the probability of the query is equivalent to that under the former definition of the DISPONTE semantics.
A second, less obvious result is that, given a query \(Q\) and an inconsistent KB \(\mathcal{K}\), if \(Q\) does not depend on axioms causing inconsistency, its probability has the same value as that which can be computed w.r.t. any \(\mathcal{K}^{\prime}\subset\mathcal{K}\), where \(\mathcal{K}^{\prime}\) is an inclusion-maximal subset of axioms from \(\mathcal{K}\) that is consistent. If we remove the probability from the probabilistic axioms of \(\mathcal{K}^{\prime}\), it is equivalent to a repair for \(\mathcal{K}\), therefore \(Q\) is true in every repair for \(\mathcal{K}\). Intuitively, \(Q\) and _Cons_ are independent because \(Q\) is true in every \(\mathcal{K}^{\prime}\). The axioms contained in the justifications for \(Q\) do not appear in the justifications for the inconsistency, and so in the formula for _Cons_, and vice-versa. The random variables associated with \(Q\) and _Cons_ are independent, therefore the computation of the conditional probability \(P_{C}(Q)\) is
\[P_{C}(Q)=P(Q|\textit{Cons})=\frac{P(Q)\cdot P(\textit{Cons})}{P(\textit{Cons}) }=P(Q)\]
### Reasoning Algorithm
One of the most used approaches to compute justifications is the tableau algorithm [11, 12, 13]. Sebastiani and Vescovi [14] defined an approach for finding justifications in the \(\mathcal{EL}\) DL that builds a Horn propositional formula of polynomial size and applies Boolean Constraint Propagation. Arif et al. [1] used implicit hitting set dualization by exploiting a SAT solver. Baader and colleagues [1, 2] presented different approaches that create a Boolean formula, called _pinpointing formula_, which represents the set of all the justifications for a query w.r.t. \(\mathcal{SI}\) KBs. This approach is implemented, for example, in TRILL\({}^{P}\) and TORNADO [20], two subsystems of TRILL. These two sub-systems are not considered in this paper because the first needs a further processing of the results to apply our approach, while the second returns results in a form not suitable for our purposes.
We now describe the tableau approach that is used in the reasoners we extended, TRILL [20, 21, 22] and BUNDLE [1, 23]. A _tableau_ is a graph where each node represents an individual \(a\) and is labelled with the set of concepts \(\mathcal{L}(a)\) to which \(a\) belongs. Each edge \(\langle a,b\rangle\) in the graph is labelled with the set of roles \(\mathcal{L}(\langle a,b\rangle)\) to which the couple \((a,b)\) belongs. The algorithm proves an axiom by contradiction by repeatedly applying a set of consistency preserving _tableau expansion rules_ until a clash (i.e., a contradiction) is detected or a clash-free graph is found to which no more rules are applicable. A clash is a couple \((C,a)\) where \(C\) and \(\neg C\) are present in the label of the node \(a\), i.e., \(\{C,\neg C\}\subseteq\mathcal{L}(a)\).
The expansion rules modify the labels of the nodes and edges of the tableau and update a _tracing function_\(\tau\), which associates a set of justifications to each concept or role of a label in the tableau. For example, the value of the tracing function for the label \(C\) of node \(a\) when axiom \(a:C\) is in the KB is set to \(\{\{a:C\}\}\), i.e., a set of justifications containing one single justification that, in turns, contains the axiom \(a:C\). Given a query, once the tableau is fully expanded, to build the justification for the query, the labels that cause clashes are collected. Then, the justifications of these labels are joined to form the justification for the query. We refer the interested reader to [18] for a detailed overview.
The expansion rules are divided into _deterministic_ and _non-deterministic_. The former, when applied to a tableau, produce a single new tableau. The latter, when applied to a tableau, produce a set of tableaux.
When the tableau algorithm adds the negation of the query to the tableau, the value of its tracing function is set to \(\{\emptyset\}\), i.e., an empty justification, because that information does not come from the KB. So, if a clash is detected, it is not possible to know what causes it: an inconsistency, or the query. Thus reasoners usually perform a consistency check before expanding the tableau, preventing its execution if the KB is inconsistent. This is due to the fact that, when the axioms involved in the justification of a query also cause an inconsistency, the justifications for the query may be subsets of those for the inconsistency. In Example 3.1, given the query \(Q=\mathit{pingu}:\mathit{Fly}\), the single justification for the query is the set of axioms \(\{(4),(2),(1)\}\), while that for the inconsistency is \(\{(4),(2),(1),(3)\}\). In this case, when extracting the justifications from the values of the tracing function of the labels that create the clashes, the latter is not collected because it is a superset of the first.
A simple way to solve this problem is to change the tracing function so that it adds a placeholder to the justifications for the negation of the query to keep the justifications that contain this placeholder separated from those without it, which are justifications caused by an inconsistency. Basically, the tracing function for the negation of the query is initialized as \(\{\{Q_{p}\}\}\), where \(Q_{p}\) is a fake axiom that does not appear in the KB. This axiom represents a flag indicating that the label has been created because of the query. Then, the standard expansion rules can be applied to expand the tableau in the usual way, because \(Q_{p}\) acts like an axiom. Therefore, the tableau algorithm remains unchanged. At the end, if there are clashes, the justifications for the query will contain \(Q_{p}\), those due to the inconsistency will not. An important aspect of this implementation is that all the results about completeness and correctness of the tableau (see [10]) still apply, since the tableau algorithm is not modified. This also allows the application of this approach to every DL for which tableau expansion rules have been defined.
**Example 3.2** (Flying Penguins - 4).: Consider the KB of Example 3.1 where the probability values from the axioms have been removed.
\[\begin{array}{ll}(1)\;\mathit{Bird}\sqsubseteq\mathit{Fly}&(2)\;\mathit{ Penguin}\sqsubseteq\mathit{Bird}\\ (3)\;\mathit{Penguin}\sqsubseteq\neg\mathit{Fly}&(4)\;\mathit{pingu}:\mathit{ Penguin}\end{array}\]
Let us remove axiom (3). The classic tableau creates a tableau containing a single node, corresponding to \(\mathit{pingu}\), labelled with the concept \(\mathit{Penguin}\), having as value of \(\tau\) the set of justifications \(\{\{(4)\}\}\). Given the query \(Q=\mathit{pingu}:\mathit{Fly}\), the tableau is updated by adding the label \(\neg\mathit{Fly}\) to the node of \(\mathit{pingu}\), with the value of \(\tau\) equals to \(\{\emptyset\}\). Expanding this tableau by following the axioms of the KB, the label \(\mathit{Fly}\) will be added to the node for \(\mathit{pingu}\), with the value for \(\tau\) corresponding to \(\{\{(4),(2),(1)\}\}\). In this tableau there is a clash because the node for \(\mathit{pingu}\) contains both the labels \(\mathit{Fly}\) and \(\neg\mathit{Fly}\). A justification can be found by joining the values of the tracing function of the two labels, i.e., \(\emptyset\cup\{(4),(2),(1)\}=\{(4),(2),(1)\}\).
If the KB contains also axiom (3), during the expansion of the tableau, the value of \(\tau\) for the label \(\neg\mathit{Fly}\) is updated by adding the justification \(\{(4),(3)\}\). However, this justification cannot be added because the initial value of the tracing function contains the empty set, which is a subset of any other set, so the tableau cannot correctly discriminate between justifications due to the query and to the inconsistency.
If we consider this example with the new tracing function, the value of \(\tau\) for the label of the query \(\neg\mathit{Fly}\) is initialized as \(\{\{Q_{p}\}\}\). Therefore, the justifications for \(Q\) will be \(\{Q_{p},(4),(2),(1)\}\), while that for the inconsistency will be \(\{(4),(2),(1),(3)\}\). In this case, during the expansion of the tableau, the value of \(\tau\) for the label \(\neg\mathit{Fly}\) can be updated by adding the justification \(\{(4),(3)\}\), because the justification due to the inconsistency is no more a superset of \(\{Q_{p}\}\), which is due to the query, and the reasoner can easily discriminate between the two.
Another possible way to solve this problem is to directly add the negation of the query to the KB by using a name known by the reasoner. For example, if the query is the axiom \(a:C\), it is sufficient to add the axioms \(a:C_{Q_{p}}\) and \(C_{Q_{p}}\sqsubseteq\neg C\) where \(C_{Q_{p}}\) is a fresh concept not contained in the KB. Now, it is possible to run a reasoner that can return the justifications for the inconsistency of a KB and split all the justifications in two sets, one containing justifications with axioms involving the concept \(C_{Q_{p}}\) and one containing those with axioms not involving the fresh concept. The first set will contain the justifications for the query (in our case the axioms \(a:C_{Q_{p}}\) and \(C_{Q_{p}}\sqsubseteq\neg C\) must be removed from the justifications in order to collect justifications containing only axioms from the original KB), while the second will contain the justifications for the inconsistency. In this way, we do not need to modify the reasoner, nor the tracing function.
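The splitting step of this second approach is straightforward to express. The following minimal sketch assumes the reasoner has already returned all justifications for the inconsistency of the KB extended with the two fresh axioms encoding the negated query; justifications are separated according to whether they contain those marker axioms, and the markers are stripped from the query justifications. The axiom names below are purely illustrative.

```python
# Marker axioms a:C_Qp and C_Qp subclassOf not C added for the query a:C
MARKERS = {"a:C_Qp", "C_Qp subclassOf not C"}

def split_justifications(all_justifications):
    """Separate query justifications from genuine inconsistency justifications."""
    query_justs, incons_justs = [], []
    for j in all_justifications:
        j = set(j)
        if j & MARKERS:
            query_justs.append(j - MARKERS)   # drop the fresh marker axioms
        else:
            incons_justs.append(j)
    return query_justs, incons_justs

# Example 3.1 with query Q = pingu:Fly (axiom ids as in the text):
justs = [{"(1)", "(2)", "(4)", "a:C_Qp", "C_Qp subclassOf not C"},
         {"(1)", "(2)", "(3)", "(4)"}]
qj, ij = split_justifications(justs)
# query justification: {(1),(2),(4)}; inconsistency justification: {(1),(2),(3),(4)}
```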
The whole reasoning flow can be divided into 4 different steps, described below, and shown in Figure 1. Once all the justifications are collected, the BDD \(BDDQ\) for the query \(Q\) is built from all-just\((Q,\mathcal{K})\) while that for the consistency of the KB \(BDDC\) is generated by negating the BDD for all-just\((\mathit{Incons},\mathcal{K})\). The next step consists of joining \(BDDQ\) and \(BDDC\) to create a BDD \(BDDQC=BDDQ\wedge BDDC\) from which the probability \(P(Q,\mathit{Cons})\) can be computed. The probability \(P(\mathit{Cons})\) can be computed directly from \(BDDC\). Finally, the probability \(P_{C}(Q)\) is computed from the two BDDs as \(P(Q,\mathit{Cons})/P(\mathit{Cons})\).
Regarding complexity, there are two problems to consider. The first is finding justifications, whose complexity has been deeply studied and depends on the logic used [11,
Figure 1. Reasoning flow: Step 1 computes the set of justifications for the query \(Q\) w.r.t. the KB \(\mathcal{K}\) all-just\((Q,\mathcal{K})\) and for the inconsistency (if any) all-just\((\mathit{Incons},\mathcal{K})\). Step 2 builds the BDDs \(BDDQ\) from all-just\((Q,\mathcal{K})\) and \(BDDC\) from all-just\((\mathit{Incons},\mathcal{K})\). Step 3 joins \(BDDQ\) and \(BDDC\) creating \(BDDQC\). Finally, Step 4 computes the probability \(P_{C}(Q)\) by first computing the probabilities \(P(Q,\mathit{Cons})\) from \(BDDQC\) and \(P(\mathit{Cons})\) from \(BDDC\).
PS10a, PS10b]. In particular, Corollary 15 in [10] shows that finding all the justifications cannot be solved in output polynomial time for \(\mathit{DL}\)-\(\mathit{Lite}_{\mathit{bool}}\) TBoxes unless \(P=NP\). Since \(\mathit{DL}\)-\(\mathit{Lite}_{\mathit{bool}}\) is a sublogic of \(\mathcal{ALC}\), this result also holds for \(\mathcal{ALC}\) and all its extensions. Despite these results, it has been shown that all justifications can be found over many real world ontologies within a few seconds [11, 1].
The second problem is building the BDD and computing the probability, that can be seen as the problem of computing the probability of a sum-of-products [1]. While the problem is #P-hard, algorithms based on BDDs were able to solve problems with hundreds of thousands of variables (see e.g. the works on inference on probabilistic logic programs [1, 13, 14, 15, 16]). Methods for weighted model counting [1, 2] can be used as well to solve the sum-of-products problem.
The class #P [21] describes counting problems associated with decision problems in NP. More formally, #P is the class of function problems of the form "compute \(f(x)\)", where \(f\) is the number of accepting paths of a nondeterministic Turing machine running in polynomial time. A prototypical #P problem is that of computing the number of satisfying assignments of a CNF Boolean formula. #P problems were shown to be very hard. First, a #P problem must be at least as hard as the corresponding NP problem. Second, Toda [17] showed that a polynomial-time machine with a #P oracle (P\({}^{\#\text{P}}\)) can solve all problems in PH, the polynomial hierarchy.
Note that the extensions introduced in this paper do not change the complexity of the two problems, and the original algorithms (those without the extensions) proposed for solving these two problems were shown to be able to work on inputs of real world size1 [11].
Footnote 1: E.g., NCI ontology ([https://ncit.nci.nih.gov/ncitbrowser/](https://ncit.nci.nih.gov/ncitbrowser/)) with 3,382,017 axioms and \(\mathcal{SH}\) expressiveness, or FMA ([http://si.washington.edu/projects/fma](http://si.washington.edu/projects/fma)) with 88,252 axioms in the TBox and RBox and 237,382 individuals and \(\mathcal{ALCOLN}(\mathbf{D})\) expressiveness.
## 4. Comparison with the Repair Semantics
In this section we compare our proposal with the repair semantics [1], where the authors consider KBs where the TBox is consistent while the ABox may be inconsistent w.r.t. the TBox.
For the sake of comparison, we construct the worlds by including all the TBox axioms and some axioms from the ABox.
As already discussed, a world is a subset of axioms of the original KB, i.e., it can be seen as a smaller KB or a sort of repair. A _consistent world_ is a world that is consistent. Conversely, an _inconsistent world_ is a world that is inconsistent. Since repairs correspond to all inclusion-maximal consistent subsets of ABox axioms, they can be viewed as 'possible worlds' [1]. A repair is an inclusion-maximal subset of the ABox that is TBox-consistent, therefore, with a small abuse of the notation, we can consider a repair as a KB having the entire TBox and the set of axioms from the ABox selected by the repair. Therefore, given \(Rep(\mathcal{K})\), the set of all repairs, and \(\mathcal{W}_{\mathcal{K}}\) the set of all worlds, we have \(Rep(\mathcal{K})\subseteq\mathcal{W}_{\mathcal{K}}\). Conversely, a consistent world \(w\) defines a set of repairs \(Rep_{w}(\mathcal{K})=\{\mathcal{R}|\ \mathcal{R}\) is a repair, \(w\subseteq\mathcal{R}\}\). The next Lemma follows from these definitions.
**Lemma 4.1**.: _Given a KB \(\mathcal{K}\) and a world \(w\) consistent w.r.t. \(\mathcal{K}\), then the set \(Rep_{w}(\mathcal{K})\) is not empty._
Proof.: Each world \(w\) contains all the axioms from the TBox and some axioms from the ABox. If a world is consistent then we can add ABox axioms to it until it becomes equivalent to a repair, i.e., it contains all the assertions contained in the repair. So given a consistent world, this represents one or more repairs.
By refutation, suppose there exists a consistent world \(w\) such that \(Rep_{w}(\mathcal{K})\) is empty. This means that there exists a set of axioms \(\mathcal{E}\subset w\) such that it is not contained in any repair. By definition of repairs, if \(\mathcal{E}\not\subseteq\mathcal{R}\) for every repair \(\mathcal{R}\), then \(\mathcal{E}\) is TBox-inconsistent. So, \(w\) cannot be consistent.
It is also important to note that a repair can be obtained by removing a hitting set of the justifications.
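The observation above can be turned into a direct, if naive, procedure: the sketch below enumerates the minimal hitting sets of the conflicts (the ABox parts of the justifications for the inconsistency) and removes each of them from the ABox to obtain the repairs. Axiom names are hypothetical and the enumeration is exponential; it is meant only to make the correspondence explicit.

```python
from itertools import combinations

def repairs(abox, conflicts):
    """Repairs = ABox minus a minimal hitting set of the conflicts."""
    abox = list(abox)
    minimal_hitting_sets = []
    for k in range(len(abox) + 1):                    # enumerate removals by increasing size
        for removed in map(set, combinations(abox, k)):
            if all(removed & c for c in conflicts):   # removed hits every conflict
                if not any(h <= removed for h in minimal_hitting_sets):
                    minimal_hitting_sets.append(removed)   # keep only minimal hitting sets
    return [set(abox) - h for h in minimal_hitting_sets]

# two conflicts over a three-assertion ABox: the repairs are {a1, a3} and {a2}
print(repairs({"a1", "a2", "a3"}, [{"a1", "a2"}, {"a2", "a3"}]))
```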
Given these definitions, we can state the following theorems.
**Theorem 4.2**.: _Given a possibly inconsistent KB \(\mathcal{K}\), a query \(Q\), and the Boolean formula \(f_{\textsc{all-just}(\mathit{Incons},\mathcal{K})}\), \(Q\) is true under the Brave semantics, i.e., \(\mathcal{K}\models_{\mathit{Brave}}Q\), iff there exists at least one justification for \(Q\) w.r.t. \(\mathcal{K}\) such that the corresponding Boolean formula \(\phi\) conjoined with \(\neg f_{\textsc{all-just}(\mathit{Incons},\mathcal{K})}\) is satisfiable._
Proof.: From Section 2.2, a justification identifies a set of worlds. The smallest one is the world containing the axioms in the justification. The set of justifications all-just\((\mathit{Incons},\mathcal{K})\) is represented by the formula \(f_{\textsc{all-just}(\mathit{Incons},\mathcal{K})}\). Given a satisfying truth assignment of the variables of \(f_{\textsc{all-just}(\mathit{Incons},\mathcal{K})}\), it is possible to build a corresponding set of axioms \(\mathcal{E}\) consisting of all those axioms \(E_{i}\) whose corresponding variable \(X_{i}\) is given value true in the assignment. These axioms are a justification for the inconsistency of the KB. Therefore, they define a set of worlds such that each world is inconsistent. Analogously, from a satisfying truth assignment of the variables of the formula \(\neg f_{\textsc{all-just}(\mathit{Incons},\mathcal{K})}\) it is possible to create sets of axioms that represent the justifications for the consistency of the KB, and so, the consistent worlds. Therefore, if there is at least one justification for \(Q\) w.r.t. \(\mathcal{K}\) with \(\phi\) its representation as a Boolean formula, from a satisfying truth assignment of the variables of \(\phi\wedge\neg f_{\textsc{all-just}(\mathit{Incons},\mathcal{K})}\) it is possible to create at least one world where \(Q\) is true. From Lemma 4.1 there exists at least one repair \(\mathcal{R}\) such that \((\mathcal{R},\mathcal{T})\models Q\). So, \(\mathcal{K}\models_{\mathit{Brave}}Q\). Essentially, this amounts to finding all the _consistent worlds_ and checking whether \(Q\) holds in at least one of them.
If the formula \(\phi\wedge\neg f_{\textsc{all-JUST}(\mathit{Incons},\mathcal{K})}\) is not true for every \(\phi\), it is not possible to find a consistent world where the query \(Q\) is true. Hence, \(Q\) is false in every consistent world and thus, from Lemma 4.1, it is so in every repair. So, \(\mathcal{K}\not\models_{\mathit{Brave}}Q\).
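A minimal sketch of the Brave check of Theorem 4.2 follows, with the BDD machinery of Section 3 replaced by brute-force enumeration of truth assignments; the justification sets and axiom names are hypothetical.

```python
from itertools import product

def satisfiable(formula, variables):
    """Brute-force satisfiability of a Boolean function over the listed variables."""
    return any(formula(dict(zip(variables, bits)))
               for bits in product([True, False], repeat=len(variables)))

def brave_entails(query_justs, incons_justs, axioms):
    """K |=_Brave Q iff phi AND NOT f_incons is satisfiable for some justification phi of Q."""
    f_incons = lambda w: any(all(w[a] for a in j) for j in incons_justs)
    return any(
        satisfiable(lambda w, j=j: all(w[a] for a in j) and not f_incons(w), axioms)
        for j in query_justs)

# Q has one justification {E1, E3}; the KB is inconsistent because of {E1, E2}
print(brave_entails([{"E1", "E3"}], [{"E1", "E2"}], ["E1", "E2", "E3"]))  # True
```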
To compare our semantics with the AR semantics, we first need to give a few more definitions [1].
**Definition 4.3** (Conflict [1]).: A _conflict_ of \(\mathcal{K}=(\mathcal{A},\mathcal{T})\) is an inclusion-minimal subset of \(\mathcal{A}\) that is inconsistent together with \(\mathcal{T}\). The set of conflicts of \(\mathcal{K}\) is denoted \(\mathit{conf}(\mathcal{K})\).
Each conflict corresponds to a justification for the inconsistency that contains the assertions of the conflict and some axioms from \(\mathcal{T}\), i.e., given a justification \(j\in\textsc{all-just}(\mathit{Incons},\mathcal{K})\), if we remove from the justification the terminological axioms we obtain the conflict \(\{E_{i}|E_{i}\in j\cap\mathcal{A}\}\).
**Definition 4.4** (Set of conflicts of a set of assertions).: Given \(\mathcal{K}=(\mathcal{A},\mathcal{T})\), the _set of conflicts of a set of assertions_ \(\mathcal{C}\subseteq\mathcal{A}\), denoted as \(\mathit{conf}(\mathcal{C},\mathcal{K})\), is
\[\mathit{conf}(\mathcal{C},\mathcal{K})= \{\mathcal{B}|\mathcal{B}\in\mathit{conf}(\mathcal{K}),\mathcal{C }\cap\mathcal{B}\neq\emptyset\}\]
**Definition 4.5** (Cause [1]).: A _cause_ for a Boolean query \(Q\) in a KB \(\mathcal{K}=(\mathcal{A},\mathcal{T})\) is an inclusion-minimal subset \(\mathcal{C}\subseteq\mathcal{A}\) consistent with \(\mathcal{T}\) such that \((\mathcal{C},\mathcal{T})\models Q\). We use \(\text{causes}(Q,\mathcal{K})\) to refer to the set of causes for \(Q\) in \(\mathcal{K}\).
Given a query \(Q\), each cause corresponds to a justification for the query \(Q\) that contains the assertions of the cause and some axioms from \(\mathcal{T}\), i.e., given a justification \(j\in\textsc{all-just}(Q,\mathcal{K})\), if we remove from the justification the terminological axioms we obtain the cause \(\text{cause}(j)=\{E_{i}|E_{i}\in j\cap\mathcal{A}\}\). Thus, from the set all-just\((Q,\mathcal{K})\) we can define the set of causes of a set of justifications.
**Definition 4.6** (Set of causes of the set of justifications).: Given \(\mathcal{K}=(\mathcal{A},\mathcal{T})\), a query \(Q\) and its set of justifications all-just\((Q,\mathcal{K})\), a _set of causes of the set of justifications cause\((\textsc{all-just}(Q,\mathcal{K}))\)_ is defined as
\[\text{cause}(\textsc{all-just}(Q,\mathcal{K}))=\{\text{cause}(j)|j\in \textsc{all-just}(Q,\mathcal{K})\}\]
**Theorem 4.7** (from Theorem 4.11 and Remark 4.12 of [1]).: _Given a possibly inconsistent KB \(\mathcal{K}\), a query \(Q\), and the Boolean formulae \(f_{\textsc{all-just}(Incons,\mathcal{K})}\) and \(f_{\textsc{all-just}(Q,\mathcal{K})}\), \(Q\) is not true under the AR semantics, i.e., \(\mathcal{K}\not\models_{\text{AR}}Q\), iff \(f_{\textsc{all-just}(Q,\mathcal{K})}\wedge\neg f_{\textsc{all-just}(Incons, \mathcal{K})}\) is satisfiable, where the formula \(f_{\textsc{all-just}(Q,\mathcal{K})}\) is defined as_
\[f_{\textsc{all-just}(Q,\mathcal{K})}=f_{\textsc{all-just}(Q,\mathcal{K})}^{1} \bigwedge f_{\textsc{all-just}(Q,\mathcal{K})}^{2}\]
_where, with \(X_{\mathcal{C},\mathcal{B}}\) new Boolean variables representing the different ways of contradicting \(\mathcal{C}\),_
\[f_{\textsc{all-just}(Q,\mathcal{K})}^{1}=\bigwedge_{\mathcal{C}\in\text{causes}(Q,\mathcal{K})}\bigvee_{\mathcal{B}\in\mathit{conf}(\mathcal{C},\mathcal{K})}X_{\mathcal{C},\mathcal{B}}\]
_and, with \(\text{vars}(f)\) the set of variables appearing in \(f\),_
\[f_{\textsc{all-just}(Q,\mathcal{K})}^{2}=\bigwedge_{X_{\mathcal{C},\mathcal{B }}\in\text{vars}(f_{\textsc{all-just}(Q,\mathcal{K})}^{1})}\bigwedge_{ \beta\in\mathcal{B}\setminus\mathcal{C}}\neg X_{\mathcal{C},\mathcal{B}} \lor X_{\beta}\]
Proof.: The proof follows the same steps as [1, Theorem 4.11 and Remark 4.12]: \(\neg f_{\textsc{all-just}(Incons,\mathcal{K})}\) represents the set of consistent worlds, while \(f_{\textsc{all-just}(Q,\mathcal{K})}\) represents the set of worlds where \(Q\) is not true. It is built in two steps: the first builds the formula \(f_{\textsc{all-just}(Q,\mathcal{K})}^{1}\) expressing that every cause is contradicted, and the second builds the formula \(f_{\textsc{all-just}(Q,\mathcal{K})}^{2}\) ensuring that when a cause is contradicted, every axiom of the conflict not belonging to \(\mathcal{C}\) is present. If the formula \(f_{\textsc{all-just}(Q,\mathcal{K})}\wedge\neg f_{\textsc{all-just}(Incons,\mathcal{K})}\) has an assignment of the Boolean variables that makes it satisfiable, then there is a consistent world where the query is not true, and so, from Lemma 4.1, there is at least one repair where \(Q\) is not true.
If the formula \(f_{\textsc{all-just}(Q,\mathcal{K})}\wedge\neg f_{\textsc{all-just}(Incons, \mathcal{K})}\) is not satisfiable, then it is not possible to find any consistent world where the query \(Q\) is false. Hence, \(Q\) is true in every consistent world and thus, from Lemma 4.1, it is so in every repair. So, \(\mathcal{K}\models_{\text{AR}}Q\).
**Theorem 4.8**.: _Given a possibly inconsistent KB \(\mathcal{K}\), a query \(Q\), and the set \(\mathcal{E}\) of all the ABox axioms that appear in at least one justification for the inconsistency, \(Q\) is true under the IAR semantics, i.e., \(\mathcal{K}\models_{\text{IAR}}Q\), iff there exists at least one justification for \(Q\) w.r.t. \(\mathcal{K}\) such that none of its axioms belongs to \(\mathcal{E}\)._
Proof.: By definition, the justifications for the inconsistency of a KB \(\mathcal{K}\) tell which combinations of axioms cause the inconsistency. Speaking in terms of repairs, given the set of
the justifications for the inconsistency, we can collect all the axioms that are considered as possibly causing inconsistency. For example, as in [1], these could be the ABox axioms. Given the set \(\mathcal{E}\) containing all these axioms, every repair will be defined as the initial set of axioms from the KB from which some of the axioms in \(\mathcal{E}\) have been removed in order to make the repair consistent. This means that the intersection of all the repairs can be found as \(\mathcal{D}=\bigcap_{\mathcal{R}\in Rep(\mathcal{K})}\mathcal{R}=\mathcal{A}\setminus\mathcal{E}\). Therefore, for \(Q\) to be true in the intersection of the repairs, it is necessary that there is at least one justification that does not contain any axiom from \(\mathcal{E}\).
Suppose that every justification for \(Q\) contains at least one axiom from \(\mathcal{E}\). Pick a justification \(j\) and an axiom from it that is in \(\mathcal{E}\): this axiom cannot be in the intersection of the repairs, so \(j\) is no longer a justification for \(Q\) in the intersection of the repairs. Since this is true for all justifications for \(Q\), \(Q\) has no justification in the intersection of the repairs, so \(\mathcal{K}\not\models_{\mathit{IAR}}Q\).
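Theorem 4.8 reduces the IAR check to a simple set operation on the justifications; the following sketch spells it out with hypothetical axiom sets.

```python
def iar_entails(query_justs, incons_justs, abox):
    """K |=_IAR Q iff some justification for Q uses no ABox axiom that occurs
    in a justification for the inconsistency (the set E of the theorem)."""
    E = {a for j in incons_justs for a in j if a in abox}
    return any(not (set(j) & E) for j in query_justs)

# the second justification avoids E = {a1, a2}, so Q holds under the IAR semantics
print(iar_entails([{"a1", "t1"}, {"a3", "t1"}], [{"a1", "a2", "t2"}],
                  abox={"a1", "a2", "a3"}))  # True
```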
It is worth noting that our approach is more general than the repair semantics. Indeed, even if we remove the assumption of considering only the ABox axioms as possible causes of inconsistency, Theorems 4.2, 4.7, and 4.8 still hold. Suppose that we consider every axiom of the KB as a possible cause of inconsistency: if an axiom does not appear in any justification for the inconsistency, then it appears in every repair. On the other hand, if an axiom is contained in a justification for the inconsistency, then there will be at least one repair that does not contain it. Since in our proofs we consider only the axioms in the justifications, they hold irrespective of the types of axioms considered.
Moreover, all these results can be checked by means of BDDs. For example, given the set of justifications for the inconsistency, we can find the Boolean formula \(f_{\textsc{all-just}(\mathit{Incons},\mathcal{K})}\) representing it and compile the formula into a BDD \(BDDI\). To find \(\neg f_{\textsc{all-just}(\mathit{Incons},\mathcal{K})}\), it is sufficient to negate \(BDDI\). Similarly, to see whether a formula is unsatisfiable, once it is compiled into a BDD it is sufficient to check whether the BDD corresponds to the \(0\) leaf. Therefore, our approach is more general in the sense that it does not impose limitations on the KB because it allows both ABox and TBox to be inconsistent. Moreover, it features some desirable characteristics: it can handle every DL language equipped with a reasoner able to return justifications; there are two different tools already available to cope with this semantics; it directly works on DL KBs, without resorting to DBMSs or converting to different languages; and it can return justifications together with the probability of the query, making the results more informative.
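As a sketch of how these checks look with an actual BDD package (here the third-party Python `dd` library, used purely as an illustration; the formulas and axiom names are hypothetical):

```python
from dd.autoref import BDD   # assumes the third-party `dd` package (pip install dd)

bdd = BDD()
bdd.declare('E1', 'E2', 'E3')

f_incons = bdd.add_expr('E1 & E2')   # BDDI: justifications for the inconsistency
phi = bdd.add_expr('E1 & E3')        # one justification for the query

brave = phi & ~f_incons              # Theorem 4.2: Q is Brave-entailed iff this is satisfiable
print(brave != bdd.false)            # satisfiable iff the BDD is not the 0 leaf
```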
Finally, to the best of our knowledge, this approach is the first implementation of a general DL reasoner that can be used to answer queries under the AR semantics [1]. Other approaches can answer queries under the AR semantics in the database setting, restricting the possible languages that can be used, and so the expressivity of the DL used to model the KB.
## 5. Related Work
There are various lines of work on the topic of reasoning in case of inconsistency. For example, the standard DL semantics can be extended by the definition of a Four-Valued Logic, where classical implication is used together with two other types of implication of greater strength [12]. Their definition is given in terms of projections of concepts, i.e., every concept in a KB is associated with two sets of individuals containing those known
to belong to the concept and those known to belong to its complement. These two sets can overlap. Each implication can be translated into classical DL sub-class implications in a pre-processing step of the KB. Another possible semantics introduces different types of negations, as in [22], where two types of negation are considered to handle information known to be false and information that is not known to be true. However, these approaches force the developers of the KBs to distinguish the different versions of implications or negations, which may not be intuitive for those who are not experts in logic. Changing the standard syntax and semantics can also bring compatibility issues with other KBs and the impossibility of using standard reasoners.
Another approach considers the _repairs_ of an inconsistent KB. As already discussed, the repairs are parts (inclusion-maximal subsets) of the assertional axioms of the (inconsistent) KB that are consistent with the terminological axioms. These represent the possible ways of repairing an inconsistent KB by preserving as much information as possible (in the sense of set inclusion) to obtain a consistent KB. There could be many different repairs, depending on how many assertional axioms cause the inconsistency. There are several ways to build repairs, e.g., Baader et al. [1] look for optimal repairs where the least number of consequences is removed w.r.t. \(\mathcal{EL}\) KBs, and several semantics based on the repairs, making the inference process tolerant to inconsistency. A query is true w.r.t. the inconsistent KB if, for example, it is true in every repair of the KB (AR semantics [10]), in the intersection of the repairs (IAR semantics [10]), or in at least one repair (Brave semantics [1]). A comprehensive introduction to repairs can be found in [1].
One of the most prominent ways to answer queries under the AR semantics reduces the problem to SAT [1]. A possibility of avoiding the use of a SAT solver and building the repairs is to iteratively select a subset of the (inconsistent) KB until the subset entails the query [18]. This approach is comparable to the Brave semantics. Ludwig and Penaloza [11] proposed to precompile and save all the repairs (possibly exponentially many) to answer subsumption queries w.r.t. TBox in \(\mathcal{EL}\), thus without considering individuals in the ABox. Answering queries under Brave, AR, IAR and other semantics is efficient if all the repairs are precompiled. They also proposed a second, and more efficient, way: labelling every subsumption axiom with the set of repairs that contain it. This requires polynomial space without effectively building the repairs, which can be read using a directed acyclic graph.
Some approaches propose the use of priority levels or weights [1] to select repairs with higher priority/weight first or use weights to stratify the KBs and build sub-ontologies by keeping as many axioms with higher weights as possible [15]. To facilitate the parameter assignment task, one can exploit modularization-based approaches (such as [11]) to find local modules and assign the parameters considering only axioms in individual modules. Another way of facilitating the work of knowledge engineers, especially in very large KBs, could be to consider the reliability of data source (e.g., associate a confidence level to each data source and use this value as a weight for the information extracted from it) or to consider DISPONTE and use EDGE [14, 15] or LEAP [14, 15], two tools for automatically learning parameters of a DISPONTE KB by exploiting data contained in it.
Finally, other approaches translate the KB into logic programming clauses [16, 17, 18], performing inference through argumentation or abduction. Argumentation is exploited also in the work of Bouzeghoub et al. [1]. They present an approach based on possibilistic DL [17], where axioms of a KB are associated with a real value
representing its confidence degree. A Possibilistic DL KB defines a possibility distribution over worlds and assigns a necessity measure to axioms. This necessity measure is used to check the entailment of the axiom w.r.t. the KB. The management of inconsistency is done by means of argumentation, by constructing an argumentation tree to find arguments and rebuttals in order to find justifications for the query. This approach shares many ideas with our approach. Unfortunately, the probabilistic semantics cannot be directly compared because the numerical values do not have the same meaning. Moreover, an implementation of their work is not available, so it is not possible to compare the two approaches empirically.
Despite the number of approaches presented above, there is a lack of proposals that exploit probabilistic information even though it is pervasive in real world domains. A preliminary idea was presented by [10] where the authors consider probabilistic Datalog\(\pm\) ontologies and repairs that can be built by removing also terminological axioms from the original KB, i.e., the inconsistency may also come from TBox axioms. They consider the possible world semantics, which corresponds to DISPONTE, and provide complexity results but not a system for performing such type of inference.
Other approaches consider completely different logics, e.g., in [1] the authors propose a probabilistic logic based on the probabilistic extensions of the Belnap-Dunn logic [1] combined with a bilattice logic, or an extension of Lukasiewicz's logic [14], defining a two-layer modal logical framework to account for belief in agent systems working on possibly inconsistent probabilistic information. However, these logics are different from DLs, thus it is not possible to make a meaningful comparison and they cannot be applied to the KBs connected in the open linked data cloud. Potyka and Thimm defined reasoning on linear probabilistic KBs [11], where the uncertain knowledge in the KB is represented as linear probabilistic constraints, covering different logical formalisms such as Nilsson's logic [13] or Lukasiewicz's logic [14]. In this approach, a so-called _generalized_ model is used to allow probabilistic entailment on inconsistent KBs. A generalized model is defined as a probability distribution that minimally violates the KB. So, the probability values associated with the constraints must be carefully chosen in order not to violate the probability distribution. Moreover, since the violation is computed on the constraints, a second KB must be added that contains a consistent set of integrity constraints. In contrast, DISPONTE considers each axiom as independent, so the assignment of a probability value can be done independently for each axiom. Koch and Olteanu [15] consider probabilistic databases and apply a _conditioning_ operation that removes possible worlds not fulfilling given conditions, where conditions are a kind of constraints. This is similar to building repairs. Then, the query is asked w.r.t. the reduced database, and a confidence computation will return Bayesian conditional probabilities w.r.t. the original database. Both steps are NP-hard, while our approach is #P-hard, as we discussed in Section 3.1. Tammet et al. [12] consider FOL, a semantics based on degrees of belief, and present an algorithm with various steps: they first calculate the decreasing confidence by means of a modified resolution search collecting different answers and justifications. Next, the justifications are combined by means of the cumulation operation. Finally, the algorithm collects negative evidence for all the answers obtained so far, separately for each individual answer. This search is also split into resolution and cumulation. The search may not terminate, so a time limit is imposed, and the answer may not be complete. Both proposals share similarities with our approach, the former in the use of the probabilistic semantics, the latter in the search for two sets of justifications.
From a different point of view, the use of probability to measure the inconsistency of a KB is related to the plethora of inconsistency measures defined in the literature [18, 19]. An _inconsistency measure_ [12] assesses the severity of inconsistency: it gives the amount of inconsistency in a knowledge base, possibly with respect to the size of the KB. Usually, measures of the first kind are called _absolute_, while the latter, called _relative_, consider, e.g., the number of assertions in the ABox or the number of axioms in the whole KB. These measures can help in handling inconsistency. However, justifications are needed to debug the KB. Over the last decades, many different measures have been proposed. For example, Hunter and Konieczny define a measure that assigns 1 to a KB that is inconsistent and 0 otherwise, and a measure that counts the number of minimal inconsistent subsets of the KB [13]. This has been extended to take into account also the number of contradictory sets of axioms [10]. Another approach is to measure the ratio of the number of atoms in the minimal inconsistent subsets over the total number of atoms [11]. The d-hit inconsistency measure [10] indicates the size of the smallest set that has a non-empty intersection with every minimal inconsistent subset. The Hitting Set inconsistency measure is based on the concept of a hitting set \(H\) and calculates the smallest size that \(H\) might have, where \(H\) is a set of classical interpretations such that each formula is true in at least one interpretation. The maximal subsets of the KB that can be made true by an interpretation are exactly the maximal consistent sets.
Some of these measures consider Priest's three-valued logic LP [14], in which, besides the classical truth values "true" \(T\) and "false" \(F\), a third truth value \(B\) denoting inconsistency is considered. Thus, the minimum number of atoms that need to be assigned \(B\) to obtain at least one model of the KB in Priest's logic can be used as an inconsistency measure [10], possibly divided by the number of atoms [15]. Others consider the KB as a set of consistent logic formulae (inconsistent formulae can be split into consistent sub-formulae). Each consistent formula has at least one model, which is a world, which can be represented by a point in a Euclidean space. Using these points, it is possible to define measures based on the distance between the points in the Euclidean space [10].
In a broad sense, computing the probability of the inconsistency of a KB is equivalent to computing an inconsistency measure that depends on the degree of belief in the axioms of the KB. From this point of view, the resulting measure may be considered as something in between an absolute and a relative measure, because it does not consider the size of the KB, but the probabilities specified in it.
De Bona et al. [10] provide a classification of inconsistency using a bipartite graph relating logical formulae (built on axioms) and minimal inconsistent subsets of the KB. This allows one to compute different measures based on the count of formulae or subsets of the axioms of the KB from the bipartite graph. However, a direct comparison among all these measures is not possible in the majority of cases, because each has its rationale and grows differently as the KB increases.
## 6. Experiments
To test the feasibility of our approach, we extended our reasoners TRILL [16, 17, 18] and BUNDLE [19, 18] as described in the previous sections. TRILL is written in Prolog and can handle \(\mathcal{SHIQ}\) KBs, while BUNDLE is written in Java and uses an underlying non-probabilistic OWL reasoner to collect the set of justifications w.r.t. OWL 2 (essentially \(\mathcal{SROIQ}(\mathbf{D})\)) KBs. In particular, BUNDLE embeds Pellet [19],
Hermit [14], FaCT++ [15] and JFaCT2 as OWL reasoners, and three justification generators, namely GlassBox (only for Pellet), BlackBox and OWL Explanation. Moreover, BUNDLE also encapsulates TRILL.
Footnote 2: [http://jfact.sourceforge.net/](http://jfact.sourceforge.net/)
In this paper we refer to the extensions of the reasoners TRILL and BUNDLE as _TRILLInc_ and _BUNDLEInc_ respectively. In their original version, both reasoners, in case of an inconsistent KB, can be used only to collect the justifications for the inconsistency of the KB, while their extensions apply the notions explained in this paper.
As regards TRILL and _TRILLInc_, of the almost 8,000 lines of code of TRILL, we needed to add only 144 lines of code and to modify another hundred or so lines to implement the computation of \(P_{C}(Q)\). To implement the computation of the repair semantics we needed to add approximately 170 lines of code. _TRILLInc_ implements the extension of the tracing function. The code of _TRILLInc_, together with the KBs used in this section, is available on GitHub3.
Footnote 3: [https://github.com/rzese/trill_inc](https://github.com/rzese/trill_inc)
As regards BUNDLE and _BUNDLEInc_, we adapted the code so that all the internal reasoners can be used to solve queries w.r.t. possibly inconsistent KBs. The code of _BUNDLEInc_ implements the addition of the negation of the query in the KB by means of the fresh concept \(C_{Q_{P}}\). It is available on Bitbucket4.
Footnote 4: [https://bitbucket.org/machinelearningunife/bundle/src/bundle_inc/](https://bitbucket.org/machinelearningunife/bundle/src/bundle_inc/)
We considered KBs similar to those presented in Test 3 of [2], where the authors built 5 KBs of increasing size containing the following axioms for \(i\) varying from 1 to \(n\), with \(n\in\{2,4,6,8,10\}\):
\[B_{i-1}\sqsubseteq P_{i}\sqcap Q_{i}\qquad P_{i}\sqsubseteq B_{i}\qquad Q_{i} \sqsubseteq B_{i}\]
where \(B_{i}\), \(Q_{i}\), \(P_{i}\) are simple concepts. However, in principle, they can be concept expressions of any expressivity handled by the reasoner. These KBs present a number of justifications for the query \(Q=x:B_{n}\) that grows exponentially with \(n\). The choice of this KB is to force the creation of a number of justifications that grows exponentially, preferring a larger number of justifications over bigger KBs. The most expensive operation in a tableau-based reasoner is the application of expansion rules, since it requires managing choice points, backtracking and, possibly, creating new tableaux, depending on the implementation of the reasoner. These KBs stress the entire inference process, from justification finding to the management of the BDDs. Moreover, it is easy to decide where to introduce the inconsistency. In this test, we created a KB for each \(n\in\{3,4,5,6,7,8,9,10\}\) to which we added the class assertion \(x:B_{0}\). Every axiom of the KB has been made probabilistic by annotating it with a random probability value. Then, for each KB, we created a second version in which we added a disjoint-classes axiom asserting that classes \(B_{j}\) and \(B_{k}\) are disjoint, with \(j,k\) set as explained below. This, combined with the class assertion axiom, makes the KBs inconsistent. We built a KB for each value of \(n\) in order to see how the running time changes as \(n\) increases. We ran the query \(Q=x:B_{n}\) 10 times w.r.t. each KB in order to compute the average running time for answering the query. We compared the running time of the original version of the reasoner to solve \(Q\) w.r.t. the KBs without the disjoint-classes axiom, with the running time taken by our version of the reasoner w.r.t.:
**(1)**: KBs without the disjoint-classes axiom, so consistent, to see how much overhead we add to the whole process and so, how much the introduced extension affects the reasoning in case of consistent KBs;
**(2)**: KBs with the disjoint-classes axiom considering the classes \(B_{0}\) and \(B_{1}\) (\(j=0,k=1\)), thus inconsistent, where there are only two justifications of the inconsistency, each consisting of three axioms, to see how the running time changes in the best case, i.e., when collecting justifications for the inconsistency is trivial;
**(3)**: KBs with the disjoint-classes axiom considering the classes \(B_{n}\) and \(B_{n-1}\) (\(j=n,k=n-1\)), thus inconsistent, whose justifications are the same as those of the query, to see how the running time changes in the worst case. In the worst case, i.e., \(n=10\), the KB contains 30 axioms but there are \(2^{n}+2^{n}=1024+1024\) justifications for the query \(Q\), a situation difficult to achieve even with large KBs.
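The exponential growth of the number of justifications in this family of KBs can be checked directly: at each level \(i\), a derivation of \(x:B_{n}\) can pass through either \(P_{i}\) or \(Q_{i}\). The sketch below enumerates these choices; the axiom strings are only a schematic rendering of the KB above.

```python
from itertools import product

def query_justifications(n):
    """One justification for x:B_n per choice of P_i or Q_i at each level i."""
    justs = []
    for choice in product("PQ", repeat=n):
        axioms = {"x : B_0"}
        for i, side in enumerate(choice, start=1):
            axioms.add(f"B_{i-1} subClassOf P_{i} and Q_{i}")
            axioms.add(f"{side}_{i} subClassOf B_{i}")
        justs.append(axioms)
    return justs

assert len(query_justifications(10)) == 2 ** 10   # 1024 justifications for x:B_10
```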
Table 1 shows, for each KB, the ratio between the running time of TRILL and that of _TRILLInc_ in the settings above. Average ratios between the running time of TRILL and _TRILLInc_ are also shown, computed by considering all the KBs and the KBs of sizes from 5 to 10, i.e., the KBs with a significant number of justifications and thus the KBs where the extension could affect the running time the most. The average running time in seconds, with the relative standard deviation in percentage (%RSD), is shown for both TRILL and _TRILLInc_.
All the tests have been performed on a Linux machine equipped with an Intel Core i7-8565U CPU @ 1.80GHz and 16 GiB of RAM.
As one can see, _TRILLInc_ adds an overhead which spans between less than 1% (Ratio 1.00, \(i=2\), setting **2**) and 220% (Ratio 3.21, \(i=6\), setting **3**). For small KBs, the overhead on the reasoning time is mitigated by an initialization phase of the reasoner, which affects all the executions. With the increase of the size of the KBs, this phase becomes more and more negligible. However, in the worst case, setting **3**, the extension to the reasoner implemented in _TRILLInc_ increases the running time by a factor of 2.43 on average, which grows to an average of 2.88 considering only the larger KBs (size 5 to 10). Moreover, from setting **1** we can see that, in the case of consistent KBs, the overhead is around 15%
\begin{table}
\begin{tabular}{c|c|c|c|c|c|c|c}
**KB Size** & **TRILL** & _TRILLInc_ & & _TRILLInc_ & & _TRILLInc_ & \\ (\(i\)) & **(s)** & **setting 1 (s)** & **Ratio** & **setting 2 (s)** & **Ratio** & **setting 3 (s)** & **Ratio** \\ \hline \hline
2 & 0.003 \(\pm\)8.378 & 0.003\(\pm\)0.618 & 1.02 & 0.003\(\pm\)0.725 & 1.00 & 0.003\(\pm\)11.333 & 1.04 \\
3 & 0.003 \(\pm\)1.151 & 0.003\(\pm\)1.070 & 1.11 & 0.003\(\pm\)0.544 & 1.20 & 0.004\(\pm\)7.013 & 1.41 \\
4 & 0.004 \(\pm\)1.009 & 0.005\(\pm\)4.823 & 1.24 & 0.005\(\pm\)0.738 & 1.23 & 0.008\(\pm\)0.829 & 2.10 \\
5 & 0.011 \(\pm\)0.528 & 0.013\(\pm\)0.860 & 1.19 & 0.012\(\pm\)0.781 & 1.17 & 0.030\(\pm\)1.034 & 2.88 \\
6 & 0.056 \(\pm\)0.730 & 0.066\(\pm\)0.626 & 1.19 & 0.066\(\pm\)1.024 & 1.18 & 0.180\(\pm\)0.604 & 3.21 \\
7 & 0.411 \(\pm\)0.355 & 0.479\(\pm\)0.448 & 1.16 & 0.478\(\pm\)0.425 & 1.16 & 1.233\(\pm\)0.241 & 3.00 \\
8 & 3.277 \(\pm\)0.555 & 3.710\(\pm\)0.320 & 1.13 & 3.713\(\pm\)0.462 & 1.13 & 9.116\(\pm\)0.224 & 2.78 \\
9 & 26.040\(\pm\)0.393 & 30.126\(\pm\)0.509 & 1.16 & 31.312\(\pm\)0.430 & 1.20 & 71.869\(\pm\)0.206 & 2.76 \\
10 & 216.529\(\pm\)0.529 & 251.822\(\pm\)0.440 & 1.16 & 251.285\(\pm\)0.335 & 1.16 & 577.014\(\pm\)0.348 & 2.66 \\ \hline \multicolumn{8}{c|}{**Avg on results of size 2 to 10**} & 1.15 & & 1.16 & & 2.43 \\ \multicolumn{8}{c|}{**Avg on results of size 5 to 10**} & 1.17 & & 1.17 & & 2.88 \\ \end{tabular}
\end{table}
Table 1. Average running time in seconds \(\pm\) relative standard deviation in percentage (%RSD) computed on 10 executions of the query with the original version of the reasoner TRILL and the extended version of the reasoner _TRILLInc_ in settings **1**, **2**, and **3**, together with the ratio of the two time measurements. The penultimate line contains the average of the ratios, while the last line contains the average of the ratios considering only sizes greater than 4.
of the running time, which is acceptable given that we also compute the repair semantics. So, the presented extension does not significantly affect the running time w.r.t. consistent or inconsistent KBs with few, small justifications for the inconsistency.
There is also a fourth case, when the query is \(x:B_{1}\) while \(B_{n}\) and \(B_{n-1}\) are disjoint. In this case, when comparing the performance of _TRILLInc_ with that of TRILL (w.r.t. the KBs without the disjoint-classes axiom), the increase is similar to that of setting **2**. However, one should note that, in case of an inconsistent KB, classical reasoners such as TRILL do not perform inference. In this case, one can only ask whether the KB is inconsistent, then repair it using the collected justifications if possible, and finally ask the original query w.r.t. the debugged KB. Therefore, considering all these steps and that the proposed extension combines the search for justifications for queries with that for justifications of inconsistency instead of doing them separately, the ratio may decrease significantly and possibly become smaller than 1.
In order to empirically compare our approach with the repair semantics, we ran BUNDLE using its default settings, i.e., using Pellet with the GlassBox justification generator, against CQApri [1], a system implemented for querying DL-Lite KBs under the AR, IAR and Brave semantics. CQApri is implemented in Java and exploits a relational database to store the assertions of the KB, a query rewriting engine for the DL-Lite language and a SAT solver to answer queries under the AR semantics.
For the comparison, we considered a KB used by the authors of CQApri to test their system5. They used a simplified version of the Lehigh University Benchmark (LUBM) ontology [1], which differs from the original version by the removal of the axioms that cannot be modelled in DL-Lite, and generated different ABoxes of increasing size containing inconsistencies w.r.t. the simplified LUBM TBox.
Footnote 5: Available at [https://www.lri.fr/~bourgaux/CQApri](https://www.lri.fr/~bourgaux/CQApri)
In particular, we considered the version u1conf1, from which we created an OWL KB to use with BUNDLE containing all the assertions that CQApri stores in the database. The resulting KB models one university and contains 108,864 axioms in the ABox (28,023 class assertions, 47,640 object property assertions and 33,201 data property assertions) for a total of 127,320 axioms in the KB. From this KB, we randomly created 200 Boolean queries of the form \(a_{i}:C_{i}\) and \((a_{i},b_{i}):R_{i}\) by sampling individuals \(a_{i}\) and \(b_{i}\), classes \(C_{i}\) and object properties \(R_{i}\). Each of these 200 queries was created to have at least one justification. We also created 100 queries in the same way but having zero justifications, in order to test the performance of BUNDLE even in case the query is not entailed. The latter set of queries cannot be given as input to CQApri, because it exits with an error.
Table 2 shows the results in terms of average running time for BUNDLE and CQApri to solve the 200 queries with justifications, while Table 3 contains more details about the results of BUNDLE when solving queries with and without justifications. The average time in milliseconds for computing the repair semantics on the set of 200 queries is \(0.05\pm 0.2\) in case of no justifications and \(0.08\pm 0.5\) in case of 1 justification for the query. The high standard deviation in the first case is due to the fact that the computation is almost instantaneous, i.e., many queries present a repair computation time of 1 or less than 1 millisecond, while in some cases this time reaches 6 milliseconds. This difference could also depend on the load of the test machine. Finally, the time for computing the repair semantics is \(0.13\pm 0.42\) in case of more than one justification. The high standard deviation in this case is due to the high variability in the number of justifications.
As can be seen, BUNDLE has an average running time of more than 218 seconds, while CQApri can solve the same queries in a little less than 12 seconds on average. However, it is important to bear in mind the main differences between the two systems:
* CQApri is tailored to DL-Lite, so it imposes a stronger limitation on the expressivity than BUNDLE, which considers OWL 2 (\(\mathcal{SROIQ}(\mathbf{D})\)). This means that CQApri cannot be used to run the queries of Example 2.7 because complex concepts such as intersections can be used only as superclasses.
* CQApri makes use of a DBMS to store and access the assertions of the ABox, while BUNDLE stores internally the entire KB. Thus, CQApri can scale better than BUNDLE in terms of size of KB.
* CQApri answers queries under the Brave, AR, and IAR semantics, while BUNDLE adds the resolution under DISPONTE and returns the set of justifications for both the inconsistency and the query, allowing a full analysis of the query while collecting information for debugging the KB at the same time. CQApri can return the repairs, but in this comparison we did not ask for that, because we think it is unfair to ask for more information from a system in a comparison that only regards query answering. We would like to highlight that, even for queries with 79 justifications, the check for the repair semantics in BUNDLE is almost instantaneous (with an average time to compute the answer under the repair semantics, averaged over all the 200 queries, of \(0.0001\pm 0.0003\) seconds).
\begin{table}
\begin{tabular}{c|c|c|c}
**System** & **Avg. running time \(\pm\) std. dev.** & **Min. running time** & **Max. running time** \\ \hline \hline BUNDLE & 218.29 \(\pm\) 977.44 & 4.15 & 7,527.21 \\ CQApri & 11.80 \(\pm\) 0.08 & 11.74 & 12.04 \\ \end{tabular}
\end{table}
Table 2. Running time in seconds needed by BUNDLE and CQApri to answer the set of 200 queries with at least one justification. The table shows the average running time \(\pm\) the standard deviation, the minimum and maximum running time for both systems.
\begin{table}
\begin{tabular}{|c|c|c|} \hline \multicolumn{3}{|c|}{**0 justifications**} \\ \hline Avg. running time \(\pm\) std. dev. & Min. running time & Max. running time \\ \hline
5.33 \(\pm\) 0.20 & 4.62 & 5.81 \\ \hline \hline \multicolumn{3}{|c|}{**1 justification**} \\ \hline Avg. running time \(\pm\) std. dev. & Min. running time & Max. running time \\ \hline
57.09 \(\pm\) 244.39 & 4.15 & 1853.48 \\ \hline \multicolumn{3}{|c|}{**More than 1 justifications**} \\ \hline Avg. running time \(\pm\) std. dev. & Min. running time & Max. running time \\ \hline
1,648.74 \(\pm\) 1,943.94 & 4.47 & 7,527.21 \\ \hline Avg. number of justifications & Min. number of justifications & Max. number of justifications \\ \hline
35.32 \(\pm\) 19.29 & 9 & 79 \\ \hline \end{tabular}
\end{table}
Table 3. Statistics about BUNDLE to answer queries with at least one justification and queries without justifications. The table shows the average running time in ms \(\pm\) the standard deviation, the minimum and maximum running time in the cases the query has 0, 1 and more than 1 justifications. Moreover, in the case of more than 1 justifications, the table shows the average number \(\pm\) the standard deviation, the minimum and maximum number of justifications.
* CQApri can also solve conjunctive queries, while BUNDLE supports only Boolean queries.
We considered BUNDLE in the comparison in order to show that every available DL reasoner can be used to exploit DISPONTE to answer queries w.r.t. possibly inconsistent KBs. However, all the considerations made about BUNDLE are also valid for TRILL.
## 7. Conclusions
We have presented a simple but effective approach to cope with inconsistent KBs, which does not require changing the syntax of the logic and exploits the probabilistic semantics DISPONTE. Inference algorithms that use the tableau and are able to build the set of all the justifications of a query can be easily adapted to the new task. This allows the application of this approach to every DL language equipped with a suitable set of tableau expansion rules. We implemented the proposed extension in TRILL, developing _TRILLInc_, and in BUNDLE, developing _BUNDLEInc_, and tested them w.r.t. two different KBs. For the future, we plan to study the generalization to FOL of the presented extensions of the semantics.
## Acknowledgment
This work was partly supported by the "National Group of Computing Science (GNCS-INDAM)".
|
2305.09468
|
Graph-Based Deep Learning for Sea Surface Temperature Forecasts
|
Sea surface temperature (SST) forecasts help with managing the marine
ecosystem and the aquaculture impacted by anthropogenic climate change.
Numerical dynamical models are resource intensive for SST forecasts; machine
learning (ML) models could reduce high computational requirements and have been
in the focus of the research community recently. ML models normally require a
large amount of data for training. Environmental data are collected on
regularly-spaced grids, so early work mainly used grid-based deep learning (DL)
for prediction. However, both grid data and the corresponding DL approaches
have inherent problems. As geometric DL has emerged, graphs as a more
generalized data structure and graph neural networks (GNNs) have been
introduced to the spatiotemporal domains. In this work, we preliminarily
explored graph re-sampling and GNNs for global SST forecasts, and GNNs show
better one month ahead SST prediction than the persistence model in most oceans
in terms of root mean square errors.
|
Ding Ning, Varvara Vetrova, Karin R. Bryan
|
2023-04-03T11:26:54Z
|
http://arxiv.org/abs/2305.09468v1
|
# Graph-Based Deep Learning for Sea Surface Temperature Forecasts
###### Abstract
Sea surface temperature (SST) forecasts help with managing the marine ecosystem and the aquaculture impacted by anthropogenic climate change. Numerical dynamical models are resource intensive for SST forecasts; machine learning (ML) models could reduce high computational requirements and have been in the focus of the research community recently. ML models normally require a large amount of data for training. Environmental data are collected on regularly-spaced grids, so early work mainly used grid-based deep learning (DL) for prediction. However, both grid data and the corresponding DL approaches have inherent problems. As geometric DL has emerged, graphs as a more generalized data structure and graph neural networks (GNNs) have been introduced to the spatiotemporal domains. In this work, we preliminarily explored graph re-sampling and GNNs for global SST forecasts, and GNNs show better one month ahead SST prediction than the persistence model in most oceans in terms of root mean square errors.
## 1 Introduction
The variability of SSTs, or SST anomalies, is associated with multiple climate oscillations or extreme events, such as the El Nino-Southern Oscillation (ENSO), the Indian Ocean Dipole (IOD) oscillation, and marine heatwaves. The ability to accurately forecast SST variability would allow mitigation of its potential impact, such as by collecting healthy samples for repopulation of impacted ecosystems and adjusting aquaculture production beforehand.
A number of DL models have been developed to predict SSTs and/or related events. Early work started with convolutional neural networks (CNNs). Ham et al. (2019, 2021) used a CNN to predict ENSO up to 18 months in advance and Cachay et al. (2021) used a GNN to improve the forecasts for one to six lead months. IOD forecasts have been made using a CNN (Feng et al., 2022) and a long short-term memory (LSTM) network (Pravallika et al., 2022) respectively. A CNN was developed to forecast SSTs and marine heatwaves around Australia (Boschetti et al., 2022). Later work started to address the combination of multiple neural network classes for SST forecasts. Taylor & Feng (2022) combined a U-Net (Ronneberger et al., 2015) with an LSTM (Taylor, 2021) to forecast global SSTs up to 24 months ahead and validated the forecasts with a focus on specific SST variability-related oscillations (ENSO and IOD) and events (the "Blob" marine heatwave). The DL models outlined above input sequences or grids, i.e. Euclidean data, and used image or video processing techniques to perform SST forecasts. However, there is a potential for further improvement via utilizing the structure of climatological data, which are different from images and videos.
Non-Euclidean graphs could be an alternative to grids. Graph representation learning has been successfully applied to domains such as social networks (Gupta et al., 2021) and bioinformatics (Yi
et al., 2022). The teleconnections of climate events (Tsonis et al., 2006), either through atmosphere, oceanic circulation, or large-scale oceanic waves, are increasingly considered as an important factor for developing DL methods (Cachay et al., 2021; Taylor & Feng, 2022) and could be modeled by graphs. Grids and CNNs still have some inherent problems, such as replacement for missing values, rotation equivariance (Defferrard et al., 2019), and receptive fields (Luo et al., 2016), making them difficult to use in modeling global oceans. Graph-based DL for SST forecasts is not as well explored as the grid-based. Hence, we investigated whether graphs and graph-based DL are suited for SST forecasts. We started by extending the work by Taylor & Feng (2022) to the graph domain and found that GNNs generally outperform the persistence model for one month ahead SST forecasts globally.
## 2 Data
**Dataset.** The dataset for SST forecasts is from ERA5 (Hersbach et al., 2020). ERA5 is a reanalysis product that provides monthly estimates of a large number of atmospheric, land and oceanic variables at global scale with a spatial resolution of 0.25\({}^{\circ}\), from 1950 to 1978 (the preliminary version) and from 1959 to the present (the current version).
**Data Preprocessing.** We downloaded the ERA5 data with the univariate SST from both versions. Two versions of the data were joined along the time axis, using the preliminary version from January 1950 to December 1978 and the current version from January 1979 to August 2022. Following Taylor & Feng (2022), we used the Climate Data Operators (CDO) (Schulzweida, 2019) to process the joined dataset to a [64, 128, 872] latitude (64\({}^{\circ}\)S to 62\({}^{\circ}\)N in 2\({}^{\circ}\) increments), longitude (180\({}^{\circ}\)W to 180\({}^{\circ}\)E in 2.8125\({}^{\circ}\) increments), and month (January 1950 to August 2022 in one month increment) grid. The unit of SSTs is Kelvin. We normalized the data to the [-1, 1] range using the following formula:
\[\tilde{x}_{i}=\frac{x_{i}-x_{min}}{x_{max}-x_{min}}\cdot 2-1,\]
where \(x_{i}\) is a raw ERA5 SST value, \(x_{min}\) and \(x_{max}\) are the minimum and the maximum over all data, and \(\tilde{x}_{i}\) is a normalized SST value, which resulted in a normalized [64, 128, 872] grid. Normalization primarily helps to stabilize numerical calculations and accelerate the rate of convergence to a solution (Taylor & Feng, 2022). The first 760 time steps were used for training and the remaining were used for testing. Unlike Taylor & Feng (2022), we did not use the two-meter atmospheric temperature variable to interpolate the land pixels in the SST grid.
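A minimal NumPy sketch of the normalization and the temporal split described above, assuming the CDO-processed grid has already been loaded into a [64, 128, 872] array `sst` (the loading step and variable names are illustrative):

```python
import numpy as np

# sst: [64, 128, 872] array of monthly ERA5 SSTs in Kelvin
# sst = xr.open_dataset("era5_sst_processed.nc")["sst"].values  # hypothetical loading step

x_min, x_max = np.nanmin(sst), np.nanmax(sst)             # nan-aware in case of masked land points
sst_norm = (sst - x_min) / (x_max - x_min) * 2.0 - 1.0     # map to the [-1, 1] range

train = sst_norm[..., :760]    # first 760 monthly time steps for training
test = sst_norm[..., 760:]     # remaining time steps for testing
```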
## 3 Methods
### Graph Construction
We constructed the graphs by defining the adjacency matrix and the node attribute matrix. We have not found suitable relational variables for SST forecasts, so the edge attribute matrix was left empty.
**Node Attribute Matrix.** Let \(\textbf{l}\in\mathbb{R}^{M\times N\times T}\) denote a tensor that represents the preprocessed SST grid, where \(M\) is the number of points in the latitudinal direction, \(N\) is the number of points in the longitudinal direction, and \(T\) is the number of monthly time steps. There is an SST time series of \(T\) elements at every latitude and longitude coordinate. The node attribute matrix \(\boldsymbol{V}\in\mathbb{R}^{X\times T}\), where \(X=M\times N\) is the number of nodes, was acquired by flattening every 2D slice \(\textbf{l}_{:,:,t}\) of **l** at time step \(t\). \(V_{x,t}\) is the SST value at the \(x^{\text{th}}\) node at time step \(t\).
**Adjacency Matrix.** We constructed a set of undirected graphs and a set of directed graphs. For the undirected graphs, an element \(A\) in the adjacency matrix \(\boldsymbol{A}\) is defined by an indicator function:
\[A_{x,y}=\textbf{1}_{|\rho(\boldsymbol{V}_{x,:},\boldsymbol{V}_{y,:})|>c},\] \[A_{x,y}=A_{y,x},\]
where \(\boldsymbol{V}_{x,:}\) and \(\boldsymbol{V}_{y,:}\) are the SST time series at any two nodes, \(\rho(\cdot)\) is the Pearson correlation coefficient in this case but could be other measures, and \(c\) is a threshold as a controllable parameter. For the directed graphs, with regards to one lead time forecasts, when the correlation between the time series at node \(x\) and one lead time series at node \(y\) is above the threshold, we consider that there is
an edge between the two nodes, the source node is \(x\) and the destination node is \(y\). Therefore, an element \(\tilde{A}\) in the adjacency matrix \(\tilde{\mathbf{A}}\) for a directed graph is defined as
\[\tilde{A}_{x,y}=\mathbf{1}_{|\rho(\mathbf{V}_{x,0:T-1},\mathbf{V}_{y,1:T})|>c}.\]
The decrease in the correlation threshold \(c\) leads to a substantial increase in the number of edges and node degrees. We generated multiple sets of SST graphs, with the statistics shown in Table 1. Besides, all graphs have isolated nodes and no self-loops, and graphs in the same set are homogeneous.
These graph data have been made available for download and the link is in the Appendix.
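A NumPy sketch of the graph construction: the grid is flattened into the node attribute matrix and the adjacency matrices are obtained by thresholding (lagged) Pearson correlations. It assumes land points have already been dropped so that every row of \(\boldsymbol{V}\) is a valid SST series; variable names are illustrative.

```python
import numpy as np

def node_attributes(grid):
    """Flatten a [M, N, T] SST grid into the [M*N, T] node attribute matrix V."""
    M, N, T = grid.shape
    return grid.reshape(M * N, T)

def undirected_adjacency(V, c=0.95):
    corr = np.corrcoef(V)                      # node-by-node Pearson correlations
    A = (np.abs(corr) > c).astype(np.int8)
    np.fill_diagonal(A, 0)                     # the graphs contain no self-loops
    return A

def directed_adjacency(V, c=0.9):
    """Edge x -> y if the series at x correlates with the one-month-lagged series at y."""
    n = V.shape[0]
    corr = np.corrcoef(V[:, :-1], V[:, 1:])[:n, n:]   # cross-correlation block
    A = (np.abs(corr) > c).astype(np.int8)
    np.fill_diagonal(A, 0)
    return A
```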
### Graph Neural Networks
We applied widely-used GNN classes to perform learning on SST graphs: graph convolutional networks (GCNs) (Kipf and Welling, 2016), graph attention networks (GATs) (Velickovic et al., 2017), and GraphSAGE (Hamilton et al., 2017) for undirected graphs, and relational GCNs (RGCNs) (Schlichtkrull et al., 2018) for directed graphs. These GNN models were implemented in Python using PyTorch (Paszke et al., 2019) and PyTorch Geometric (Fey and Lenssen, 2019).
The forecasting task here is node regression with sliding windows. We aimed at forecasting SSTs at every node one month ahead. In each iteration, the inputs were a set of \(w\) graphs at earlier time steps, \(\mathbf{V}_{\cdot,t-w+1,\ldots,t}\), where \(w\) is the forecasting window size, and the output was one graph at the time step for prediction, \(\mathbf{V}_{\cdot,t+1}\). Following Taylor and Feng (2022), we used a window size of 12.
We deployed a similar structure for all GNNs: there are two layers, where the first layer inputs \(w\) features and outputs 30 features, and the second layer inputs 30 features and outputs 1 feature. The optimizer is the root mean square propagation, with a 0.001 learning rate, a 0.9 alpha, a 0.9 momentum, and a 0.0001 weight decay. The activation is the hyperbolic tangent. The loss is the mean squared error. The root mean squared error (RMSE) is reported. The number of training epochs is 20. For the GAT, the number of heads is eight; for the RGCN, the number of relations is two and the number of bases is four. The GCN, the GAT, and the GraphSAGE were all trained using undirected graphs with \(c=0.95\); the RGCN was trained using directed graphs with \(c=0.9\). We chose these two values of \(c\) as they lead to similar average node degrees. In turn, it allowed us to make use of limited computational resources during the exploratory phase in order to identify an appropriate GNN class. Our future plan is to experiment with different values of \(c\) and consequently with graphs of larger size.
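A minimal PyTorch Geometric sketch of the two-layer configuration described above, instantiated for the GraphSAGE variant (layer widths, activation, loss and RMSprop settings follow the text; tensor names and the training-step wrapper are illustrative):

```python
import torch
from torch_geometric.nn import SAGEConv

class SSTGraphSAGE(torch.nn.Module):
    def __init__(self, window=12, hidden=30):
        super().__init__()
        self.conv1 = SAGEConv(window, hidden)   # 12 input features -> 30
        self.conv2 = SAGEConv(hidden, 1)        # 30 -> 1 predicted SST per node

    def forward(self, x, edge_index):
        h = torch.tanh(self.conv1(x, edge_index))
        return self.conv2(h, edge_index)

model = SSTGraphSAGE()
optimizer = torch.optim.RMSprop(model.parameters(), lr=1e-3, alpha=0.9,
                                momentum=0.9, weight_decay=1e-4)
loss_fn = torch.nn.MSELoss()

def train_step(x, edge_index, y):
    # x: [num_nodes, 12] SSTs of the previous 12 months; y: [num_nodes, 1] next-month SSTs
    model.train()
    optimizer.zero_grad()
    loss = loss_fn(model(x, edge_index), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```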
## 4 Results and Discussion
One model was trained for each model class. We calculated RMSEs on the test data for each node. The average RMSEs of all nodes are summarized in Table 2 for all GNN models and for the persistence model as a baseline against which to compare performance.
Only the GraphSAGE outperforms the persistence model in terms of average RMSEs. The GCN and the GAT may need further hyperparameter tuning and more complex structures. For the RGCN, the problem might arise from the directed graphs. Additionally, the GraphSAGE took the least amount of time to train, indicating its superior time efficiency when applied to the SST graphs. In order
\begin{table}
\begin{tabular}{c c c c c}
**Number of nodes** & **Is directed** & \(\mathbf{c}\) & **Number of edges** & **Average node degree** \\ \hline & No & NA & 0 & 0 \\ & No & 0.99 & 8090 & 1.4 \\ & No & 0.97 & 88510 & 15.33 \\
**5774** & **No** & **0.95** & **325546** & **56.38** \\ & No & 0.9 & 2949098 & 510.75 \\ & **Yes** & **0.9** & **292260** & **50.62** \\ & Yes & 0.8 & 5125450 & 887.68 \\ \end{tabular}
\end{table}
Table 1: Statistics of the SST graphs from ERA5. The average node degree is the average number of edges per node. The sets used to train GNN models are in bold.
to further investigate the performance of the GraphSAGE, we obtained the difference between the persistence RMSE and the RMSE of the GraphSAGE per node, shown in Figure 1.
The GraphSAGE model generally outperforms persistence across the world, especially in the temperate zones, possibly because the changes in temperate SSTs are stable. The model performs poorly in the tropics, given that the changes in SSTs in the tropics are slight and irregular. The predictions near continents are generally better. The predictions near Antarctica are generally poor.
In the Appendix, we selected some locations from Figure 1 in Figure 2 and created time series plots and scatter plots for the GraphSAGE in Figures 3 and 4 respectively. At most locations, the general trends and cycles are predicted. The predictions of minor variations or extreme values could be improved.
### Future Work
This work constitutes a step towards the forecasting of seasonal SST anomalies and marine heat-waves using graph-based deep learning methods. The following work is suggested.
**Model Tuning.** For the current GNN models, there is room for improvement by tuning hyperparameters and adding auxiliary techniques. Similar to the U-Net for SST forecasts (Taylor & Feng, 2022), graph U-Nets (Gao & Ji, 2019) could be another GNN class for consideration.
**Graph Construction.** So far, we have not included edge attributes to reflect GNNs' capability of learning relational variables. Finding these useful oceanic or atmospheric variables will possibly improve the forecasts. In addition, aspects such as selecting non-parametric measures for \(\rho(\cdot)\) and removing seasonality would also alter the results. Graph construction from grids is an ongoing problem due to its flexibility and influence on overall performance.
**Anomaly Prediction.** Forecasting SST anomalies and their associated extreme events is of greater ecological and socioeconomic value. When predicting anomalies, the node regression will be reformulated as a node imbalanced regression task, which requires additional techniques to handle.
**Long Lead Forecasts.** Accurate long lead SST forecasts will help with planning and taking actions earlier to mitigate the impacts of SST extremes. We are interested in forecasts from three months to two years in advance. Taylor & Feng (2022) have demonstrated that using an autoregressive approach by repeatedly feeding the short lead predictions back to models and adding an LSTM layer make long lead forecasts achievable.
|
2304.06472
|
Binary Interaction with Multiple Fluid Type Cosmology Under Modified
Gravity Frame
|
In the present chapter, we have established multiple fluid cosmological
models under interaction scenarios. The interaction model we have established
is a binary type of interaction scenario where three types of fluids are bound
with interaction. We have incorporated variable gravitational constant and
variable cosmological constant. The whole work has proceeded with modified
gravity geometry where the modified effect over Einstein's gravity act like a
variable cosmological constant.
|
Alokananda Kar, Shouvik Sadhukhan
|
2023-04-04T08:02:41Z
|
http://arxiv.org/abs/2304.06472v1
|
# Binary Interaction with Multiple Fluid Type Cosmology Under Modified Gravity Frame
###### Abstract
In the present chapter, we have established multiple fluid cosmological models under interaction scenarios. The interaction model we have established is a binary type of interaction scenario where three types of fluids are bound with interaction. We have incorporated a variable gravitational constant and a variable cosmological constant. The whole work has proceeded with modified gravity geometry where the modified effect over Einstein's gravity acts like a variable cosmological constant. We have used linear dark matter (DM), inhomogeneous fluid and a new type of non-linear model as interacting components. We have discussed the cosmic phases only with fluid dynamical approaches, i.e., through the scale factor variations of the effective energy density, pressure, equation of state (EOS) parameter and cosmological constant (\(\Lambda\)).
Variable \(G-\Lambda\), Modified gravity, Non-linear Fluid, Inhomogeneous fluid, Gravitational Constant
## 1 Introduction
Recent observational studies have shown that the universe is in an accelerated expanding stage [Capozziello.et.al (2002, 2003 and 2006)]. These observational developments brought several problems to the attention of researchers, viz. the cosmological horizon problem, the magnetic monopole problem, and the vacuum energy problem. To resolve these problems, the idea of the exotic nature of cosmic matter has been brought into physics [Kar.et.al (2020-2022) and Sadhukhan.et.al (2020)]. Interacting dark matter and non-linear fluid cosmology are among many examples of exotic matter [Chattopadhyay.et.al (2009, 2010 and 2017)].
Dark matter is one candidate to explain such an expansion scheme of the universe, but in the non-relativistic case it cannot. Non-relativistic dark matter is considered cold dark matter, and it produces zero pressure in large-scale structures [Debnath.et.al (2008 and 2013)]. Warm or hot dark matter can produce large positive energy, which is again considered relativistic dark matter, and theoretically relativistic cosmic matter can dominate the universe only after certain initial conditions or causes [Dirac.et.al (1974 and 1979)]. Thus, the implementation of non-exotic matter or dark matter cannot solve the problems regarding accelerated expanding cosmology and instead brings different problems. Hence, an exotic type of cosmic matter, or exotic dark matter, has come into play [Hoyle.et.al (1964 and 1971)].
Exotic dark matter acts like the Cardassian model, where the exotic nature comes from the self-interaction produced between dark matter particles [Zeldovich.et.al (1968)]. This self-interaction brings non-linearity and inhomogeneity into the equation of state of dark matter. Hence, inhomogeneous and non-linear models came into play [Banerjee.et.al (1985)]. These inhomogeneous and non-linear models are sometimes called fluid-type dark energy models. Another important motivation for introducing the inhomogeneous model is to pause cosmic inflation and to start the reheating phase in the late time accelerated expanding phase. Chaplygin gas and Van der Waals fluid are two examples of non-linear models [Bergmann.et.al (1968)].
Modified gravity is an alternative geometric mechanism which can substitute for the non-linear and inhomogeneous self-interactions in explaining the accelerated expansion of the universe. Modified gravity introduces modifications to the Einstein-Hilbert action through higher order terms. \(f(R),f(G),f(T)\) and \(f(Q)\) gravities are common forms of modified gravity theory. In our present work, we have given the derivations for \(f(R,T)\) gravity with the functional form \(f(R,T)=R+2\lambda T\) [Nojiri.et.al (2006)]. The introduction of a variable gravitational constant and a variable cosmological constant, and their coupling during the dissociation of the divergence of the energy-momentum tensor, can also be considered an alternative to modified gravity that produces an accelerated expanding universe model through the variation of \(G\) and \(\Lambda\). The variable gravitational constant can change the expansion scheme during phase transitions. The variable cosmological constant varies the repulsive effect on cosmic bubbles due to \(\Lambda\)-type dark energies [Brevik.et.al (2007)].
In our present chapter, we have established an interaction between three types of fluid systems, viz. linear dark matter, an inhomogeneous fluid, and a new type of three-parameter non-linear fluid. The three fluids interact through a binary interaction scenario. The interaction picture provides the functional variations of the gravitational constant and the different cosmic phases. We have included the mathematical analysis only [Kar.et.al (2020-2022)].
The chapter proceeds as follows: in Section 2 we give the theoretical mechanism of the whole fluid dynamical process, including the interaction scenario and the analysis of the mathematical results. Finally, in Section 3 we conclude the chapter.
## 2 Theoretical formalism of the model
The theoretical and mathematical mechanism of our present chapter starts with the following action for the multiple fluid cosmologies. Here, the cosmic matter Lagrangian is a coupled form of linear dark matter, an inhomogeneous fluid, and a new type of non-linear fluid system. The system works on modified gravity geometry in the presence of a variable gravitational constant. Hence, the action can be written as follows.
\[S=\frac{1}{16\pi}\int d^{4}x\sqrt{-g}\left(\frac{f(R,T)}{G}+L_{m}^{eff}\right) \tag{1}\]
After applying the least action principle to equation (1), we find the following field equation governing the cosmic dynamics. The functional form we considered for modified gravity is \(f(R,T)=R+2\lambda T\) with \(T=Tr\big{[}T_{\mu\nu}\big{]}=\rho_{eff}-3p_{eff}\). Hence, we have considered \(\Lambda=\lambda\big{(}\rho_{eff}-p_{eff}\big{)}\). Thus, we obtain the following equation.
\[G_{\mu\nu}=8\pi G(t)T_{\mu\nu}^{eff}+\Lambda g_{\mu\nu} \tag{2}\]
Here we can write \(T_{\mu\nu}^{eff}=-\frac{2}{\sqrt{-g}}\frac{\delta\big{(}\sqrt{-g}L_{m}^{eff} \big{)}}{\delta g^{\mu\nu}}\) and \(G_{\mu\nu}=R_{\mu\nu}-\frac{1}{2}Rg_{\mu\nu}\). The symmetry and other geometric modifications due to the coupling of cosmic matter with geometry can be discussed using the line element of the system, given in equation (3), and the evolution of the cosmic dynamical equation (2). Hence, we consider the following line element.
\[ds^{2}=dt^{2}-a^{2}(t)(dr^{2}+r^{2}d\Omega^{2}) \tag{3}\]
We have considered the flat FRW geometry with scale factor \(a(t)\) and angular part \(d\Omega^{2}=d\theta^{2}+\sin^{2}(\theta)d\phi^{2}\). Substituting this line element into the Einstein equation (2), we find the following two equations of cosmic dynamics, i.e., the FRW equations.
\[3H^{2}=8\pi G\rho_{eff}+\Lambda\] (4a) And, \[3H^{2}+2\dot{H}=-8\pi Gp_{eff}+\Lambda \tag{4b}\]
Now, for each cosmological model, we should nullify the divergence of the Einstein tensor as well as that of the energy-momentum tensor. The divergence-less condition on the energy-momentum tensor expresses the conservation of energy-momentum for the effective system of cosmic fluids within the cosmic horizon. Using the divergence-less condition on the Einstein tensor in equation (2), we find the following equation.
\[8\pi\dot{G}\rho_{eff}+\dot{\Lambda}+8\pi G\left(\dot{\rho}_{eff}+3H(\rho_{eff}+p_{eff })\right)=0 \tag{5}\]
Now from the divergence-less property of the energy-momentum tensor of the effective fluid system and the equation (5), we can find the following two equations.
\[\dot{\rho}_{eff}+3H\big{(}\rho_{eff}+p_{eff}\big{)}=0\] (6a) And, \[8\pi\dot{G}\rho_{eff}+\dot{\Lambda}=0 \tag{6b}\]
Hence, we can find,
\[G(a)=G_{0}-\int\frac{\Lambda(a)}{8\pi\rho_{eff}(a)}da \tag{7}\]
For the cosmological constant, we assumed it to be variable like the relation \(\Lambda=\alpha H^{2}\). Hence, we can find the following differential equation for the time variation solution of the scale factor.
\[2a(t)\ddot{a}(t)+4\dot{a}^{2}(t)+8\pi G(a)\big{(}p_{eff}-\rho_{eff}\big{)}a^{2 }(t)-2\alpha H_{0}^{2}a^{2(\beta+1)}=0 \tag{8}\]
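Equation (8) can be integrated numerically once \(G(a)\), \(\rho_{eff}(a)\) and \(p_{eff}(a)\) are specified. The following is a minimal sketch, assuming placeholder forms for these functions and illustrative values for \(\alpha\), \(\beta\), \(H_{0}\) and \(G_{0}\); it is not the authors' computation.

```python
# A minimal numerical sketch of integrating equation (8) for a(t) with
# scipy.integrate.solve_ivp. The closures G_of_a, rho_eff and p_eff below are
# placeholder assumptions standing in for equations (7), (10) and Table 2;
# all parameter values are illustrative, not fitted.
import numpy as np
from scipy.integrate import solve_ivp

alpha, beta, H0, G0 = 0.1, -1.0, 1.0, 1.0          # assumed constants

def G_of_a(a):                                     # placeholder for eq. (7)/(10)
    return G0

def rho_eff(a):                                    # placeholder effective density (matter-like)
    return 1.0 * a ** -3

def p_eff(a):                                      # placeholder effective pressure
    return -0.7 * rho_eff(a)

def rhs(t, y):
    a, adot = y
    addot = (-4.0 * adot**2
             - 8.0 * np.pi * G_of_a(a) * (p_eff(a) - rho_eff(a)) * a**2
             + 2.0 * alpha * H0**2 * a ** (2.0 * (beta + 1.0))) / (2.0 * a)
    return [adot, addot]

sol = solve_ivp(rhs, t_span=(0.0, 5.0), y0=[1.0, H0], dense_output=True)
print(sol.y[0][-1])   # scale factor at the final time
```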
In the interaction scenarios, we consider the equations of state \(p_{m}=\omega_{m}\rho_{m}\), \(p_{f}=A\rho_{f}+BH^{2}\) and \(p_{Nf}=C\rho_{Nf}+D\rho_{Nf}^{2}-\frac{D}{\rho_{Nf}^{a_{1}}}\). Using these equations of state in the multiple fluid cosmology, we can expand the energy-momentum conservation equation (6a) as follows.
\[\dot{\rho}_{m}+3H(\rho_{m}+p_{m})=-3H\rho_{m}\delta_{1}-3H\rho_{m}\delta_{2} \tag{9a}\] \[\dot{\rho}_{f}+3H\big{(}\rho_{f}+p_{f}\big{)}=3H\rho_{m}\delta_{1} \tag{9b}\]
And,
\[\dot{\rho}_{Nf}+3H\big{(}\rho_{Nf}+p_{Nf}\big{)}=3H\rho_{m}\delta_{2} \tag{9c}\]
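The coupled system (9a)-(9c) can also be integrated numerically once the equations of state and a closure for \(H^{2}\) are fixed. Below is a minimal sketch in the e-folding variable \(N=\ln a\); the closure \(H^{2}=8\pi G\rho_{eff}/(3-\alpha)\), which follows from (4a) with \(\Lambda=\alpha H^{2}\) and a fixed \(G\), and all parameter values are assumptions for illustration and do not reproduce the closed-form solutions quoted in Table 2.

```python
# A minimal sketch of integrating the binary-interaction system (9a)-(9c) in
# the e-folding variable N = ln a, using the equations of state quoted above.
# The closure H^2 = 8*pi*G*rho_eff/(3 - alpha) and all parameter values are
# illustrative assumptions.
import numpy as np
from scipy.integrate import solve_ivp

w_m, A, B, C, D, a1 = 0.0, -0.9, 0.05, -0.8, 0.01, 1.0   # assumed EOS parameters
d1, d2, alpha, G = 0.02, 0.02, 0.1, 1.0                   # assumed couplings

def rhs(N, rho):
    rm, rf, rnf = rho
    H2 = 8.0 * np.pi * G * (rm + rf + rnf) / (3.0 - alpha)
    p_m = w_m * rm
    p_f = A * rf + B * H2
    p_nf = C * rnf + D * rnf**2 - D / rnf**a1
    drm = -3.0 * (rm + p_m) - 3.0 * (d1 + d2) * rm        # eq. (9a) divided by H
    drf = -3.0 * (rf + p_f) + 3.0 * d1 * rm               # eq. (9b) divided by H
    drnf = -3.0 * (rnf + p_nf) + 3.0 * d2 * rm            # eq. (9c) divided by H
    return [drm, drf, drnf]

sol = solve_ivp(rhs, t_span=(0.0, 3.0), y0=[1.0, 0.5, 0.3])   # N from 0 to 3 e-folds
print(sol.y[:, -1])   # component densities after 3 e-folds
```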
The solutions for the energy densities and pressures are given in Table 2. For the derivation of the gravitational constant, we found the conditions listed in Table 1.
The functional forms of the gravitational constant for the two secondary conditions can be given as follows according to the order of the conditions list.
\[G(a)=G_{0}-\frac{\lambda}{8\pi}\ln(\rho_{m}+\rho_{f}+\rho_{Nf})+\frac{\lambda} {8\pi}*Constant*\ln(a) \tag{10a}\]
\begin{table}
\begin{tabular}{|l|c l|} \hline
**Primary Conditions:** & \(\bullet\) & Dark matter dominant: \(\rho_{m}\gg\rho_{f}\gg\rho_{Nf}\) \\ & \(\bullet\) & Inhomogeneous fluid dominant: \(\rho_{f}\gg\rho_{m}\gg\rho_{Nf}\) \\ & \(\bullet\) & Non-linear fluid dominant: \(\rho_{Nf}\gg\rho_{f}\gg\rho_{m}\) \\ & \(\bullet\) & Mixture dominant: \(\rho_{m}\approx\rho_{f}\approx\rho_{Nf}\) \\ \hline
**Secondary Conditions:** & \(\bullet\) & Mixture dominant Case with very large \\ & & Powers: \(-3(1+\omega_{m})\approx-3(1+A)\approx 1+2\beta\gg a^{\frac{3}{4\sqrt{4D^{2}E+1}}}\gg 1\) \\ & \(\bullet\) & Mixture dominant Case with small Powers: \\ & & \(-3(1+\omega_{m})\approx-3(1+A)\approx 1+2\beta\approx a^{\frac{3}{4D^{2}E+1}}\gg 1\) \\ & & \\ \hline \end{tabular}
\end{table}
Table 1: List of Conditions for this model
And,
\[G(a)=\sigma_{0}-\frac{\lambda}{8\pi}\ln(\rho_{m}+\rho_{f}+\rho_{Nf})+\frac{\lambda }{8\pi}*Constant_{1}*\ln(a) \tag{10b}\]
Here, the terms \(Constant\) and \(Constant_{1}\) are as follows.
\[Constant=\frac{3(\delta_{1}+\delta_{2}-\omega_{m})(1+\omega_{m}) \rho_{m0}-3(1+A)A\rho_{f0}+\frac{2(1+2\beta)(2+\beta)B}{3A+2\beta+4}-3(1+ \omega_{m})(\delta_{1}+\delta_{2})\rho_{m0}}{\rho_{m0}+\rho_{f0}-\frac{3B}{3A+ 2\beta+4}}\]
And,
\[Constant_{1}=\frac{3(\delta_{1}+\delta_{2}-\omega_{m})(1+ \omega_{m})\rho_{m0}-3(1+A)A\rho_{f0}+\frac{2(1+2\beta)(2+\beta)B}{3A+2\beta+4 }-3(1+\omega_{m})(\delta_{1}+\delta_{2})\rho_{m0}-\frac{1}{2D}\sqrt{4D^{2}E+ 1}c_{1}c_{2}}{\rho_{m0}+\rho_{f0}-\frac{3B}{3A+2\beta+4}-\frac{c_{1}}{D}\sqrt {4D^{2}E+1}}\]
\begin{table}
\begin{tabular}{|l|l|l|} \hline Name of fluid & Energy density & Pressure \\ \hline Linear dark matter & \(\rho_{m}=\rho_{m0}a^{-3(1+\omega_{m})}\) & \(p_{m}=(\omega_{m}-\delta_{1}-\delta_{2})\rho_{m0}a^{-3(1+\omega_{m})}\) \\ \hline Inhomogeneous fluid & \(\rho_{f}=\rho_{f0}a^{-3(1+A)}-\frac{3B}{3A+2\beta+4}a^{1+2\beta}\) & \(p_{f}=\delta_{1}\rho_{m0}a^{-3(1+\omega_{m})}+A\rho_{f0}a^{-3(1+A)}+\frac{B(2\beta+4)}{3A+2\beta+4}a^{2\beta+1}\) \\ \hline New non-linear fluid & \(\rho_{Nf}=\sqrt{E+\frac{1}{4D^{2}}\left(\cdots\right)}\) & \(p_{Nf}=C\rho_{Nf}+D\rho_{Nf}^{2}-\frac{D}{\rho_{Nf}^{a_{1}}}\) \\ \hline \end{tabular}
\end{table}
Table 2: Solutions for the energy densities and pressures of the interacting fluids
The effective equation of state parameter follows from the component solutions in Table 2 as
\[\omega_{eff}=\frac{p_{eff}}{\rho_{eff}}=\frac{p_{m}+p_{f}+p_{Nf}}{\rho_{m}+\rho_{f}+\rho_{Nf}}. \tag{12}\]
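The phase behaviour encoded in (12) can be inspected numerically. The following is a minimal sketch using only the two closed-form components of Table 2, with illustrative parameter values (not fitted to data); identifying quintessence or phantom behaviour then amounts to checking whether \(\omega_{eff}\) lies above or below \(-1\), and the non-linear fluid contribution could be added from a numerical solution of (9c).

```python
# A minimal sketch evaluating omega_eff(a) = p_eff/rho_eff from the
# closed-form dark-matter and inhomogeneous-fluid solutions of Table 2.
# All parameter values are illustrative assumptions.
import numpy as np

w_m, A, B, beta = 0.0, -0.9, 0.05, -1.0
d1, d2 = 0.02, 0.02
rho_m0, rho_f0 = 1.0, 0.5

a = np.linspace(0.1, 5.0, 200)
rho_m = rho_m0 * a ** (-3.0 * (1.0 + w_m))
rho_f = (rho_f0 * a ** (-3.0 * (1.0 + A))
         - 3.0 * B / (3.0 * A + 2.0 * beta + 4.0) * a ** (1.0 + 2.0 * beta))
p_m = (w_m - d1 - d2) * rho_m0 * a ** (-3.0 * (1.0 + w_m))
p_f = (d1 * rho_m0 * a ** (-3.0 * (1.0 + w_m))
       + A * rho_f0 * a ** (-3.0 * (1.0 + A))
       + B * (2.0 * beta + 4.0) / (3.0 * A + 2.0 * beta + 4.0) * a ** (2.0 * beta + 1.0))

omega_eff = (p_m + p_f) / (rho_m + rho_f)
print(omega_eff[0], omega_eff[-1])   # early-time vs late-time behaviour
```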
## 3 Conclusion
The chapter contains the mathematical analysis of multiple fluid cosmologies, where we have developed different equations of state through the coupled action for multiple fluid models with interaction scenarios. Linear dark matter, an inhomogeneous fluid and a new type of non-linear fluid have been used here under modified geometry. The idea of a variable gravitational constant and a variable cosmological constant has also been incorporated into the cosmic dynamics. The equation of state contains six dynamical parameters and one power parameter, which can explain cosmological phases such as the phantom phase, quintessence, CDM, WDM and radiation, as well as smooth transitions between phases such as quintom and graceful exit.
## 4 Future Works and Possibilities
The equations of state derived here will be used in a detailed thermodynamic analysis in the near future. This model will also be used to discuss the fifth force formalism as well as brane cosmological interpretations. The modified gravity to cosmological constant transition in the action used here will be studied in detail to find suitable entropies for the generalized second law of thermodynamics for the model.
|
2306.10469
|
Transferring Neural Potentials For High Order Dependency Parsing
|
High order dependency parsing leverages high order features such as siblings
or grandchildren to improve state of the art accuracy of current first order
dependency parsers. The present paper uses biaffine scores to provide an
estimate of the arc scores, which are then propagated into a graphical model. The
inference inside the graphical model is solved using dual decomposition. The
present algorithm propagates biaffine neural scores to the graphical model and
by leveraging dual decomposition inference, the overall circuit is trained
end-to-end to transfer first order information to high order information.
|
Farshad Noravesh
|
2023-06-18T03:58:41Z
|
http://arxiv.org/abs/2306.10469v1
|
# Transferring Neural Potentials For High Order Dependency Parsing
###### Abstract
High order dependency parsing leverages high order features such as siblings or grandchildren to improve the state of the art accuracy of current first order dependency parsers. The present paper uses biaffine scores to provide an estimate of the arc scores, which are then propagated into a graphical model. The inference inside the graphical model is solved using dual decomposition. The present algorithm propagates biaffine neural scores to the graphical model and, by leveraging dual decomposition inference, the overall circuit is trained end-to-end to transfer first order information to high order information.
## 1 Introduction
Dependency parsing is the basis of many complex pipelines for problems in natural language processing such as machine summarization, machine translation, event extraction, semantic parsing, semantic role labeling (SRL), emotion analysis, dialogue systems and information processing. Thus, any error in dependency parsing could propagate to downstream tasks, and therefore any advance in this field could lead to major improvements in NLP tasks.
The first approach is transition based, which performs incremental local inference and uses data structures such as a buffer and a stack (Nivre 2008), (Buys & Blunsom 2015). This approach is limited to resolving relatively short sentences and trades off speed against accuracy.
The second approach is graph based and can handle arbitrarily long sentences, but its inference usually has high time complexity.
There are many technical issues to address in improving state-of-the-art dependency parsers. These research directions include:
1. nonprojective cases
2. high order features
3. faster inference algorithms
4. training data scarcity and the need for few shot learning
5. span-span modeling
6. reranking
The proportion of nonprojective examples in dependency parsing varies from one language to another. At inference time, the Eisner algorithm, which is a bottom-up CKY-like algorithm, cannot resolve nonprojective cases. For nonprojective parsing, many algorithms based on maximum spanning trees are used, such as (Zmigrod et al., 2021), (McDonald et al., 2005), (Levi et al., 2015), or maximum subgraph parsing for the case of non-trees in semantic dependency parsing, as shown in (Kuhlmann and Jonsson, 2015). An alternative approach to finding a maximum spanning tree is to formulate the problem as an integer linear program (ILP), as in (Riedel et al., 2006). One advantage of the ILP formulation is the capability to model many constraints, such as requiring each verb to have a single subject, in a direct way. The other advantage is that it can be used for nonprojective cases (Riedel and Clarke, 2006).
Most articles in the literature are devoted to first order parsing (Dozat and Manning, 2016), which only uses single edges and nodes as sources of input features and neglects the richness of language structure. Although (Li et al., 2022) uses graph neural networks (GNNs) and creates the graph dynamically, it still does not model high order features. A good way to leverage GNNs to consider higher order features like grandparents, grandchildren and siblings is described in (Ji et al., 2019), which recursively aggregates the neighbors' information so that the graph at each layer appears as a soft parse tree, and finally uses an MST algorithm for decoding. The novelty of this GNN is that it represents both the head representation and the dependent representation of each node. The drawback is that the number of parameters to learn is quite large, and it also suffers from the curse of dimensionality and therefore needs a lot of data to train efficiently in this high dimensional vector space. An interesting idea to get around this difficulty in GNNs is to consider each node as an edge in the dependency structure, which is explained in (Yang & Tu 2022). An alternative to GNNs is stack-pointer networks (Ma et al. 2018), where sibling and grandchild features have been modeled. Although this model is fast, it has all the limitations of deep learning, such as limited interpretability, being data hungry and struggling with the curse of dimensionality.
One way to generalize to high order features is to use ILP and view parsing as a structured prediction problem (Martins et al. 2009_b_). In order to unify all approaches, graphical models are used as a promising paradigm, as shown in (Niculae & Martins 2020). Prior knowledge can be encoded as hard constraints (Martins et al. 2009_a_) while keeping a polynomial number of constraints in general. (Martins et al. 2009_a_) uses nonlocal (high order) features. Another idea that can be combined is dual decomposition, which is inspired by optimization (Martins, Smith, Figueiredo & Aguiar 2011), (Martins, Figueiredo, Aguiar, Smith & Xing 2011). A good approach to unify the loopy belief propagation parser of (Smith & Eisner 2008) and the relaxed linear program of (Martins et al. 2009_a_) is explained in (Martins et al. 2010), which expresses the model assumptions in a factor graph. (Gormley et al. 2015) considers the feed-forward topology of inference as a differentiable circuit and considers high order interactions in a factor graph, modeling each potential function in loglinear form.
Although high order features are crucial to obtaining state-of-the-art models for dependency parsing, there is another factor which is even more important, described in (Gan et al. 2021). The basic idea is to measure the relation between spans, in contrast to measuring the relations between words in classical dependency parsing. This approach is a proper generalization, since each word is always a span of length one, and subspans can be evaluated from spans recursively, which can be considered a dynamic programming paradigm.
## 2 Related Works
An inspiring and natural approach to high order dependency parsing is described in the seminal work of (Smith & Eisner 2008), which formulates it as approximate learning and inference over a graphical model; the global constraints are encoded inside the model, and Loopy Belief Propagation (LBP) is the simple approximation used for inference. (Smith & Eisner 2008) incrementally adjusts the numerical edge weights that are fed to a fast first-order parser. One of the main difficulties is satisfying hard constraints, such as the tree constraint which ensures the resulting graph is a tree. The probability distribution over all configurations (all assignments \(\mathcal{A}\)) is defined by the following Markov random field (MRF)
\[p(\mathcal{A})=\frac{1}{\mathcal{Z}}\prod_{m}F_{m}(\mathcal{A}) \tag{2.1}\]
where \(F_{m}\) is the m-th factor function, which can be unary, binary, ternary, or global. From a different classification angle, these factors can be either hard or soft. A hard factor has value 0 on parses violating its constraint, acting as a constraint to rule them out, such as the TREE constraint, which ensures the final graph is a tree, or an even stricter constraint such as PTREE, which ensures trees are projective. Another important hard constraint is EXACTLY1, which does not allow any word to have more than one parent. Soft factors in (2.1) can easily be modeled by the following loglinear model:
\[F_{m}(\mathcal{A})=\exp\sum_{h\in features(F_{m})}\theta_{h}f_{h}(\mathcal{ A},W,m) \tag{2.2}\]
Nine types of soft constraints and seven hard constraints are described in (Smith & Eisner 2008). An interesting experimental and combinatorial exploration is to discover which set of soft and hard constraints is sufficient for reasonable accuracy, which experimentally measures the sensitivity of the final accuracy to each of these constraints.
The main difficulty in training this model is the fact that the normalizing constant in the denominator depends implicitly on the learning parameters and therefore cannot be neglected; Belief Propagation (BP), however, provides an estimate of the required marginals. Thus the gradient of the log normalizing constant can be computed as follows.
\[\nabla_{\theta}\log\mathcal{Z}=\sum_{m}\mathbb{E}_{p(\mathcal{A})}[\nabla_{ \theta}\log F_{m}(\mathcal{A})] \tag{2.3}\]
(Gormley et al. 2015) considers approximations and parsing in (Smith & Eisner 2008) as a differentiable circuit to improve accuracy. It uses a different objective function which is based on the \(L2\) distance between the approximate marginals and the gold marginals.
## 3 Main Results
### Terminology
Let \(W=W_{0},\ldots,W_{n}\) denote the input sentence, where \(W_{0}\) is the root. The corresponding part of speech (POS) tags are \(T_{1},\ldots,T_{n}\). There are \(O(n^{2})\) links in the dependency parse, which can be enumerated by \(\{L_{ij}:0\leq i\leq n,1\leq j\leq n\}\).
### Transferring Neural Potentials
By borrowing from (Dozat & Manning 2016), the scores can easily be calculated as follows:
\[h_{i}^{(arc-dep)} =MLP^{(arc-dep)}(r_{i}) \tag{3.1}\] \[h_{j}^{(arc-head)} =MLP^{(arc-head)}(r_{j})\] \[s_{i}^{(arc)} =H^{(arc-head)}U^{(1)}h_{i}^{(arc-dep)}+H^{(arc-head)}u^{(2)}\]
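A minimal numpy sketch of this biaffine arc scorer is given below; the single-layer MLPs, the dimensions and the random weights are illustrative assumptions, and in the actual model the representations \(r_{i}\) come from a trained encoder and all parameters are learned end to end.

```python
# A minimal numpy sketch of the biaffine arc scorer in equation (3.1).
# Single-layer MLPs, dimensions and random initialisation are illustrative
# assumptions; in practice r_i would come from a BiLSTM or transformer encoder.
import numpy as np

rng = np.random.default_rng(0)
n, d_enc, d_arc = 6, 16, 8                    # tokens (incl. root), encoder dim, arc dim
R = rng.standard_normal((n, d_enc))           # contextual token representations r_i

W_dep = rng.standard_normal((d_enc, d_arc))   # MLP^(arc-dep), one linear layer for brevity
W_head = rng.standard_normal((d_enc, d_arc))  # MLP^(arc-head)
U1 = rng.standard_normal((d_arc, d_arc))      # bilinear term U^(1)
u2 = rng.standard_normal(d_arc)               # bias term u^(2)

H_dep = np.tanh(R @ W_dep)                    # (n, d_arc) dependent representations
H_head = np.tanh(R @ W_head)                  # (n, d_arc) head representations

# S[j, i] = score of the arc with head j and dependent i
S = H_head @ U1 @ H_dep.T + (H_head @ u2)[:, None]
print(S.shape)                                # (n, n) arc score matrix
```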
Unary and binary potentials could be defined as follows:
\[\psi_{Y_{k}} =\exp s_{i(k)j(k)} \tag{3.2}\] \[\psi_{Y_{k},Y_{k^{\prime}}} =\psi_{Y_{k}}+\psi_{Y_{k^{\prime}}}+\phi_{Y_{k},Y_{k^{\prime}}}\]
where \(i(k)\) and \(j(k)\) are given by a simple lookup table mapping from the actual dependency graph to the graphical model. Note that the scores of the labels are defined similarly. The best way to understand the idea of transferring neural potentials is to imagine that the cheap and fast first order parser is the baseline, and the goal is to perturb its edge scores so that they are adjusted to the global constraints through high order features. There are two paradigms that can resolve this issue. The first paradigm says that the weights of the first order parser do not receive any feedback from high order features, and the misalignment is modeled by a new term \(\phi_{Y_{k},Y_{k^{\prime}}}\), which is small only when the high order features do not conflict with the first order features; we call this the perfect alignment case. The second paradigm couples first order with high order features in a bidirectional way and allows the scores of the first order parser to change through end-to-end training, where the error is propagated all the way downstream to influence and tune the first order parser. The first paradigm can be used as a warm start for the second paradigm to speed up the training process, since the initial weights are already in a reasonable region and only a perturbation is needed to satisfy the high order dependency constraints. The present paper assumes that \(\psi_{Y_{k}}+\psi_{Y_{k^{\prime}}}\) can sufficiently model the interactions of edges and that there is no need to model the mutual interaction \(\phi_{Y_{k},Y_{k^{\prime}}}\) explicitly, since the model is trained end to end, all potentials are based on neural networks, and the mutual interaction is implicitly considered. There are two main approaches to inference for the best parse. The first one is based on the sum-product algorithm, also known as belief propagation. After calculating the beliefs from the final message passing iteration, the marginal probabilities can be approximated; this is done for all variable nodes of the factor graph to obtain all parts of the parse. The second approach maximizes the objective by finding the best assignment, which is also called the MAP assignment task. The second approach is mathematically richer, since the integrality gap can be evaluated, in contrast to loopy belief propagation, which can only hope to reach convergence and whose evaluation is hard. These two approaches are explained here:
#### 3.2.1 Loopy Belief Propagation
After iteratively sending messages from variables \(y_{i}\) to factors \(\alpha\), and from factors back to variables, the algorithm will eventually converge:
\[\begin{split} m^{(t)}_{i\rightarrow\alpha}(y_{i})& \propto\prod_{\beta\in\mathcal{N}(i)\backslash\alpha}m^{(t-1)}_{ \beta\to i}(y_{i})\\ m^{(t)}_{\alpha\to i}(y_{i})&\propto\sum_{y _{\alpha}\sim y_{i}}\psi_{\alpha}(y_{\alpha})\prod_{j\in\mathcal{N}(\alpha) \backslash i}m^{(t-1)}_{j\rightarrow\alpha}(y_{i})\end{split} \tag{3.3}\]
where \(\mathcal{N}(i)\) and \(\mathcal{N}(\alpha)\) are the neighbors of \(y_{i}\) and \(\alpha\) respectively. Beliefs at each variable and factor are computed as follows:
\[\begin{split} b_{i}(y_{i})&\propto\prod_{\alpha \in\mathcal{N}(i)}m^{(t_{max})}_{\alpha\to i}(y_{i})\\ b_{\alpha}(y_{\alpha})&\propto\psi_{\alpha}(y_{ \alpha})\prod_{i\in\mathcal{N}(\alpha)}m^{(t_{max})}_{i\rightarrow\alpha}(y_{ i})\end{split} \tag{3.4}\]
This approach is used in (Gormley et al., 2015) in the inference step.
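To make the message-passing updates (3.3)-(3.4) concrete, the following is a minimal sketch of sum-product loopy BP on a toy factor graph with pairwise factors; the graph, the factor tables and the iteration count are illustrative assumptions rather than the parser's actual factor graph.

```python
# A minimal sketch of sum-product loopy belief propagation on a toy factor
# graph with binary variables and pairwise factors. In the parser each
# variable y_k is an arc indicator and the factors encode the neural
# potentials and hard constraints.
import numpy as np

variables = ["y1", "y2", "y3"]
factors = {                                   # factor -> (scope, table over scope)
    "f12": (("y1", "y2"), np.array([[1.0, 2.0], [2.0, 1.0]])),
    "f23": (("y2", "y3"), np.array([[1.0, 3.0], [3.0, 1.0]])),
    "f13": (("y1", "y3"), np.array([[2.0, 1.0], [1.0, 2.0]])),
}
neighbors = {v: [f for f, (scope, _) in factors.items() if v in scope] for v in variables}

msg_vf = {(v, f): np.ones(2) for v in variables for f in neighbors[v]}
msg_fv = {(f, v): np.ones(2) for f, (scope, _) in factors.items() for v in scope}

for _ in range(20):                           # fixed number of sweeps
    for v in variables:                       # variable -> factor messages, eq. (3.3)
        for f in neighbors[v]:
            m = np.ones(2)
            for g in neighbors[v]:
                if g != f:
                    m *= msg_fv[(g, v)]
            msg_vf[(v, f)] = m / m.sum()
    for f, (scope, table) in factors.items(): # factor -> variable messages (pairwise factors)
        a, b = scope
        msg_fv[(f, a)] = table @ msg_vf[(b, f)]
        msg_fv[(f, b)] = table.T @ msg_vf[(a, f)]

beliefs = {v: np.prod([msg_fv[(f, v)] for f in neighbors[v]], axis=0) for v in variables}
beliefs = {v: b / b.sum() for v, b in beliefs.items()}   # normalized beliefs, eq. (3.4)
print(beliefs)
```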
#### 3.2.2 MAP Inference
In the present work, this approach is chosen since it is fast, parallelizable and has a rich mathematical analysis. Linear programming (LP) relaxation is used to solve MAP inference, as explained in (Jaakkola & Sontag, 2010). The factor graph has an equivalent Markov random field (MRF), and thus the objective is as follows:
\[\begin{split}\text{MAP}(\theta)&=\max_{\mu}\sum_{i \in V}\sum_{x_{i}}\theta_{i}(x_{i})\mu_{i}(x_{i})+\sum_{ij\in E}\sum_{x_{i},x_{ j}}\theta_{ij}(x_{i},x_{j})\mu_{ij}(x_{i},x_{j})\\ &=\max_{\mu}\theta.\mu\end{split} \tag{3.5}\]
subject to :
\[\begin{split}\mu_{i}(x_{i})&\in\{0,1\}\;\forall i \in V,x_{i}\\ \sum_{x_{i}}\!\!\mu_{i}(x_{i})&=1\;\forall i\in V \\ \mu_{i}(x_{i})&=\sum_{x_{j}}\!\!\mu_{ij}(x_{i},x_{j} )\;\forall ij\in E,x_{i}\\ \mu_{j}(x_{j})&=\sum_{x_{i}}\!\!\mu_{ij}(x_{i},x_{j} )\;\forall ij\in E,x_{j}\end{split} \tag{3.6}\]
where \(\theta_{i}\) and \(\theta_{ij}\) are unary and binary potentials respectively. This is a pairwise relaxation. We can tighten the relaxation by enforcing the joint consistency of edges in a cluster of variables using the framework of lift-and-project methods, but this is out of the scope of the present paper since a fast algorithm is preferred over a highly accurate one. The lifting refers to introducing new high level variables and the projection refers to projecting back to the original variables. An alternative framework is to use cutting plane algorithms. When using these higher order methods, the number of constraints and variables grows exponentially in the size of the clusters considered and is therefore prohibitive. The constraints in (3.6) can be generalized to cluster based constraints, as done in (Batra et al., 2011), (Sontag et al., 2008), to obtain a tighter relaxation. A different LP relaxation for the MAP assignment problem reduces it to an instance of a bipartite multi-cut problem, as shown in (J. Reddi et al., 2010). A good survey of LP relaxations for MAP inference in discrete Markov random fields is given in (Kannan et al., 2019). A cutting-plane algorithm is used in the present paper as follows: after solving the pairwise LP relaxation, there are two cases. In the first case the solution is integral, the MAP assignment is found, and the algorithm terminates. To handle the second case, one can add a valid constraint to the relaxation, i.e., a constraint that does not cut off any of the integral vertices. Solving (3.6) directly is computationally expensive and is not efficient. Thus, a natural approach is to use dual decomposition, as explained in (Martins, Figueiredo, Aguiar, Smith & Xing 2011), (Koo et al. 2010), (Martins, Smith, Figueiredo & Aguiar 2011). Block coordinate descent is used for dual decomposition in (Belanger et al. 2014), while the present paper uses ADMM as leveraged in (Martins, Smith, Figueiredo & Aguiar 2011).
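As a concrete illustration of the relaxation in (3.5)-(3.6), the sketch below solves the pairwise LP for a two-node toy MRF with scipy; the potentials are made-up numbers, and since the toy graph is a tree, the relaxation is tight and the solution is integral.

```python
# A minimal sketch of the pairwise LP relaxation (3.5)-(3.6) for a toy MRF
# with two binary variables and one edge, solved with scipy's linprog.
# Variable order: mu1(0), mu1(1), mu2(0), mu2(1), mu12(00), mu12(01), mu12(10), mu12(11).
# The potentials theta are illustrative numbers, not neural scores.
import numpy as np
from scipy.optimize import linprog

theta = np.array([0.0, 1.0,                  # unary potentials of node 1
                  0.5, 0.0,                  # unary potentials of node 2
                  0.0, 2.0, 0.0, 0.0])       # pairwise potentials theta_12(x1, x2)

A_eq = np.array([
    [1, 1, 0, 0,  0,  0,  0,  0],   # sum_x1 mu1(x1) = 1
    [0, 0, 1, 1,  0,  0,  0,  0],   # sum_x2 mu2(x2) = 1
    [1, 0, 0, 0, -1, -1,  0,  0],   # mu1(0) = sum_x2 mu12(0, x2)
    [0, 1, 0, 0,  0,  0, -1, -1],   # mu1(1) = sum_x2 mu12(1, x2)
    [0, 0, 1, 0, -1,  0, -1,  0],   # mu2(0) = sum_x1 mu12(x1, 0)
    [0, 0, 0, 1,  0, -1,  0, -1],   # mu2(1) = sum_x1 mu12(x1, 1)
])
b_eq = np.array([1, 1, 0, 0, 0, 0])

res = linprog(c=-theta, A_eq=A_eq, b_eq=b_eq, bounds=[(0, 1)] * 8)
print(res.x)   # integral here; fractional solutions signal the need for tighter relaxations
```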
#### 3.2.3 Dual Decomposition
Following the ADMM approach of (Martins, Smith, Figueiredo & Aguiar 2011) to dual decomposition, first the primal problem is defined as follows:
\[\begin{split}& P:\max_{z_{s}\in y_{s}}\sum_{s=1}^{S}f_{s}(z_{s}) \\ &<u(r)>_{r\in R}\in\mathbb{R}^{|R|}\\ s.t.& z_{s}(r)=u(r)\;\forall s,r\in\bar{R}_{s}\end{split} \tag{3.7}\]
After doing relaxation and writing the dual form, the master problem is:
\[\begin{split}& D:\min_{\lambda=<\lambda_{1},\ldots,\lambda_{S}>} \sum_{s=1}^{S}g_{s}(\lambda_{s})\\ & s.t.\sum_{s:r\in\bar{R}_{s}}\lambda_{s}(r)=0\;\forall r\in R \end{split} \tag{3.8}\]
where \(g_{s}(\lambda_{s})\) are the solution to the following slaves
\[\max_{z_{s}\in Z_{s}}f_{s}(z_{s})+\sum_{r\in\bar{R}_{s}}\lambda_{s}(r)z_{s}(r )-\frac{\rho}{2}\sum_{r\in\bar{R}_{s}}(z_{s}(r)-u^{t}(r))^{2} \tag{3.9}\]
Since the scores \(f_{s}(z_{s})\) in (3.9) is modeled by a linear form like \(f_{s}(z_{s})=\sum_{r\in R_{s}}\theta_{s}(r)z_{s}(r)\), the slaves can be written as:
\[\max_{z_{s}\in Z_{s}}\;\sum_{r\in\bar{R}_{s}}(\theta_{s}(r)+\lambda_{s}(r))z_ {s}(r)-\frac{\rho}{2}\sum_{r\in\bar{R}_{s}}(z_{s}(r)-u^{t}(r))^{2} \tag{3.10}\]
Note that \(\theta_{s}(r)\) in (3.10) are neural potentials that are estimated from the deep learning module and these coefficients vary at each iteration of the overall circuit. To solve (3.10) using a generic quadratic solver, it is written in the following form:
\[\max_{z_{s}\in Z_{s}}\;\sum_{r\in\bar{R}_{s}}(\theta_{s}(r)+\lambda_{s}(r)+ \rho u^{t}(r))z_{s}(r)-\frac{\rho}{2}\sum_{r\in\bar{R}_{s}}(z_{s}(r))^{2} \tag{3.11}\]
The Lagrange variables can be updated as follows:
\[\lambda_{s}^{t+1}(r)=\lambda_{s}^{t}(r)-\eta_{t}(z_{s}^{t+1}(r)-u^{t+1}(r)) \tag{3.12}\]
where \(\eta_{t}\) is the step size. Applying the ADMM algorithm, \(u\) has a closed-form solution as a simple average, obtained from the projected subgradient step:
\[u^{t+1}(r)=\frac{1}{\delta(r)}\sum_{s:r\in\bar{R}_{s}}z_{s}^{t+1}(r) \tag{3.13}\]
where \(\delta(r)\) is the cardinality of the set \(\{s:r\in R_{s}\}\). The loop iterates until the primal and dual residuals defined in (Martins, Smith, Figueiredo & Aguiar 2011) fall below a threshold. To fully appreciate the details of these variables, consider a sentence with 5 tokens as follows: \(w_{1},\ldots,w_{5}\)
\(z_{3f}^{gp}=\{y_{34},y_{45}\}\)
The minimal dependencies are defined as all second order dependencies that are either consecutive siblings or grandparents; they are shown in Figure 1, with the corresponding constraints shown in Figure 2.
Figure 1: right and left minimal dependencies
Figure 3 shows the factor graph generated for 5-token sentences, which includes 6 constraints and 7 overlapping basic components. Note that only two types of higher order constraints, namely grandparent and consecutive siblings, are used in the present paper, since there is always a tradeoff between computational complexity and exactness of the solutions. These two types of constraints are more essential than the rest and have more impact in any selection process.
Figure 2: forward constraints
In order to connect the dual decomposition to the first order deep learning model, a mapping is defined that assigns a score to each of the components. The weighted combination of these first order scores determines how the solution of the \(z_{s}\) variables shapes the global dependency parsing graph:
\[f_{s}(z_{s})=\sum_{r}z_{(s,r)}\theta(s,r) \tag{3.14}\]
Once the \(z\) variables are solved, the dependency graph can be read off. Note that the number of components of each slave is fixed, but each basic component can be connected to any number of constraints. The equality constraint in equation (3.7) ensures the consistency of the selection.
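The outer loop combining the slave update (3.11), the consensus update (3.13) and the dual update (3.12) can be sketched as follows. This is a simplified illustration rather than the paper's implementation: the slave feasible sets \(Z_{s}\) are relaxed to boxes so that (3.11) has a per-coordinate closed form, and the part layout, the scores and the constants are illustrative assumptions.

```python
# A minimal sketch of the ADMM dual-decomposition loop in (3.11)-(3.13).
# Each slave's feasible set Z_s is relaxed here to the box [0,1]^|R_s|;
# in the actual parser each slave encodes a combinatorial structure
# (grandparent or consecutive-sibling configuration).
import numpy as np

parts = ["y12", "y23", "y34"]                      # shared arc indicators r in R
slaves = {"s_gp": ["y12", "y23"], "s_sib": ["y23", "y34"]}
rng = np.random.default_rng(0)
theta = {s: {r: float(rng.standard_normal() + 1.0) for r in rs}
         for s, rs in slaves.items()}              # assumed neural scores theta_s(r)

rho, eta = 1.0, 0.1
lam = {s: {r: 0.0 for r in rs} for s, rs in slaves.items()}
u = {r: 0.5 for r in parts}
delta = {r: sum(r in rs for rs in slaves.values()) for r in parts}

for t in range(50):
    z = {}
    for s, rs in slaves.items():                   # slave updates, eq. (3.11)
        z[s] = {r: float(np.clip((theta[s][r] + lam[s][r] + rho * u[r]) / rho, 0.0, 1.0))
                for r in rs}
    for r in parts:                                # consensus update, eq. (3.13)
        u[r] = sum(z[s][r] for s, rs in slaves.items() if r in rs) / delta[r]
    for s, rs in slaves.items():                   # dual update, eq. (3.12)
        for r in rs:
            lam[s][r] -= eta * (z[s][r] - u[r])

print(u)                                           # consensus arc indicators, to be rounded
```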
### Training And Prediction
Figure 3: factor graph
By drawing inspiration from (Gormley et al., 2015), everything is trained end to end like a circuit, and the inference mechanism of the factor graph is coupled with the neural estimation of edge scores. A drawback of the present approach, like any other deep learning model, is the need for large amounts of training data, since deep learning models are data hungry and suffer from the curse of dimensionality. There is a compromise between the speed of convergence and the accuracy of modeling; this is the reason that very few slaves (constraints) are considered for high order modeling. The more constraints are added to the model, the longer it takes to converge. Minimum Bayes risk (MBR) decoding is used to produce a tree that minimizes the expected loss as follows:
\[\begin{split} h_{\theta}(x)&=\operatorname*{arg\, min}_{\hat{y}}\mathbb{E}_{y\sim p_{\theta}(.|x)}l(\hat{y},y)\\ &=\operatorname*{arg\,max}_{\hat{y}}\sum_{i:\hat{y}_{i}=ON}p_{ \theta}(y_{i}=ON|x)\end{split} \tag{3.15}\]
At test time, maximum spanning tree automatically ensures that the resulting graph is a tree.
```
1:Input: batch of sentences
2:repeat
3: calculate word embedding
4: calculate the scores using (3.1)
5: calculate the neural potentials using first order estimation (3.2)
6:repeat
7:for each slave \(s=1,\ldots,S\) do
8: make an update for slave \(z_{s}\) using (3.11)
9:endfor
10: update u using (3.13)
11: update \(\lambda\) using (3.12)
12:\(t\gets t+1\)
13: until primal and dual residuals are below a threshold
14: round \(u,z,\lambda\)
15: backpropagate the loss in (3.15) to adjust the neural scores
16: until maxIter
17:if not tree
18: use maximum spanning tree algorithm
19: return the dependency parsing tree
```
**Algorithm 1** outputs dependency parsing tree
### Experiments
The Universal Dependencies dataset for English is used for all of the experiments in the present paper. The maximum sentence length is 71 tokens, but only sentences with at most 60 tokens are used for training, since the few datapoints available for longer sentences are not enough for adequate training and introduce noise.
The benefit of high order modeling is expected to be larger for languages whose structures show high deviations between first order and high order parsers.
|
2305.16548
|
Annotating and Detecting Fine-grained Factual Errors for Dialogue
Summarization
|
A series of datasets and models have been proposed for summaries generated
for well-formatted documents such as news articles. Dialogue summaries,
however, have been under explored. In this paper, we present the first dataset
with fine-grained factual error annotations named DIASUMFACT. We define
fine-grained factual error detection as a sentence-level multi-label
classification problem, and we evaluate two state-of-the-art (SOTA) models on
our dataset. Both models yield sub-optimal results, with a macro-averaged F1
score of around 0.25 over 6 error classes. We further propose an unsupervised
model ENDERANKER via candidate ranking using pretrained encoder-decoder models.
Our model performs on par with the SOTA models while requiring fewer resources.
These observations confirm the challenges in detecting factual errors from
dialogue summaries, which call for further studies, for which our dataset and
results offer a solid foundation.
|
Rongxin Zhu, Jianzhong Qi, Jey Han Lau
|
2023-05-26T00:18:33Z
|
http://arxiv.org/abs/2305.16548v1
|
# Annotating and Detecting Fine-grained Factual Errors for Dialogue Summarization
###### Abstract
A series of datasets and models have been proposed for summaries generated for well-formatted documents such as news articles. Dialogue summaries, however, have been under explored. In this paper, we present the first dataset with fine-grained factual error annotations named DiaSumFact. We define fine-grained factual error detection as a sentence-level multi-label classification problem, and we evaluate two state-of-the-art (SOTA) models on our dataset. Both models yield sub-optimal results, with a macro-averaged F1 score of around 0.25 over 6 error classes. We further propose an unsupervised model EnDeRanker via candidate ranking using pretrained encoder-decoder models. Our model performs on par with the SOTA models while requiring fewer resources. These observations confirm the challenges in detecting factual errors from dialogue summaries, which call for further studies, for which our dataset and results offer a solid foundation.1
Footnote 1: The dataset and code are available at [https://github.com/731935354/Dia-Sum-Fact](https://github.com/731935354/Dia-Sum-Fact)
## 1 Introduction
Factual inconsistency in abstractive summarization -- a phenomenon where model-generated summaries contain facts that are inconsistent with the source document -- is a widely known problem and has been studied extensively in the document summarization community. An example is shown in Figure 1, where the source document is a dialogue -- the type of documents that this paper focuses on.
Existing work covers topics on factual inconsistency including error typology and factuality annotations of state-of-the-art neural summarization models Maynez et al. (2020); Huang et al. (2020); Pagnoni et al. (2021); Goyal and Durrett (2021); Fabbri et al. (2021); Gao and Wan (2022); Tang et al. (2022), automatic factual error detectors Wang et al. (2020); Goyal and Durrett (2020); Kryscinski et al. (2020); Durmus et al. (2020); Zeng et al. (2021); Scialom et al. (2021), methods to correct factual errors in summaries Cao et al. (2020); Dong et al. (2020); Chen et al. (2021) and methods to produce factually more consistent summaries Zhao et al. (2020); Cao and Wang (2021); Tang et al. (2022); Zhu et al. (2021); Aralikatte et al. (2021); Chen et al. (2021); Balachandran et al. (2022). Almost all of these works focus on news summarization based on two datasets: CNN/DailyMail Hermann et al. (2015); Nallapati et al. (2016) and XSum Narayan et al. (2018).
Dialogue summarization (cf Figure 1), which aims to produce a condensed version of a dialogue while maintaining its salient information, is equally important due to its application to summarizing meeting transcripts Li et al. (2019); Zhu et al. (2020); Zhong et al. (2022), daily conversations Chen and Yang (2020); Liu and Chen (2021); Feng et al. (2021), customer service dialogues Liu et al. (2019); Zou et al. (2021) and medical dialogues Joshi et al. (2020); Krishna et al. (2021). However, factual consistency in dialogue summarization is under explored as there are currently no benchmark datasets that contain fine-grained error categories. This paper aims to fill in this gap.
To investigate factual consistency in dialogue
Figure 1: Example summaries that are factually consistent and inconsistent with a source dialogue.
summarization, we release DiaSumFact with fine-grained sentence-level annotations regarding factual consistency for 475 model summaries (1,340 sentences) from six neural dialogue summarization models on two popular datasets: SAMSum Gliwa et al. (2019) and QMSum Zhong et al. (2021). We adopt a two-dimensional typology that considers the semantic roles and verifiability of error spans separately.
We formulate factual error detection as a sentence-level multi-label classification task and use DiaSumFact to evaluate two state-of-the-art factual error detection models designed for document summarization. As there are no existing error detection models for fine-grained error categories, we adapt the two binary classification models to fit our task. Empirical results show that they do not work well on the task, indicating its difficulty and the domain gap between document summarization and dialogue summarization.
We then propose two models: BertMulti and EnDeRanker. BertMulti is a multi-class classification model trained on synthetic data, which is created by corrupting sentences from reference summaries Kryscinski et al. (2020). EnDeRanker is a simple unsupervised model that can leverage any pretrained encoder-decoder model to detect factual errors. Given a model-generated summary sentence containing a span of interest for error detection, EnDeRanker computes log likelihood scores for the sentence and its variants containing replacement spans fetched from the source dialogue. The scores are computed as BARTScore Yuan et al. (2021), which will be explained in Section 4.2. We compare the scores of the sentences to determine if the span of interest, and hence the summary sentence, contains a factual error. We run experiments with T5 Raffel et al. (2020), BART Lewis et al. (2020) and PEGASUS Zhang et al. (2020), fine-tuned either on news summarization or dialogue summarization, as the encoder-decoder for EnDeRanker. The results show that BertMulti and EnDeRanker perform on par with the adapted state-of-the-art models in terms of macro-averaged F1.
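A minimal sketch of this candidate-ranking step is shown below, assuming a BART summarization checkpoint loaded through HuggingFace Transformers and a toy candidate set; the checkpoint name, the template, the candidate spans and the top-1 decision rule are illustrative assumptions, and the actual EnDeRanker additionally handles span extraction from the dialogue and different span types.

```python
# A minimal sketch of the candidate-ranking idea behind EnDeRanker: score a
# summary sentence and its span-replaced variants by the per-token
# log-likelihood of a pretrained encoder-decoder conditioned on the dialogue,
# then check whether the original span is top ranked.
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tok = AutoTokenizer.from_pretrained("facebook/bart-large-cnn")   # assumed checkpoint
model = AutoModelForSeq2SeqLM.from_pretrained("facebook/bart-large-cnn").eval()

def sentence_score(dialogue: str, sentence: str) -> float:
    enc = tok(dialogue, return_tensors="pt", truncation=True)
    labels = tok(sentence, return_tensors="pt", truncation=True).input_ids
    with torch.no_grad():
        out = model(**enc, labels=labels)
    return -out.loss.item()                  # mean log-likelihood per target token

dialogue = "Amanda: I baked cookies. Jerry: Great, bring me some!"
span_candidates = ["Amanda", "Jerry"]         # e.g. speaker names fetched from the dialogue
template = "{} baked cookies."
scores = {c: sentence_score(dialogue, template.format(c)) for c in span_candidates}
predicted_error = max(scores, key=scores.get) != "Amanda"   # flag if original span is not top ranked
print(scores, predicted_error)
```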
Motivated by the strong complementarity between models, we further present two ensemble models combining the four models above. The results, while exceeding those of the individual models, are still far from indicating a practical model for factual error detection over dialogue summaries. This calls for further studies, for which our dataset and results form a solid foundation.
To summarise, this paper makes the following contributions:
* We annotate and present DiaSumFact, the first dataset with fine-grained sentence-level factual errors for dialogue summarization, providing rich annotation including error classes, erroneous spans and explanation.
* We investigate the effectiveness of adapting state-of-the-art factual error detection models for document summarization on model-generated dialogue summaries, demonstrating the difficulty of the task.
* We propose BertMulti, a weakly-supervised multi-class classifier and EnDeRanker, an unsupervised factual error detector that requires no human labeled data for training and can leverage existing pre-trained encoder-decoder models. Both models perform on par with adapted SOTA factual error detection models for document summarization.
* Our experiments and analyses reveal the strengths and weaknesses of different factual error detection models, and point out future directions to improve them.
## 2 Related Work
**Error typology and datasets.** There are a few existing datasets on factual errors. Some of them use binary (factually consistent or inconsistent) labels Kryscinski et al. (2020); Wang et al. (2020) and 5-point Likert Scale labels Fabbri et al. (2021); Gao and Wan (2022), which require lower efforts to annotate, but they do not provide information on how and where factual errors were made. To support fine-grained analysis, multi-class and multi-dimensional typologies are designed. Pagnoni et al. (2021) propose a linguistically motivated annotation framework that covers semantic frame errors, discourse errors and content verifiability errors. Goyal and Durrett (2021) use a 2-dimensional typology, where content verifiability and semantic error types are considered separately. Cao et al. (2022) focus on hallucinations and consider both factual and non-factual hallucination. Tang et al. (2022) unify different error types from previous
works into a hierarchical taxonomy. These datasets mostly focus on news summaries.
DialSummEval Gao and Wan (2022) is another popular dataset that contains annotation on factual consistency of model-generated dialogue summaries. The core difference of our work is that we consider fine-grained error categories and the text span (i.e., starting and ending position) of an error. Thus it provides a more elaborate, diagnostic assessment of what goes wrong, and where, when a summary is not factually consistent. In comparison, DialSummEval only considers a coarse-grained assessment of factuality using a 5-point Likert Scale Joshi et al. (2015), without specifying the actual error type (e.g., entity error).
**Factual error detection models.** Most popular factual error detectors are based on either textual-entailment or question-answering (QA).
Textual-entailment-based models are generally binary classifiers that take as input the source document and a model-generated summary. For example, Kryscinski et al. (2020) train binary factual error classifiers using synthetic data. Zeng et al. (2021) use a gradient-based adversarial method to improve model accuracy. Goyal and Durrett (2020) leverage dependency-level entailment achieving better performance and interpretability.
QA-based models first generate questions from a model-generated summary (or source dialogue), and then answer those questions based on its source dialogue (or a model-generated summary). The factual consistency is decided by the similarity between the ground truth answer and the predicted answer. For example, Wang et al. (2020); Durmus et al. (2020) use a precision-oriented method that generates questions from model-generated summaries and answer them using the source document. Scialom et al. (2019) instead generate questions from a source document and answer them using the summary, making it a recall-oriented method. Scialom et al. (2021) combine recall and precision-oriented techniques into a single framework. Fabbri et al. (2022) refine the model component design and obtain a QA-based method that outperforms textual-entailment-based methods.
Our unsupervised method EnDeRanker compares a span (e.g., a person name) in a model-generated sentence with candidates (e.g., other people's names in the dialogue) and decides the factual consistency of the span based on its rank among the candidates. It achieves macro F1 comparable to adapted SOTA factual error detectors for document summarization but requires no labelled resources.
## 3 The DiaSumFact Dataset
This section presents our DiaSumFact dataset and procedures to construct the dataset.
### Data Source
To cover dialogues from different domains, we selected two popular datasets, SAMSum Gliwa et al. (2019) and QMSum Zhong et al. (2021). SAMSum contains daily conversations and gold summaries. QMSum comes with queries and answers based on meeting transcripts. The answers to each query can be seen as a summary of an aspect of the meeting transcript.
For both SAMSum and QMSum, we randomly sampled 60 dialogues and their summaries from its test split.2 For QMSum, we only chose queries whose gold utterances contain no more than 700 tokens according to the BERT tokenizer.3 We manually filtered out dialogues with sensitive content (e.g., profanity and potential bias on gender or race). More statistics on the dataset can be found in Appendix Table 5 and Table 6.
Footnote 2: For QMSUM we also have the queries, in addition to the dialogues and summaries.
Footnote 3: \(50\%\) of the queries on aspects of meeting transcripts satisfy this constraint.
### Summary Generation Models
We generally choose models with publicly accessible pretrained model checkpoints or generated outputs instead of training models ourselves.
On SAMSum, we use five models: **BART**Lewis et al. (2020), **PEGASUS**Zhang et al. (2020), **S-BART**Chen and Yang (2021), **CONDIGSUM**Liu et al. (2021) and **GPT-3**Brown et al. (2020). For **S-BART** and **CONDIGSUM**, we obtain model outputs from the original papers. For **BART** and **PEGASUS**, we generate outputs by running their pre-trained models.4 For **GPT-3**, we fine-tune _curie_ on the SAMSum dataset and generate summaries using the official API.5
Footnote 4: We use _limydub/bart-large-sumsum_ for BART and _transformersbook/pegasus-sumsum_ for PEGASUS. Both are from [https://huggingface.com/models](https://huggingface.com/models).
On QMSum, we use three models: **PEGASUS**, **BART** and **DialogLM**Zhong et al. (2022). Since we only focus on specific queries (i.e., queries that
only ask about an aspect of a meeting, instead of summarizing the whole meeting), which is a subset of the original dataset, we fine-tuned them using specific queries only. The fine-tuned models achieve ROUGE scores that are better or comparable to state-of-the-art models on the complete dataset.6
Footnote 6: The ROUGE scores of the fine-tuned models are shown in Appendix A.2. We also tried 2-shot GPT-3 but found that it did not work well in preliminary experiments, so we did not include it.
### Typology of Factual Errors
Motivated by Goyal and Durrett (2021); Pagnoni et al. (2021), we adopt a 2-dimensional typology that treats semantic role and content verifiability of error spans separately.
On the semantic role dimension, we consider six error classes **Entity Error (EntE)**, **Predicate Error (PredE)**, **Circumstance Error (CirE)**, **Coreference Error (CorefE)**, **Link Error (LinkE)** and **Others**, with definitions and examples shown in Table 1. EntE, PredE, CirE are semantic frame errors, and CorefE, LinkE are discourse errors. When a sentence in the summary does not contain any factual error, we label it as **No Error**.
For content verifiability, we consider **Intrinsic Error** (i.e., the error span consists of tokens from the source dialogue) and **Extrinsic Error** (i.e., the error span consists of tokens not mentioned in the source dialogue), a.k.a. hallucinations. This dimension is only defined for EntE, PredE and CirE.
### Annotation Procedure
We recruited 12 workers for the annotation task, including nine PhD students majoring in natural language processing and three Master's students majoring in linguistics and information technology. All annotators are fluent English speakers. We take an in-house annotation approach because a trial on Amazon Mechanical Turk did not yield meaningful results, even though we imposed strict constraints to recruit high-quality crowd workers. The 12 annotators are randomly divided into six pairs, where each pair annotates 10 dialogues from each dataset.
The annotation is done in three stages: pilot study, full annotation and annotation adjudication.
An annotation task involves analysing a dialogue and the summaries generated by all corresponding models. During the pilot study, annotators are required to go through the definition and examples for each error class to learn the labelling typology. Then, they will work on two pilot tasks, which are the same for all workers. For each task, a source di
\begin{table}
\begin{tabular}{l l l l} \hline \hline & Lucas: Where r u? I’m waiting at the airport. & **Example Summary** & **In/Ex** \\ & Vanessa: There was a foul-up with the flight. I’m trying to get another ticket. & & \\ Dialogue & Lucas: OMG. How come? & & \\ & Vanessa: No bloody idea. All of the flights are booked cos students are returning from holidays. & & \\ & Lucas: I’ve called the airport and they said there’s a flight to New York at 9:45 p. m. & & \\ & Vanessa: Great, I’ll book it now. & & \\ \hline
**Error** & **Description** & **Example Summary** & **In/Ex** \\ \hline EntE & The core arguments or their attributes in a semantic frame are wrong, such as the subjects and objects. & _Vanessa is waiting at the airport._ & In \\ \hline PredE & The predicate, which is usually a verb, of a semantic frame is wrong. & _Lucas has emailed the airport and got some information about the flight to New York._ & Ex \\ \hline CirE & The non-core arguments, such as location modifiers, temporal modifiers are wrong. & _Lucas is waiting at the train station._ & Ex \\ \hline CorefE & A pronoun or a reference (e.g., this picture) has a wrong antecedent or has no antecedents. & _Vanessa is trying to get another ticket for themselves._ & N/A \\ \hline LinkE & The relationship, e.g., a causal relationship, between statements is wrong. & _Vanessa will book the flight to New York at 9:45 pm because students are returning from holidays._ & N/A \\ \hline Others & This class covers the errors that do not fall into the above classes. & / & N/A \\ \hline \hline \end{tabular}
\end{table}
Table 1: Factual error type descriptions and examples. **In/Ex** refers to Intrinsic Error (In) and Extrinsic Error (Ex).
alogue and a model-generated summary are shown at the same time, and the annotator needs to label any factual errors in each individual sentence in the summary. When all sentences in the summary are done, another summary generated by a different model will be shown. Models are anonymized and their generations are shown in random order.
During the full annotation stage, we assign each annotator 10 tasks from each dataset, which are different from the tasks in pilot study. The annotations are only done for the semantic role dimension.
In the adjudication stage, the two annotators of a pair along with an annotation quality controller (one of the authors of this paper) go through the annotations to resolve any disagreements,
and detailed notes are taken to record the final decisions (these notes are released as part of the dataset, as they can be useful for future analysis). Annotation mistakes are also corrected in this process. In the end, a total of 1340 sentences (\(99.7\%\)) with agreed annotations were obtained, while the rest of the sentences were discarded because no agreement could be reached.
Note that the annotations on the content verifiability dimension are manually created by the annotation quality controller based on the detailed meeting notes of the last stage. It is a product of a post-annotation process because the original annotators did not explicitly label the error type as extrinsic or intrinsic. Instead, the annotators mark an **Extrinsic Error** for all error spans that are not mentioned in the source dialogue. The annotation quality controller takes this information and further splits them into EntE, PredE and CirE based on the semantic role of an error span, and assigns **Intrinsic Error** to all original EntE, PredE and CirE, thus obtaining a 2-dimensional annotation.
### Inter-annotator Agreement
We use Cohen's Kappa (McHugh, 2012) to evaluate the inter-annotator agreement. The scores in each group before adjudication are as follows. We first evaluate the agreement for binary labels by merging all error types into a single negative class. The scores are 0.39, 0.44, 0.57, 0.59, 0.43, 0.51. For multi-class labels, the scores are 0.34, 0.33, 0.44, 0.31, 0.31, 0.25. After adjudication we have full agreement for all instances (as explained in Section 3.4).
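For reference, these pairwise agreement scores can be reproduced with an off-the-shelf implementation of Cohen's Kappa; the label lists in the sketch below are toy examples rather than actual annotations.

```python
# Pairwise inter-annotator agreement with Cohen's Kappa (scikit-learn).
# The two label lists are illustrative only, not real annotations.
from sklearn.metrics import cohen_kappa_score

annotator_a = ["No Error", "EntE", "PredE", "No Error", "EntE"]
annotator_b = ["No Error", "EntE", "EntE", "No Error", "No Error"]

# Multi-class agreement over the full error typology
print(cohen_kappa_score(annotator_a, annotator_b))

# Binary agreement: merge all error types into a single class
to_binary = lambda ls: ["Error" if l != "No Error" else "No Error" for l in ls]
print(cohen_kappa_score(to_binary(annotator_a), to_binary(annotator_b)))
```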
### Results on the Summarization Models
We summarize the performance results of the summarization models as derived from the annotations in this subsection. Figure 2 and Figure 3 show the factual error class distribution of the summarization models evaluated on SAMSum and QMSum.
Overall, \(33.3\%\) and \(41.9\%\) of the sentences in model-generated summaries contain one or more factual errors in SAMSum and QMSum, respectively. The average number of errors for a factually inconsistent sentence is \(1.14\). This indicates that factual errors are widespread in the model-generated summaries, thus emphasizing the importance of resolving factual errors in dialogue summarization.
Semantic frame errors (i.e., EntE, PredE and CirE) are more frequent than discourse errors (i.e., CorefE and LinkE) overall, while their distributions differ between the two datasets. SAMSum has a higher proportion of factually inconsistent sentences caused by semantic frame errors (\(76.9\%\)) than QMSum has (\(58.9\%\)), while QMSum has a higher proportion of discourse errors (\(24.0\%\)) than SAMSum (\(11.3\%\)). We observe two main reasons for this
Figure 3: Intrinsic and Extrinsic error distribution for EntE, PredE and CirE of different summarization models on SAMSum and QMSum.
Figure 2: Semantic factual error distribution of different summarization models on SAMSum and QMSum.
discrepancy. First, the sentences in QMSum are longer and exhibit more complex discourse structures, especially causal relations, which can be challenging for models to summarize. Second, models fine-tuned on QMSum tend to copy large chunks of the input dialogue. Many pronouns are directly copied from the source dialogue without proper context, causing Coreference Errors (CorefE).
Among the different summarization models, BART and PEGASUS have been evaluated on both datasets, where BART consistently generates summaries with fewer factual errors. On SAMSum, \(24.0\%\) of the sentences generated by BART contain factual errors, which is the lowest proportion, while the highest (\(58.7\%\)) is produced by GPT-3. CONDIGSUM and S-BART are variants of BART that achieve better ROUGE scores than BART using contrastive learning and dialogue structure information, respectively. Our results reveal that both models produced more sentences with factual errors than BART did, indicating that improvement in ROUGE may not help with the factual consistency of summaries. This result emphasizes the importance of more benchmark datasets for dialogue summarization model evaluation. On QMSum, BART is still the best, while DialogLM produced the highest proportion of sentences with factual errors.
On the content verifiability dimension, models on QMSum produce more extrinsic errors than on SAMSum. A potential reason is that reference summaries in QMSum contain more tokens outside the source dialogue. For SAMSum, all models are mainly dominated by intrinsic errors, while GPT-3 produces more extrinsic errors than intrinsic ones.
## 4 Detecting Factual Errors
In this section, we automate factual error detection in model-generated summaries. We first adapt two state-of-the-art factual error detection models from document summarization. We then propose a weakly supervised multi-class classifier and a simple yet effective unsupervised model that can utilize any pretrained encoder-decoder model to identify factual errors. Finally, we present ensemble-based models combining all techniques above.
**Problem statement.** We formulate factual error detection as a sentence-level multi-label classification task, i.e., given an input dialogue and a sentence from a model-generated summary, we classify whether the sentence contains any (semantic role) factual errors as outlined in Section 3.3.
### Adapted State-of-the-Art Models
**DAE**Goyal and Durrett (2020) is based on dependency-level entailment, which predicts whether a dependency arc in a model-generated sentence is entailed by the input document (e.g., a dialogue in our problem). To adapt it to our problem, we design rules to map from dependency arc types to our factual error classes, as shown in Table 2. Given a summary sentence, we use the trained DAE provided by the authors to predict dependency arcs in the sentence. The union of all factual error classes corresponding to the types of the predicted erroneous dependency arcs will be used as our factual error predictions. Note that not all factual error classes have corresponding dependency arc types and hence not all error classes can be detected by this model.
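For concreteness, the mapping in Table 2 can be applied as in the following minimal sketch; the assumed DAE output format (a list of arc types flagged as erroneous) is an illustrative simplification rather than the model's actual interface.

```python
# Map predicted erroneous dependency arc types to factual error classes (Table 2).
ARC_TO_ERROR = {
    **{arc: "EntE" for arc in [
        "nsubj", "obj", "obl-agent", "iobj", "dobj", "nmod", "vocative",
        "appos", "nummod", "compound", "amod", "det", "clf", "flat"]},
    "obl:tmod": "CirE",
    "advmod": "CirE",
    "aux": "PredE",
}

def arcs_to_error_classes(erroneous_arc_types):
    """erroneous_arc_types: dependency arc types flagged as not entailed by DAE."""
    errors = {ARC_TO_ERROR.get(arc, "Others") for arc in erroneous_arc_types}
    return errors or {"No Error"}

print(arcs_to_error_classes(["nsubj", "advmod"]))  # {'EntE', 'CirE'}
```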
**QAFactEval**Fabbri et al. (2022) is a QA-based factual error detector. Given a question generation model (QG) and a question answering model (QA), which are trained on existing datasets for the question answering task, it works as follows: (1) Question-worthy spans (\(s\)), which are noun phrases and named entities, are extracted from a model-generated summary. (2) For each \(s\), a question is generated by QG based on \(s\) and the summary. (3) The QA model predicts an answer \(a\) based on the question and the source document. (4) The similarity between \(s\) and \(a\) is measured by a similarity metric. (5) The factual consistency of the summary is decided based on the similarity scores for all \(s\) in it.
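The five-step workflow above can be summarised as in the sketch below; the callables stand in for the trained QG, QA and similarity components of QAFactEval and are assumptions used only to make the control flow concrete, not its actual API.

```python
def qa_based_error_check(summary, dialogue, extract_spans, qg_model, qa_model,
                         similarity, t_qa):
    """Hedged sketch of the QA-based detection loop; all callables are placeholders."""
    erroneous_spans = []
    for span in extract_spans(summary):          # (1) noun phrases / named entities
        question = qg_model(span, summary)       # (2) question from span + summary
        answer = qa_model(question, dialogue)    # (3) answer from the source dialogue
        if similarity(span, answer) < t_qa:      # (4)-(5) low similarity -> error
            erroneous_spans.append(span)
    return erroneous_spans
```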
We use the learned metric LERC (QuIP) mentioned in the paper and report a factual error if the similarity score between \(s\) and \(a\) is smaller than a threshold \(T_{qa}\) (a hyper-parameter). Question-worthy spans of different semantic roles correspond
\begin{table}
\begin{tabular}{l l} \hline \hline Dependency Arc Types & Error Class \\ \hline nsubj, obj, obl-agent, iobj, dobj, \\ nmod, vocative, appos, nummod, compound, amod, det, clf, flat & EntE \\ \hline obl:tmod, advmod & CirE \\ \hline aux & PredE \\ \hline other arc types & Others \\ \hline \hline \end{tabular}
\end{table}
Table 2: Rules to map from dependency arc types to our factual error classes.
to our semantic role-based factual error classes, as outlined in Algorithm 1 in the Appendix. We obtain the semantic role of a question-worthy span using a pre-trained structured prediction model in AllenNLP 2.9.3.7
Footnote 7: We use _structured-prediction-srl-bert_ and choose the semantic role of the shortest span containing \(s\).
**Weakly-Supervised-Classifier** is a multi-class classifier that we construct. It takes as input a source dialogue and a generated summary sentence to predict factual error classes in the sentence, motivated by Kryscinski et al. (2020). We create synthetic training data by corrupting sentences in reference summaries as follows.
For Entity Error, Circumstance Error and Coreference Error, we replace named entities or pronouns with those randomly picked from the same category. For Predicate Error, we replace verbs with other randomly chosen verbs. We match the form (e.g., tense) of the selected verbs to the original one. Negative replacements for all above classes are extracted from either the source dialogue or the whole dataset. For Link Error, we replace a discourse marker corresponding to causal relation (e.g., because) with another one indicating a reversed causal relation (e.g., so). More details on our synthetic data generation are in Appendix A.3.1.
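As an illustration, an Entity Error corruption of a reference sentence might look like the sketch below; the input format (a mapping from entity text to named-entity class) and the function name are assumptions for illustration, and the other error classes are corrupted analogously.

```python
import random

def corrupt_entity(sentence, entities):
    """Swap one named entity in a reference sentence with another entity of the
    same class, yielding a synthetic Entity Error (EntE) training example."""
    for ent_text, ent_class in entities.items():
        if ent_text in sentence:
            same_class = [e for e, c in entities.items()
                          if c == ent_class and e != ent_text]
            if same_class:
                corrupted = sentence.replace(ent_text, random.choice(same_class), 1)
                return corrupted, "EntE"
    return sentence, "No Error"

dialogue_entities = {"Lucas": "PERSON", "Vanessa": "PERSON", "New York": "GPE"}
print(corrupt_entity("Lucas is waiting at the airport.", dialogue_entities))
```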
We use cross entropy loss to train the classifier, which is based on BERT Devlin et al. (2019) with a linear layer on top of [CLS] representation for classification. We concatenate the source dialogue and a sentence, delimited by [SEP], as input.
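A minimal sketch of this classifier using the Hugging Face transformers library is shown below; the number of classes (No Error plus the six error classes) and the hyper-parameters are illustrative assumptions rather than the exact training configuration.

```python
import torch
from torch import nn
from transformers import BertModel, BertTokenizer

class BertMulti(nn.Module):
    """BERT encoder with a linear layer over the [CLS] representation."""
    def __init__(self, num_classes=7, model_name="bert-base-uncased"):
        super().__init__()
        self.bert = BertModel.from_pretrained(model_name)
        self.classifier = nn.Linear(self.bert.config.hidden_size, num_classes)

    def forward(self, input_ids, attention_mask):
        hidden = self.bert(input_ids=input_ids, attention_mask=attention_mask)
        cls = hidden.last_hidden_state[:, 0]      # [CLS] token representation
        return self.classifier(cls)               # class logits

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertMulti()
# The dialogue and the summary sentence are joined with [SEP] by the tokenizer.
enc = tokenizer("source dialogue ...", "summary sentence ...",
                return_tensors="pt", truncation=True, max_length=512)
logits = model(enc["input_ids"], enc["attention_mask"])
loss = nn.CrossEntropyLoss()(logits, torch.tensor([0]))   # toy label: 0 = No Error
```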
### EnDeRanker
Here, we present our proposed unsupervised model, EnDeRanker. Given a generated summary sentence, it first identifies a set of _spans of interest_ (SOI) which may correspond to factual errors. For each SOI, EnDeRanker replaces it with different candidate spans and calculates a score for each span including the SOI. The factuality of the SOI is then decided based on its score among the scores of all candidate spans. Figure 4 summarizes the workflow of EnDeRanker. Below we detail core steps of EnDeRanker: (1) _SOI identification_, (2) _candidate span generation_, (3) _span scoring_ and (4) _ranking-based factual error detection_.
**Span of interest identification.** An SOI is a snippet in a sentence for factual error classification. We consider noun phrases, named entities and verbs as SOIs, which are obtained using spaCy 3.1.4.8 We obtain the semantic roles of the SOIs like for QAFactEval, which will be used to decide the error class of an SOI later.
Footnote 8: [https://spacy.io/](https://spacy.io/)
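A sketch of SOI extraction with spaCy is shown below; the specific pipeline (en_core_web_sm) is an assumption, since only the spaCy version is specified above.

```python
import spacy

nlp = spacy.load("en_core_web_sm")   # assumed pipeline with a parser and NER

def spans_of_interest(sentence):
    """Return noun phrases, named entities and verbs as candidate SOIs."""
    doc = nlp(sentence)
    sois = [(chunk.text, "NOUN_PHRASE") for chunk in doc.noun_chunks]
    sois += [(ent.text, ent.label_) for ent in doc.ents]
    sois += [(tok.text, "VERB") for tok in doc if tok.pos_ == "VERB"]
    return sois

print(spans_of_interest("Vanessa is trying to get another ticket."))
```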
**Candidate span generation.** For each SOI, we create a set of candidate spans that can potentially replace it in the model generated summary sentence. For a named entity SOI, the candidate spans are entities of the same named entity class (e.g., **PERSON**) of the SOI extracted from the input dialogue. For the **PERSON** class, in particular, we include all speaker names on top of all other **PERSON** named entities extracted. For a verb SOI, we extract all verbs from the input dialogue according
Figure 4: The workflow of our EnDeRanker model.
to the Part-of-Speech tags and match the form (e.g., tense) with the SOI. For a noun phrase SOI, all noun phrases from the input dialogue are considered as candidate spans. All candidate spans are extracted using spaCy 3.1.4.
**Span scoring.** Let \(D\) be an input dialogue and \(S\) be a generated summary sentence with \(n\) tokens \(\{w_{1},w_{2},\cdots,w_{n-1},w_{n}\}\), which includes a candidate span or an SOI, denoted by \(c\). We adopt an encoder-decoder model \(\mathbb{M}\) to calculate a sentence score for \(S\) conditioned on \(D\) as follows, which is used as the score of span \(c\), denoted by \(\text{score}_{c}\). \(\mathbb{M}\) can be any pre-trained encoder-decoder model, such as a summarization model.
\[\text{score}_{c}=\frac{1}{n}\sum_{i=1}^{n}\log p(w_{i}|w_{<i},D) \tag{1}\]
Intuitively, the score is the average log likelihood of each token \(w_{i}\) in \(S\), conditioning on the previous tokens in \(S\) (i.e., \(w_{<i}\)) and \(D\). Here, \(w_{0}\) is the starting token of the decoder.
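A hedged sketch of Eq. (1) with the Hugging Face transformers library is given below; the checkpoint name is only an example of a pretrained encoder-decoder, and the score is obtained as the negated sequence-to-sequence loss, which equals the average token log-likelihood.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tok = AutoTokenizer.from_pretrained("facebook/bart-large-cnn")   # example checkpoint
mdl = AutoModelForSeq2SeqLM.from_pretrained("facebook/bart-large-cnn")

def span_score(dialogue, sentence):
    """Average log-likelihood of `sentence` conditioned on `dialogue` (Eq. (1))."""
    enc = tok(dialogue, return_tensors="pt", truncation=True)
    dec = tok(sentence, return_tensors="pt", truncation=True)
    with torch.no_grad():
        out = mdl(input_ids=enc["input_ids"],
                  attention_mask=enc["attention_mask"],
                  labels=dec["input_ids"])
    # out.loss is the mean token cross-entropy over the target sentence, so its
    # negation is the average log p(w_i | w_<i, D) of Eq. (1).
    return -out.loss.item()

print(span_score("Lucas: Where r u? ...", "Lucas is waiting at the airport."))
```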
**Ranking-based factual error detection.** Given a set of candidate spans \(C=\{c_{1},c_{2},\cdots,c_{|C|}\}\) of an SOI, we form \(|C|\) sentences by replacing the SOI with each of the candidate spans. We calculate span scores for the SOI and the candidate spans, and rank the spans by their scores in descending order. If the SOI has a rank larger than a threshold \(T\) (a hyper-parameter), we report it as erroneous and determine its error class based on its semantic role, as summarized in Algorithm 1 (cf. Appendix). The same process is repeated for all SOIs in \(S\). The union of all error classes detected for the SOIs forms the final set of factual error classes predicted for \(S\).
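The ranking step itself can be sketched as follows; `score_fn` is a scoring function such as the one above, and the default threshold value is a placeholder rather than the tuned hyper-parameter \(T\).

```python
def soi_is_erroneous(soi, candidates, sentence, dialogue, score_fn, t=3):
    """Rank the SOI against candidate replacement spans by sentence score; a rank
    worse than the threshold t flags the SOI (and hence the sentence) as erroneous."""
    scored = [(soi, score_fn(dialogue, sentence))]
    for cand in candidates:
        variant = sentence.replace(soi, cand, 1)
        scored.append((cand, score_fn(dialogue, variant)))
    scored.sort(key=lambda pair: pair[1], reverse=True)    # descending score
    rank = [span for span, _ in scored].index(soi) + 1     # 1-based rank of the SOI
    return rank > t
```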
### Ensemble Modeling
We further build two simple ensemble models based on the four models above: Most **Frequent Voting** (FreqVoting) and **Logistic regression** (Logistic). FreqVoting takes all predicted error classes from the four models above and uses the class(es) with the largest frequency as the final prediction. For Logistic, we train a logistic regression model for each factual error class that takes the binary outputs from the four models above as features. We use the union of all factual error classes predicted by the different logistic regression models as the final prediction.
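Both ensembles are straightforward to implement; the sketch below assumes each base model outputs a set of predicted error classes per sentence, and that the features for each per-class logistic regression are one binary indicator per base model, which is an illustrative encoding of the description above.

```python
from collections import Counter
from sklearn.linear_model import LogisticRegression

ERROR_CLASSES = ["EntE", "PredE", "CirE", "CorefE", "LinkE", "Others"]

def freq_voting(model_predictions):
    """model_predictions: one set of predicted error classes per base model.
    Returns the class(es) predicted by the largest number of models."""
    counts = Counter(c for preds in model_predictions for c in preds)
    if not counts:
        return {"No Error"}
    top = max(counts.values())
    return {c for c, n in counts.items() if n == top}

def train_logistic_ensemble(features, labels):
    """features[c]: (n_sentences, n_models) 0/1 matrix, entry 1 iff a base model
    predicted class c; labels[c]: (n_sentences,) 0/1 gold presence of class c.
    One logistic regression is trained per error class."""
    return {c: LogisticRegression().fit(features[c], labels[c])
            for c in ERROR_CLASSES}

print(freq_voting([{"EntE"}, {"EntE", "CirE"}, set(), {"EntE"}]))  # {'EntE'}
```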
### Experiments
To evaluate the models described in the last section, we perform 5-fold cross validation [12] using DiaSumFact.9 Implementation details and parameter settings are discussed in Appendix A.3. We record the F1 scores (mean and standard deviation) of the models on each error class in Table 3.
\begin{table}
\begin{tabular}{l c c c c c c c c} \hline \hline Model & NoE & EntE & CirE & PredE & CorefE & Others & Micro Avg & Macro Avg \\ \hline \hline \multicolumn{10}{c}{Adapted state-of-the-art models} \\ \hline QAFactEval & \(0.68_{0.04}\) & \(\underline{0.45}_{0.03}\) & \(\underline{0.23}_{0.11}\) & \(0.00_{0.00}\) & \(0.11_{0.06}\) & \(0.00_{0.00}\) & \(0.51_{0.03}\) & \(0.25_{0.02}\) \\ DAE & \(0.77_{0.02}\) & \(0.32_{0.05}\) & \(0.03_{0.06}\) & \(0.00_{0.00}\) & \(0.00_{0.00}\) & \(\underline{0.34}_{0.11}\) & \(0.59_{0.02}\) & \(0.24_{0.02}\) \\ \hline \multicolumn{10}{c}{Weakly Supervised multi-class classifier} \\ \hline BertMulti & \(0.72_{0.00}\) & \(0.20_{0.00}\) & \(0.08_{0.00}\) & \(0.09_{0.00}\) & \(\underline{0.29}_{0.00}\) & \(0.08_{0.00}\) & \(0.54_{0.00}\) & \(0.24_{0.00}\) \\ \hline \multicolumn{10}{c}{**EnDERanker (ours)**} \\ \hline BART-large-cnn & \(0.67_{0.06}\) & \(0.34_{0.07}\) & \(0.04_{0.06}\) & \(0.15_{0.04}\) & \(0.12_{0.10}\) & \(0.00_{0.00}\) & \(0.47_{0.07}\) & \(0.22_{0.01}\) \\ BART-large-samsum & \(0.67_{0.06}\) & \(0.35_{0.08}\) & \(0.03_{0.04}\) & \(0.21_{0.06}\) & \(0.21_{0.13}\) & \(0.00_{0.00}\) & \(0.47_{0.05}\) & \(0.24_{0.02}\) \\ PEGASUS-cnn & \(0.71_{0.03}\) & \(0.37_{0.08}\) & \(0.04_{0.05}\) & \(0.18_{0.05}\) & \(0.14_{0.09}\) & \(0.00_{0.00}\) & \(0.52_{0.04}\) & \(0.24_{0.01}\) \\ PEGASUS-samsum & \(0.67_{0.04}\) & \(0.37_{0.09}\) & \(0.06_{0.07}\) & \(0.19_{0.06}\) & \(0.16_{0.11}\) & \(0.01_{0.02}\) & \(0.46_{0.05}\) & \(0.24_{0.01}\) \\ T5-large-cnn & \(0.68_{0.04}\) & \(0.35_{0.09}\) & \(0.03_{0.04}\) & \(0.15_{0.04}\) & \(0.06_{0.03}\) & \(0.01_{0.02}\) & \(0.47_{0.05}\) & \(0.21_{0.02}\) \\ T5-large-samsum & \(0.70_{0.08}\) & \(0.35_{0.10}\) & \(0.04_{0.05}\) & \(\underline{0.22}_{0.08}\) & \(0.14_{0.03}\) & \(0.00_{0.00}\) & \(0.51_{0.09}\) & \(0.24_{0.03}\) \\ \hline \multicolumn{10}{c}{Ensemble learning (**including our EnDeRanker model**)} \\ \hline FreqVoting & \(0.79_{0.03}\) & \(0.40_{0.05}\) & \(0.05_{0.11}\) & \(0.10_{0.08}\) & \(0.12_{0.10}\) & \(0.01_{0.02}\) & \(0.62_{0.03}\) & \(0.24_{0.03}\) \\ Logistic & \(\underline{0.80}_{0.03}\) & \(0.44_{0.05}\) & \(0.20_{0.13}\) & \(0.00_{0.00}\) & \(0.11_{0.10}\) & \(0.03_{0.03}\) & \(0.61_{0.03}\) & \(\underline{0.26}_{0.04}\) \\ \hline \hline \end{tabular}
\end{table}
Table 3: F1 scores for factual error detection models with a break down on each error class based on our annotated dataset DiaSumFact. We report the average score and standard deviation over 5-fold cross validation. **Link Error (LinkE)** is merged into **Others** because almost no model can detect it. The best score for each column is underlined.
**Results**: All models can detect EntE significantly and consistently better than the other classes. Different models show advantages on different error classes, but no single model outperforms all the others on every error class.
QAFactEval performs the best on EntE (0.45) and CirE (0.23) but poorly on the other error classes. The reason is that only named entities and noun phrases are treated as question-worthy spans. Future work may consider question-worthy spans of different types, such as verbs and discourse markers, to cover more error classes.
DAE performs well on EntE and Others, while it suffers on CirE, PredE and CorefE. The main reason is that not all error classes are covered in the rules mapping from dependency arc types to error classes. Since a dependency arc relates two words, designing such rules is not straightforward. Future work may leverage learned models to predict error classes automatically.
BertMulti shows the best results on CorefE (\(0.29\)) but poor performance on CirE, PredE and Others, despite its high performance on the synthetic validation dataset (\(0.98\) accuracy). This indicates the gap between synthetic and real factual errors.
Our proposed model EnDeRanker using different pretrained encoder-decoder models generally exhibits strong results on EntE, PredE and CorefE, while there is room for improvement on CirE and Others. Among all variants of EnDeRanker, PEGASUS-cnn performs on par with QAFactEval in terms of macro-averaged F1 score, while it does not require question generation and question answering models.
The two ensemble models improve on the micro and macro-averaged F1, indicating complementarity among the models. For most error classes, the ensemble models usually have the best or second best performance.
Overall, none of the models yielded a particularly high F1 score for any error class. It shows that fine-grained factual error detection in dialogue summaries is a challenging problem which calls for further studies, for which our results and dataset will serve as a solid foundation.
## 5 Conclusions
We created a fine-grained multi-faceted dataset named DiaSumFact on the factual consistency of dialogue summarization. DiaSumFact offers insights into how and where current neural summarization models fail when they produce factually inconsistent details in dialogue summaries. It can also serve as a testbed for automating factual error detection. Our proposed error detection method, EnDeRanker, is shown to perform on par with state-of-the-art models even though it requires no labelled training data. That said, we ultimately found that even ensembling several error detection methods does not produce results that are good enough for practical use, indicating opportunities for future research in this area.
## 6 Limitations
EnDeRanker is only tested on DiaSumFact. Further tests on more datasets are required to establish its general applicability.
## 7 Ethics Statement
This study is conducted under the guidance of the ACL code of Ethics. We manually filtered out potential offensive content and removed all information related to the identification of annotators. The annotators are all fairly paid based on the Australian minimum wage. The annotation protocol is approved under Human Ethics LNR Application with reference number 2022-24233-30104-3.
## Acknowledgements
This research was undertaken using the LIEF HPC-GPGPU Facility hosted at the University of Melbourne. This Facility was established with the assistance of LIEF Grant LE170100200. We want to thank Gisela Vallejo, Han Sun, Miao Li, Rui Xing, Wei Gao, Yanchuan Chang, Yulia Otmakhova, Zheng Wei Lim, Zhexi Li, Zhuohan Xie for their help in the annotation.
|
2306.17306
|
Simultaneous nanorheometry and nanothermometry using intracellular
diamond quantum sensors
|
Viscoelasticity of the cytoplasm plays a critical role in cell morphology and
division. In parallel, local temperature is coupled to viscoelasticity and
influences cellular bioenergetics. Probing the interdependence of intracellular
temperature and viscoelasticity provides an exciting opportunity for the study
of metabolism and disease progression. Here, we present a dual-mode quantum
sensor, capable of performing simultaneous nanoscale thermometry and rheometry
in a dynamic cellular environment. Our technique uses nitrogen-vacancy centres
in nanodiamond, combining sub-diffraction resolution single-particle tracking
in a fluidic environment with optically detected magnetic resonance
spectroscopy. We demonstrate nanoscale sensing of temperature-dependent
viscoelasticity in complex media. We then use our sensor to investigate the
interplay between intracellular forces and cytoplasmic rheology in live cells,
revealing details of active trafficking and nanoscale viscoelasticity.
|
Qiushi Gu, Louise Shanahan, Jack W. Hart, Sophia Belser, Noah Shofer, Mete Atature, Helena S. Knowles
|
2023-06-29T21:18:19Z
|
http://arxiv.org/abs/2306.17306v1
|
# Simultaneous nanorheometry and nanothermometry using intracellular diamond quantum sensors
###### Abstract
Viscoelasticity of the cytoplasm plays a critical role in cell morphology and division. In parallel, local temperature is coupled to viscoelasticity and influences cellular bioenergetics. Probing the interdependence of intracellular temperature and viscoelasticity provides an exciting opportunity for the study of metabolism and disease progression. Here, we present a dual-mode quantum sensor, capable of performing simultaneous nanoscale thermometry and rheometry in a dynamic cellular environment. Our technique uses nitrogen-vacancy centres in nanodiamond, combining sub-diffraction resolution single-particle tracking in a fluidic environment with optically detected magnetic resonance spectroscopy. We demonstrate nanoscale sensing of temperature-dependent viscoelasticity in complex media. We then use our sensor to investigate the interplay between intracellular forces and cytoplasmic rheology in live cells, revealing details of active trafficking and nanoscale viscoelasticity.
## 1 Main
Nanorheology addresses the question of how soft materials deform and flow at the nanoscale [1, 2]. Of significant interest in nanorheology is the study of complex cellular media such as the cytoplasm, which heavily influence cellular processes such as transport [3], division [4, 5] and morphological changes [6]. These properties, like many others in the cell, are linked to local biochemical energetics where temperature plays a critical role [7, 8]. It is well-established that cells regulate their viscoelastic properties in response to external temperature changes through homeoviscous adaption [9, 10] and viscoadaption [11]. Variations in intracellular temperature, rheology and their interdependence at the nanoscale remain outstanding questions today [12, 13] in the pursuit of a deeper understanding of cellular homeostasis, disease progression [14] and pathways for cancer treatment [15]. The current challenges for existing biosensing tools include small length scales and poor signal-to-noise ratio of the phenomena under investigation.
Optical techniques can provide means for investigating intracellular phenomena at the nanoscale in a non-invasive way. These methods are often susceptible to variations in autofluorescence [16], spectral transmission [17] and refractive index [18], which are typically present in complex biochemical environments. The interdependence of physical properties in biological systems can also be obfuscated by local inhomogeneity. Further, a change in one property, for
example temperature, can often affect others such as viscosity, the speed of chemical reactions or the rate of cell division. The relationship between two properties is thus hard to capture effectively if the level of an external perturbation cannot be measured accurately and independently. Multi-modal sensors offer the opportunity to reveal such interdependence.
Among the many approaches to nanoscale sensing in biological systems that are currently being explored, nanoparticles provide a platform which enables robust optical intracellular sensing [17, 19]. Nanodiamonds containing nitrogen-vacancy centres (NV) are one of the leading candidates: their properties include stable photoluminescence (PL), minimal cytotoxicity at high concentrations [20, 21], amenability to surface functionalisation [22], and robustness against changes in pH [23]. The ground-state spin transition which is utilised for sensing can be effectively uncoupled from background fluorescence fluctuations, enabling NV measurements to be unaffected by local changes in the optical environment. The NV has the capability to measure several different quantities, as demonstrated separately for temperature [24], magnetic field [25], electric field [26], pressure [27], reactive oxygen species [28] and through targeted surface functionalisation, pH [29]. These demonstrations position the NV as a promising candidate for multi-modal sensing implementations.
In this work, we perform nanothermometry and nanorheology using optically detected magnetic resonance (ODMR) and particle tracking of NV-containing nanodiamonds. We first demonstrate the operational protocol and achieve 3.7-nm spatial resolution with 9.6-ms update rate and a temperature sensitivity of \(2.3\,^{\circ}\mathrm{C}/\sqrt{\mathrm{Hz}}\). We quantify the performance in multiple well-controlled fluidic environments and then employ our sensor inside live human cancer cells, revealing different regimes of intracellular dynamics while simultaneously measuring temperature. This dual-modality sensing is performed on a custom biosensing chip, capable of microscopic temperature control and coherent spin manipulation.
## 2 Calibrating the sensor performance of nanodiamonds
To achieve optical readout of the NV spin, which underlies the sensing concept of nanodiamonds, we use a home-built confocal microscope (Supplementary Information Section 1). Figure 1**(a)** and its inset illustrate the experimental arrangement, where a nanodiamond moves inside a cell while sensing local temperature. We use nanodiamonds that contain an ensemble of 100-300 NVs, with a radius of \(\sim\)25 nm. Nanoparticles of comparable sizes move with diffusion coefficients exceeding \(3\times 10^{4}\,\mathrm{nm^{2}\,s^{-1}}\) in cells [30] (Supplementary Information Section 2). These dynamic environments require the nanodiamond to be tracked throughout optical spin readout measurements. We achieve this through a double-plane orbital tracking method [31], which provides real-time feedback control of the nanodiamond's location, as illustrated in Fig. 1**(b)**. The excitation laser performs circular orbits in the transverse plane with a period of 9.6 ms. The two confocal collection planes are offset symmetrically by \(\sim\)50 nm in opposite axial directions from the laser focus and collect the NV PL along the two offset circular paths. The asymmetries in PL around the orbit and between the top and bottom planes provide feedback parameters in the transverse and axial directions respectively, updating the centre of the orbital tracking to the nanodiamond position (Supplementary Information Section 3). In Fig. 1**(c)** we demonstrate the tracking of such a nanodiamond diffusing in glycerol.
Whilst tracking the nanodiamond in real time, we simultaneously perform continuous-wave ODMR for temperature sensing. The ground state zero-field splitting of the NV can be optically read out by driving the NV spin from the \(m_{s}=0\) state to the \(m_{s}=\pm 1\) states. On resonance, this leads to a decrease in PL, as shown in Fig. 1**(d)**. These transition frequencies are dependent on temperature. We sweep the microwave frequency over the target range every \(\sim\)1 ms and monitor the NV PL continuously to identify the spin resonances. We infer the change in temperature from the change in central frequency of the full ODMR spectrum. The central frequency is extracted using an
interpolation method (Methods, Supplementary Information Section 4).
The ODMR-based thermometry technique requires the delivery of microwaves to the region of interest. This typically leads to heating of the substrate and the intracellular medium. These effects can be a challenge to control and may vary from sample to sample. To achieve reproducible temperature control, sample heating with minute-scale temporal resolution (Supplementary Information Section 5) and microwave delivery for the manipulation of NV spins, we developed a custom fabricated chip. All measurements are performed using a gold-patterned glass coverslip comprising a coplanar waveguide, two resistive heaters and a resistive temperature detector (RTD), as highlighted in Fig. 1**(a)** with green, red and blue regions, respectively. A polydimethylsiloxane (PDMS) open-top well is incorporated into the sensing chip to contain liquid samples when necessary.
To benchmark the thermometry modality, we first quantify the temperature sensitivity of a stationary nanodiamond dropcast on the quantum sensing chip in the absence of any fluidic environment. As seen in Fig. 2**(a)**, the substrate temperature is adjusted in steps of \(4\,^{\circ}\mathrm{C}\) and the ODMR central frequency shifts proportionally. We extract a temperature dependence of \(\kappa=-60.0(4)\,\mathrm{kHz/^{\circ}C}\) as shown in Fig. 2**(b)**. Consistent with previous results, our experiments show that this value can vary between nanodiamonds in the range \(-53.6(1.0)\,\mathrm{kHz/^{\circ}C}\leq\kappa\leq-91.0(1.0)\,\mathrm{kHz/^{ \circ}C}\) and therefore needs to be calibrated for every nanodiamond (Supplementary Information Section 6). Using the Allan deviation, Fig. 2**(c)** shows an extracted sensitivity of \(2.3\,^{\circ}\mathrm{C}/\sqrt{\mathrm{Hz}}\) which agrees with the shot noise-limited sensitivity as predicted by the Cramer-Rao bound, \(2.1\,^{\circ}\mathrm{C}/\sqrt{\mathrm{Hz}}\), to within 10 % (Supplementary Information Section 7).
To benchmark the rheometry modality, we start by verifying the dynamic tracking accuracy of the single particle tracking method. The scanning mirrors and the objective lens are moved
Figure 1: **Diamond-based nanothermometer and nanorheometer.****(a)** An illustration of the cross-section of a cell grown on a custom sensing chip, consisting of a resistive temperature detector (blue), two resistive heaters (red) and a coplanar waveguide (green), which is used for accurate temperature control and microwave delivery. **Inset:** A nanodiamond interacts with its complex surroundings in the cytoplasm, including the microtubules (blue), actin filaments (pink) and mitochondria (yellow in background). **(b)** Real-time tracking is achieved by collecting the PL from a nanodiamond at two axially offset planes separated by \(100\,\mathrm{nm}\) (red) as the excitation laser (green) orbits the last inferred position of the nanodiamond with a radius of \(50\,\mathrm{nm}\). Corrections (\(\delta\)) in the transverse and axial directions are made to counteract any imbalance in PL along the orbit (indicated by the intensity of the red orbit) and between the top and bottom imaging planes (grey shaded planes). **(c)** An example trajectory of a nanodiamond undergoing Brownian motion in glycerol. The background grid has a spacing of \(1\,\mathrm{\mu m}\). **(d)** The transition frequencies of the NV ground state are temperature dependent and probed using ODMR (top right), with the central frequency of the ODMR spectrum decreasing with increasing temperature (blue to red).
such that a stationary nanodiamond on the substrate exhibits a predefined trajectory that mimics Brownian motion (Methods, Supplementary Information Section 8). Figure 2**(d)** demonstrates the difference between the predefined particle trajectory (blue data) and the tracker readout (cyan data) over a 10 min interval which is used to determine the tracking accuracy. To analyse the stochastic diffusive motion, we compute the 2D mean square displacement (MSD), \(\mathrm{MSD}(\tau)=\langle|\mathbf{r}(t+\tau)-\mathbf{r}(t)|^{2}\rangle\), where \(\mathbf{r}\) is the position vector in the transverse plane and \(\tau\) is the time interval. The MSD depends linearly on the time interval for a particle undergoing Brownian motion, as \(\mathrm{MSD}=4D\tau\), where \(D\) is the diffusion coefficient. The measured diffusion coefficient agrees with the input diffusion coefficient as highlighted in Fig. 2**(e)**. In our system, we reach an upper bound of \(D=5\times 10^{4}\,\mathrm{nm^{2}\,s^{-1}}\), exceeding the typical intracellular diffusion coefficients observed with similarly sized nanodiamonds (Supplementary Information Section 2). When the particle is stationary, we measure a resolution of 3.7 nm with a 9.6-ms update rate, as shown in Fig. 2**(f)**, which is \(\sim\)60 times smaller than the 250-nm radius defined by the \(1/e^{2}\) point-spread function. Our particle tracking is capable of following nanodiamonds in a range of dynamic environments with high enough velocity and spatial resolution to allow the extraction of viscoelastic moduli. The nanodiamonds simultaneously operate as quantum sensors for temperature without the need for measurement deadtime in either modality.
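For illustration, the MSD and the diffusion coefficient could be extracted from a tracked trajectory as in the sketch below; the synthetic trajectory and the unconstrained linear fit are simplifications of the actual analysis, intended only to make the \(\mathrm{MSD}=4D\tau\) relation concrete.

```python
import numpy as np

def msd_2d(xy, dt):
    """2D mean square displacement MSD(tau) = <|r(t+tau) - r(t)|^2> for an
    (N, 2) array of positions sampled every dt seconds (here the 9.6 ms orbit period)."""
    lags = np.arange(1, len(xy))
    msd = np.array([np.mean(np.sum((xy[lag:] - xy[:-lag]) ** 2, axis=1))
                    for lag in lags])
    return lags * dt, msd

# Toy Brownian trajectory in nm; for Brownian motion MSD = 4*D*tau.
rng = np.random.default_rng(0)
track = np.cumsum(rng.normal(scale=10.0, size=(1000, 2)), axis=0)
tau, msd = msd_2d(track, dt=0.0096)
D = np.polyfit(tau, msd, 1)[0] / 4.0   # diffusion coefficient in nm^2/s
```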
## 3 Dual-modality nanosensing in a viscosity-tuneable fluid
From the stochastic motion of nanoparticles we infer properties about the surrounding material using passive nanorheometry. This provides a quantitative description of the relationship
Figure 2: **Accuracy and precision of the nanodiamond nanothermometer and nanorheometer.****(a)** The substrate temperature (grey curve) is stepped by 4\(\,{}^{\circ}\)C every 15 minutes, with the corresponding temperature reported by NV ODMR (red data). **(b)** The frequency shift is proportional to the change in temperature, with a temperature dependence of \(\kappa=-60.0(4)\,\mathrm{kHz/^{\circ}C}\). **(c)** The temperature precision over an accumulation time is characterised by the Allan deviation, from which we extract a sensitivity of \(2.3\,{}^{\circ}\mathrm{C/\sqrt{Hz}}\). **(d)** Comparison between the known position (cyan data) in the x-direction of a nanodiamond moved in a Brownian motion-manner and the tracker-reported position (blue data), with the corresponding difference (\(\delta x\)) shown in the lower panel. The set diffusion coefficient is \(2\,\times 10^{3}\mathrm{nm^{2}/s}\) for this measurement. **(e)** The measured diffusion coefficient using the mean square displacement (MSD) at a time interval of 1 s shows a close agreement with the input diffusion coefficient. **(f)** The dynamic tracking accuracy depends on the diffusion coefficient. When the particle is stationary, our system has a benchmark spatial resolution of 3.7-nm with 9.6-ms update rate (black dashed curve).
between the nanodiamond motion and the external forces. To demonstrate the use of nanorheometry with simultaneous nanothermometry, we study nanodiamonds undergoing Brownian motion in glycerol. We choose glycerol as it can be assumed homogeneous and predominantly viscous and has a known temperature-dependent viscosity [32]. In Fig. 3**(a)** a particle is shown to travel several micrometers in 96 seconds. The particle motion is random, and thus we use the MSD to extract the diffusion coefficient, \(D\).
In glycerol \(D\) obeys the Stokes-Einstein relation, \(D=\frac{k_{\mathrm{B}}T}{6\pi r\,\eta(T)}\), where \(T\) is the absolute temperature, \(k_{\mathrm{B}}\) is the Boltzmann constant, \(r\) is the hydrodynamic radius of the nanoparticle and \(\eta(T)\) is the temperature-dependent viscosity. We study the temperature dependence of the diffusion coefficient over a \(17.5\,^{\circ}\mathrm{C}\) range by increasing and decreasing the temperature in steps of \(3.5\,^{\circ}\mathrm{C}\) every 5 minutes. In the case of glycerol, \(\eta(T)\) is linearly dependent on temperature in the range probed, \(\eta(T)=\eta_{0}+\mu(T-T_{0})\), with \(\mu=0.0208\,\mathrm{Pa}\cdot\mathrm{s}/^{\circ}\mathrm{C}\), \(T_{0}=35^{\circ}\mathrm{C}\) and \(\eta_{0}=0.301\,\mathrm{Pa}\cdot\mathrm{s}\) [32]. From the experimental measurements of the diffusion coefficient in Fig. 3**(b)** (red data), we extract the temperature dependence of the viscosity, \(\eta(T)\) (black solid curve), using only the radius of the particle as a fitting constant. The estimated hydrodynamic radius of the nanodiamond is \(28(1)\,\mathrm{nm}\), which agrees with the nominal distribution provided by the supplier, \(25\,\mathrm{nm}\). As one would intuitively expect, a proportion of the diffusion coefficient's temperature dependence can be attributed to thermal agitation alone as seen in Fig. 3**(b)** (grey dashed curve).
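A minimal sketch of inverting the Stokes-Einstein relation to recover the viscosity from a measured diffusion coefficient is given below; the numerical values are illustrative, and the 25 nm radius is the nominal value rather than the fitted hydrodynamic radius.

```python
import numpy as np

K_B = 1.380649e-23   # Boltzmann constant, J/K

def viscosity_from_diffusion(D_nm2_per_s, T_celsius, r_nm=25.0):
    """Invert D = k_B T / (6 pi r eta) to estimate the viscosity eta in Pa s."""
    D = D_nm2_per_s * 1e-18      # m^2/s
    r = r_nm * 1e-9              # m
    T = T_celsius + 273.15       # K
    return K_B * T / (6.0 * np.pi * r * D)

# Roughly 1 Pa s, of the order expected for glycerol near room temperature.
print(viscosity_from_diffusion(D_nm2_per_s=9e3, T_celsius=21.0))
```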
Figure 3**(c)** and **(d)** display the measured temperature, extracted from ODMR, and the diffusion coefficient, extracted from the nanodiamond trajectory, respectively. These nanoscale measurements were verified with the RTD-measured temperature and the corresponding extracted diffusion coefficient respectively (grey curves). Using the multimodal sensor, we are able to probe the link between viscosity and temperature in glycerol through two simultaneous and independent measurements.
## 4 Revealing temperature-dependent viscoelasticity of a complex medium
In addition to sensing predominantly viscous rheological behaviour, our probe can reveal the viscoelastic properties of biological environments such as DNA hydrogels [33] and the actin cytoskeleton [34]. To model these environments, we use the synthetic viscoelastic polymer network glycerol cross-linked xanthan (GCX). The complex modulus, \(G^{*}(f)=G^{\prime}(f)+iG^{\prime\prime}(f)\), is used to characterise viscoelastic materials and can be calculated from the MSD [2, 35], where \(f\) is the frequency of external perturbations at which G is measured. The real part of the complex modulus, \(G^{\prime}(f)\), is a measure of the elasticity of the material and the imaginary part, \(G^{\prime\prime}(f)\), is a measure of the viscous component. The ratio of the real and imaginary parts of the complex modulus establishes whether an environment is dominated by viscosity or elasticity. This capability can be used to capture how the rheological properties of the medium react to external perturbations.
Figure 3**(e)** displays the temperature-dependent MSD, obtained from the nanodiamond single-particle trajectory. In Fig. 3**(f)** we demonstrate that \(|G^{*}|\), as well as its real and imaginary components decrease with temperature. This is the expected behaviour for viscoelastic materials from the time-temperature superposition principle [37], as previously observed in other materials like hydrogels [38]. We achieve sufficient sensitivity to distinguish between the two viscoelastic states with a 30-s averaging interval when we cycle the temperature by \(10.6\,^{\circ}\mathrm{C}\). Figure 3**(g)** presents the change in viscous (blue) and elastic (cyan) moduli at two distinct temperature values of \(28.7\,^{\circ}\mathrm{C}\) and \(39.3\,^{\circ}\mathrm{C}\).
## 5 Capturing signatures of external forces in live cells
Having benchmarked our dual-modal sensing approach in controlled environments, we next investigate the intracellular response to external temperature changes and the motion of nanodiamonds inside cells. We incubate HeLa cells with nanodiamonds and confirm internalisation using
3D confocal microscopy (see Methods and Supplementary Information Section 9). We expose the nanodiamond-containing cells to temperature cycles of \(5.0\,^{\circ}\mathrm{C}\) in steps of \(2.5\,^{\circ}\mathrm{C}\) lasting 5 minutes each. Figure 4**(a)** confirms the agreement of cell temperature measured independently by nanodiamonds (red circles) and the RTD temperature sensor on the sensing chip (gray curve).
Unlike glycerol and GCX, the cytoplasm of a cell is an active medium which is neither spatially homogeneous nor in thermal equilibrium. Molecular motors cause collective agitation of the cytoplasm [39, 40, 41, 42, 3]. As such, particle motion represents the combined effect of both the material properties and the cellular activity. The power spectral density (PSD) of a particle's location, \(\langle x^{2}(\omega)\rangle\), which is the Fourier transform of the MSD, can be used to model this behaviour. For media where the force-displacement relation is linear, this PSD is related to the power spectra of thermal stochastic forces, \(\langle\xi^{2}(\omega)\rangle\propto k_{\mathrm{B}}T\), [39, 3] and external forces due to cell agitation, \(\langle F_{\mathrm{ext}}^{2}\rangle\), by Hooke's law,
\[|K(\omega)|^{2}\langle x^{2}(\omega)\rangle=\langle\xi^{2}(\omega)\rangle+ \langle F_{\mathrm{ext}}^{2}\rangle. \tag{1}\]
Here, \(K(\omega)=(6\pi r)G^{*}(\omega)\) is the (complex) spring constant characterising the property of the medium, \(G^{*}(\omega)\) is the complex modulus introduced in the previous section and \(\omega\) is the angular frequency corresponding to the linear frequency, \(f\). Figure 4**(b)** presents the PSD of the nanodiamond location averaged over 30 seconds at \(f=40\,\mathrm{Hz}\). The PSD increases dramatically approximately 15 minutes into the measurement, for a duration of around 10 minutes. Variations in
Figure 3: **Temperature and rheology measurements in abiotic media.****(a)** An example of the nanodiamond trajectory projected onto the transverse plane over 96 s in glycerol. Scalebar: \(1\,\mathrm{\mu m}\). **(b)** The diffusion coefficient measured at different temperature values (red), with a linear fit (solid black curve) from which a hydrodynamic radius of \(28(1)\,\mathrm{nm}\) is extracted. The grey dashed line shows the temperature dependence of the diffusion coefficient assuming a fixed viscosity of \(0.919\,\mathrm{Pa}\cdot\mathrm{s}\) corresponding to glycerol at \(21\,^{\circ}\mathrm{C}\).**(c, d)** The simultaneous determination of temperature (red circles) and viscosity (blue circles) in glycerol, a purely viscous medium. The grey curve in (c) shows the temperature read out by the sensing chip and (d) shows the corresponding diffusion coefficient using the radius extracted from (b). **(e, f)** The mean square displacement (MSD), and viscous (\(G^{\prime\prime}\)) and elastic (\(G^{\prime}\)) moduli in a viscoelastic medium, glycerol-crosslinked xanthan (GCX), at \(T_{\mathrm{C}}=28.7\,^{\circ}\mathrm{C}\) (blue circles) and \(T_{\mathrm{H}}=39.3\,^{\circ}\mathrm{C}\) (red circles) obtained from nanodiamond tracking. **(g)** Temperature dependence of \(G^{\prime}\) and \(G^{\prime\prime}\) at \(f=2.7\,\mathrm{Hz}\) for alternating temperatures \(T_{\mathrm{C}}\) (blue shaded) and \(T_{\mathrm{H}}\) (red shaded) as measured by the sensing chip (grey curve). (e and f) are calculated from the first 3 minutes of data at \(T_{\mathrm{C}}\) and \(T_{\mathrm{H}}\) in (g) (first blue and first red shaded regions).
the PSD can be explained by a combination of changes in \(|K(\omega)|\) and \(\langle F_{\mathrm{ext}}^{2}(\omega)\rangle\). Particular biological events such as cell division can result in large changes in viscoelasticity [4, 5] and thus changes in \(|K(\omega)|\). In the absence of such events, active microrheology [3, 43] and whole-cell AFM [44] experiments suggest that cell viscoelasticity remains constant over the time scale of hours. The changes we observe are therefore likely dominated by the external forces.
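In practice, the position PSD can be estimated with a standard periodogram and combined with Eq. (1); in the sketch below the spring-constant magnitude and the thermal-force spectrum are treated as given inputs, since they would come from independent rheology measurements and are not computed here.

```python
import numpy as np
from scipy.signal import periodogram

def external_force_psd(x, dt, K_abs, thermal_psd):
    """Estimate the external-force power spectrum via Eq. (1):
    <F_ext^2> = |K|^2 <x^2> - <xi^2>. x is the position trace in metres sampled
    every dt seconds; K_abs and thermal_psd are arrays over the same frequencies."""
    f, psd_x = periodogram(x, fs=1.0 / dt)
    return f, np.abs(K_abs) ** 2 * psd_x - thermal_psd
```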
Nanodiamond internalisation involves the endocytic pathway [45] and therefore single-particle trajectories are expected to show active trafficking together with the Brownian motion of the particle. As the nanodiamond spends the majority of time in Brownian motion, the time-averaging used in the MSD and PSD analysis can hide transient features in the trajectory. Figure 4**(c)** shows the full trajectory of a nanodiamond in a cell over 40 minutes. We analyse this data by categorising segments of nanodiamond trajectories according to periods of statistically significant directional persistence, characterised by the directionality ratio, \(\gamma=d/l\), where \(d\) and \(l\) are the displacement and distance of a trajectory portion respectively (Supplementary Information Section 10). Figure 4**(d)** shows an example of a nanodiamond trajectory containing directed motion segments. We compare the results from our segmentation method with the spread of anomalous diffusion exponents, \(\alpha\). Through the relation MSD \(\propto\tau^{\alpha}\), the displacement behaviour is typically classified into subdiffusive (\(\alpha<1\)), diffusive (\(\alpha=1\)) and superdiffusive (\(\alpha>1\)) states. Separating the trajectories into segments reveals that when the nanodiamonds are not in the directed motion state, they on average exhibit Brownian-like behaviour, as can be seen from the MSDs in Fig. 4**(e, top - blue)** resulting in a power-law exponent of 0.97(5) in Fig. 4**(f, middle)**. In comparison, Fig. 4**(e, top - red)** and **(f, top)**, show that the nanodiamonds in the directed motion state appear superdiffusive, with a power-law exponent of 1.65(5). The directed motion of the nanodiamonds could represent active trafficking around the cell interior. To investigate the effect of molecular motors, 50 \(\mu\)M of nocodazole was added to destabilise the microtubule network
Figure 4: **Nanodiamond multimodal sensing in live cells (a, b)** Simultaneous readout of temperature and power spectral density (PSD) in a cell. The grey curve in (a) shows the temperature read out by the sensing chip and the PSD in (b) corresponds to \(f=40\,\mathrm{Hz}\). The dashed line represents the upper bound of the thermal contribution to the PSD. **(c)** Trajectory of a nanodiamond in a cell over 40 min. xyz scalebar: 250 nm **Inset:** xy particle trajectory relative to the optical diffraction limit (black spot, diameter = 500 nm). **(d)** xy particle trajectory showing both non-directed (dark blue) and directed (light blue) motion. Scalebar = 500 nm. **(e)** The mean square displacement, MSD and ensemble averages (thick lines) for nanodiamonds with non-directed motion (blue) and directed motion (red) in untreated cells, and motion in cells treated with 50 \(\mu\)M nocodazole for 1 hour (grey). **(f)** Probability densities for the power-law exponents, \(\alpha\), for directed motion (red) and non-directed motion (blue) in untreated cells and cells which had been treated with 50 \(\mu\)M nocodazole for 1 hour (grey). The black curves show respectively fitted normal distributions.
[30]. Under this treatment, nanodiamond trajectories exhibited no directed motion. Further, the average power-law exponent of 0.3(1) indicates subdiffusive motion as presented in Fig. 4**(e, bottom)** and **(f, bottom)**. From this we can infer that in the absence of external forces caused by microtubule-associated processes, the cytoplasm behaves as an elasticity-dominated weak gel [3, 46] (Supplementary Information Section 12).
## 6 Conclusions
Multimodal quantum sensing opens new avenues for investigating perturbation and response accurately and independently at the nanoscale in active biological environments. We identify directed motion of the nanodiamonds as a possible indicator of active trafficking in the cell. Further, by removing the action of the molecular motors, we show that the cytoplasm is dominated by its elastic properties. Our results also show that within our measurement sensitivity and at the nanodiamond position, HeLa cells do not regulate their internal temperature in the presence of an external thermal perturbation (Supplementary Information Section 11).
The orbital tracking method we employ is not limited to the continuous-wave ODMR technique, and can be paired with more sophisticated quantum sensing protocols, such as nuclear magnetic resonance (NMR) [47] or spin electron double resonance (SEDOR) [48]. Techniques such as optical tweezers [49] and surface chemical functionalisation [50] can be combined with our dual-modal approach for precise localisation of nanodiamonds with respect to subcellular organelles, such as mitochondria [51]. The targeted delivery of nanodiamonds to subcellular regions would enable probing of potential hot spots, correlating local biochemical events with thermogenesis. This could be used to address the topic of nanoscale temperature gradients in live cells [52] and be further extended to studying non-biological soft matter. Active rheology techniques offer an opportunity to further explore the relationship between external forces and the spring constant in cells. Combining nanodiamond sensors with super-resolution imaging techniques [53] in a multi-scale imaging setting offers an exciting opportunity to probe physical properties in the context of their biological environment.
**Acknowledgments.** We would like to thank Ljiljana Fruk, David Jordan, Thomas Krueger, Erik Miska, Brian Patton, Hannah Stern, Ross Waller and Hengyun Zhou for insightful discussions. This work is supported by the Gordon and Betty Moore Foundation Grant (GBMF7872). Q.G. acknowledges financial support by the China Scholarship Council, the Cambridge Commonwealth, European & International Trust and the Pump Priming Grant from the Cambridge Centre for Physical Biology. L.S. acknowledges support from the Winton Programme for Sustainability and Robert Gardiner Memorial Scholarship. S.B. acknowledges financial support from EPSRC (PhD Studentship EP/R513180/1), the Alireza Studentship of Lucy Cavendish College and the German Academic Scholarship Foundation. N.S. acknowledges support from the Sperling Studentship. H.S.K. acknowledges the Royal Society University Research Fellowship.
|
2308.06994
|
Hybrid Emission Modeling of GRB 221009A: Shedding Light on TeV Emission
Origins in Long-GRBs
|
Observations of long duration gamma-ray bursts (GRBs) with TeV emission
during their afterglow have been on the rise. Recently, GRB 221009A, the most
energetic GRB ever observed, was detected by the {LHAASO} experiment in the
energy band 0.2 - 7 TeV. Here, we interpret its afterglow in the context of a
hybrid model in which the TeV spectral component is explained by the
proton-synchrotron process while the low energy emission from optical to X-ray
is due to synchrotron radiation from electrons. We constrained the model
parameters using the observed optical, X-ray and TeV data. By comparing the
parameters of this burst and of GRB 190114C, we deduce that the VHE emission at
energies $\geq$ 1 TeV in the GRB afterglow requires large explosion kinetic
energy, $E \gtrsim 10^{54}$~erg and a reasonable circumburst density, $n\gtrsim
10$~cm$^{-3}$. This results in a small injection fractions of particles
accelerated to a power-law, $\sim 10^{-2}$. {A significant fraction of shock
energy must be allocated to a near equipartition magnetic field, $\epsilon_B
\sim 10^{-1}$, while electrons should only carry a small fraction of this
energy, $\epsilon_e \sim 10^{-3}$. Under these conditions required for a proton
synchrotron model, namely $\epsilon_B \gg \epsilon_e$, the SSC component is
substantially sub-dominant over proton-synchrotron as a source of TeV photons.}
These results lead us to suggest that proton-synchrotron process is a strong
contender for the radiative mechanisms explaining GRB afterglows in the TeV
band.
|
Hebzibha Isravel, Damien Begue, Asaf Pe'er
|
2023-08-14T08:04:47Z
|
http://arxiv.org/abs/2308.06994v1
|
# Hybrid Emission Modeling of GRB 221009A: Shedding Light on TeV Emission Origins in Long-GRBs
###### Abstract
Observations of long duration gamma-ray bursts (GRBs) with TeV emission during their afterglow have been on the rise. Recently, GRB 221009A, the most energetic GRB ever observed, was detected by the LHAASO experiment in the energy band 0.2 - 7 TeV. Here, we interpret its afterglow in the context of a hybrid model in which the TeV spectral component is explained by the proton-synchrotron process while the low energy emission from optical to X-ray is due to synchrotron radiation from electrons. We constrained the model parameters using the observed optical, X-ray and TeV data. By comparing the parameters of this burst and of GRB 190114C, we deduce that the VHE emission at energies \(\geq\) 1 TeV in the GRB afterglow requires large explosion kinetic energy, \(E\gtrsim 10^{54}\) erg and a reasonable circumburst density, \(n\gtrsim 10\) cm\({}^{-3}\). This results in a small injection fractions of particles accelerated to a power-law, \(\sim 10^{-2}\). A significant fraction of shock energy must be allocated to a near equipartition magnetic field, \(\epsilon_{B}\sim 10^{-1}\), while electrons should only carry a small fraction of this energy, \(\epsilon_{e}\sim 10^{-3}\). Under these conditions required for a proton synchrotron model, namely \(\epsilon_{B}\gg\epsilon_{e}\), the SSC component is substantially sub-dominant over proton-synchrotron as a source of TeV photons. These results lead us to suggest that proton-synchrotron process is a strong contender for the radiative mechanisms explaining GRB afterglows in the TeV band.
Gamma-ray bursts (629) -- Synchrotron emission (856) -- Gamma-ray transient sources (1853) -- source: GRB 221009A -- GRB 190114C -- particle acceleration

Hebzibha Isravel, Damien Begue, Asaf Pe'er
## 1 Introduction
Gamma Ray Bursts (GRB) are indisputably the universe's brightest extragalactic transient events. They feature a brief prompt phase emission, mostly observed in the energy extending from a few keVs to a few GeVs. It is then followed by the extended broadband afterglow, detected at all energy bands, from radio to a few hundreds of GeVs and possibly even higher (for reviews see _e.g._ Piran, 1999; Meszaros, 2006; Kumar and Zhang, 2015; Zhang, 2018). Detections of GRB afterglows at the highest energies (i.e., \(>300\) GeV) have been on the rise in the past two decades (for reviews, see, e.g., Nava, 2018; Miceli and Nava, 2022). Understanding the physics underlying this emission has become of highest importance as it holds the clues to better constrain and understand the afterglow of GRBs.
The recent development of highly sensitive ground-based detectors, such as the High Energy Stereoscopic System (H.E.S.S, Aharonian et al., 1997; H. E. S. S. Collaboration et al., 2021), the Major Atmospheric Gamma Imaging Cherenkov (MAGIC, Lorenz, 2005), and the more recent Large High Altitude Air Shower Observatory (LHAASO, Cao et al., 2019), has allowed for the detection of sub-TeV to \(\sim\) TeV signal in GRBs and measurements of their spectra in this band. Examples include GRB 180720B (Abdalla et al., 2019), GRB 190114C (Acciari et al., 2019), GRB 190829A (H. E. S. S. Collaboration et al., 2021) and GRB 201216C (Blanch et al., 2020).
The emission mechanism of GRBs with emission at the very high energy (VHE, \(\geq\) TeV) band is highly debated. With reference to the fireball scenario, the synchrotron self-Compton (SSC) model is prominent among the radiation mechanisms that aim to explain these signals. In this mechanism, photons emitted by the synchrotron process at low energies, are upscattered to VHE by the energetic electrons that emitted them (Ghisellini and Celotti, 1998; Dermer et al., 2000; Sari and Esin, 2001; Fraija et al., 2019; Wang et al., 2019; Derishev and Piran, 2021; Fraija et al., 2022; Yamasaki
& Piran, 2022). Alternatively, the proton-synchrotron mechanism has been suggested to produce a photon signal at these extreme energies (Vietri, 1997; Bottcher & Dermer, 1998; Isravel et al., 2022; Zhang et al., 2022). The idea is that the same mechanism responsible for accelerating electrons also accelerates protons to high energies. The energetic protons then emit the observed VHE photons, reaching energies as high as TeV (Totani, 1998; Zhang & Meszaros, 2001; Isravel et al., 2022). In addition, photo-pion and photo-pair production processes may also produce VHE photons (see e.g. Razzaque et al., 2010) although this requires a compact region, and it is not clear how this is obtained at late times.
The recent detection of TeV photons from GRB 221009A allows for the first time to probe in great detail the GRB afterglow phase in this energy band. This GRB is by far the brightest GRB ever detected (Lesage et al., 2023). It was observed in the energy band 0.2 - 7 TeV by LHAASO (LHAASO-Collaboration et al., 2023). The isotropic equivalent luminosity in the band \(0.3-5\) TeV is \(7.3\times 10^{50}\) erg s\({}^{-1}\) and the observed peak flux is \(\sim 1.2\times 10^{-5}\) erg cm\({}^{-2}\) s\({}^{-1}\)(LHAASO-Collaboration et al., 2023). This GRB is a nearby burst, at a cosmological redshift \(z=0.151\)(de Ugarte Postigo A. et al., 2022; Castro-Tirado et al., 2022), as well as the most energetic burst ever detected, with an isotropic equivalent burst energy \(E_{iso}\simeq 3\times 10^{54}\) erg (Frederiks et al., 2022). The half opening angle of the jet is estimated to be \(\sim 0.8^{\circ}\)(LHAASO-Collaboration et al., 2023).
Several authors considered the conventional SSC model to interpret the VHE afterglow spectrum of GRB 221009A. This process is a natural outcome of the classical synchrotron-SSC emission model, and can account for emission in this energy band. However, it is not clear yet whether this model can explain the broad-band data (at all wavelengths), given the strong constraints on the TeV band flux from radio, optical and X-ray data (Gonzalez et al., 2022; Miceli & Nava, 2022). Furthermore, this model cannot explain photons of energy \(\gtrsim 10\) TeV (Huang, Y., 2022) originally claimed to be observed (Gonzalez et al., 2022; Ren et al., 2022; Kann et al., 2023; Das & Razzaque, 2023; Laskar et al., 2023). On the other hand, it is not clear if photons at these energies were detected\({}^{1}\) and are required to explain the observed spectra (LHAASO-Collaboration et al., 2023). A recent work by Zhang et al. (2022) considered the possibility that proton-synchrotron may be the source of \(\gtrsim\) TeV energy photons in the reverse shock scenario, and concluded that this is a plausible scenario under certain conditions, in particular a very strong magnetic field.
Footnote 1: In their analysis, the LHAASO collaboration cut the spectrum at 7 TeV.
Here, we use the data available from optical through X-rays to TeV, and show that the synchrotron emission from relativistic protons can explain both the flux and the temporal features of the VHE afterglow of GRB 221009A, while its lower-energy afterglow counterpart is interpreted with the electron-synchrotron process. We determined two sets of parameters able to explain the observational features of this burst. Then by comparing these model parameters with those deduced for GRB 190114C (Isravel et al., 2022), we identify a set of consistent characteristics for the VHE afterglows with energies \(\gtrsim\) TeV, within the framework of the hybrid model we present.
This paper is structured as follows. In section 2 we review the available data on GRB 221009A obtained by various space-based and ground-based facilities. In section 3, we present our model within the context of the standard fireball scenario. We then use the data to constrain the values of the free physical parameters in section 4. The SEDs are then produced for three different cases in section 5. We investigate the common features encountered in the VHE afterglows of GRBs in section 6. Finally, our conclusions follow in section 7.
## 2 Observational data of GRB 221009A
The long-duration GRB 221009A triggered the Gamma-Ray Burst Monitor (GBM) on board the _Fermi_ spacecraft on October 9, 2022, at \(T_{0}=\) UT 13:16:59 (Veres et al., 2022). Initially, the GBM captured two separate emission episodes (Lesage et al., 2022). The first occurred between \(T_{0}\)-0 and \(T_{0}\)+43.4 s with a reported peak energy of \(375\pm 87\) keV and a fluence of \(2.12\pm 0.05\times 10^{-5}\) erg cm\({}^{-2}\) in the energy range 10-1000 keV. The second episode, being the brightest, exhibited numerous peaks during the time interval \(T_{0}\)+175 to \(T_{0}\)+1458 s. Due to the saturation of the detectors caused by the accumulation of photons in several of these peaks, the exact flux can hardly be measured. Yet, the KONUS-WIND collaboration recently reported the fluence \(\sim 0.21\) erg cm\({}^{-2}\) within the energy band 20 keV - 10 MeV (Frederiks et al., 2023).
The High Energy (HE) X-ray telescope on board the _Insight_-Hard X-ray Modulation Telescope (_Insight_-HXMT) also triggered and monitored this burst on \(9^{th}\) October 2022, at 13:17:00.050 UT (Tan et al., 2022). This instrument's primary goal is to observe GRBs and electromagnetic counterparts of gravitational waves (Cai et al., 2021). The _Insight_-HXMT together with the Gravitational Wave High-energy Electromagnetic Counterpart All-sky Monitor (GECAM-C)
measured the emission in the energy band \(\sim\)10 KeV to 6 MeV starting from the precursor of the event until the early afterglow phase for a duration of about \(\sim\)1800 s (An et al., 2023). It was determined that the burst has a total isotropic energy of \(\approx 1.5\times 10^{55}\) ergs.
The _Fermi_-Large Area Telescope (LAT) subsequently observed this GRB between 200 and 800 s following the GBM trigger (Pillera et al., 2022). It is the brightest GRB ever detected by LAT, with a maximum reported photon energy of 99.3 GeV, observed 240 s after \(T_{0}\). Due to the extreme brightness, the _Fermi_-LAT detector was saturated during the time period 200-400 s (corresponding to "bad" time intervals, where the exact flux could not be measured due to the saturation; see Omodei, N., 2022a,b). The LAT data in the energy band 0.1 - 1 GeV between 400 s and 800 s were modeled by a power-law spectrum \(dN/dE=N_{0}\left(E_{LAT}/E_{f}\right)^{p_{0}}\) resulting in a spectral index \(p_{0}=1.87\pm 0.04\) and in a photon flux of \(\Phi_{\gamma}=6.2\pm 0.4\times 10^{-3}\) ph cm\({}^{-2}\) s\({}^{-1}\)(Pillera et al., 2022).
Nearly 53.3 minutes after the GBM trigger, at UT 14:10:17, the _Swift_-Burst Alert Telescope (BAT) also triggered and observed GRB 221009A in the hard X-ray band (Dichiara et al., 2022). Starting 143 s after the BAT trigger, _Swift_-XRT slewed and monitored the then steadily declining X-ray light curve with a photon index \(1.836\pm 0.012\) and a temporal index of \(1.509\pm 0.004\)(Evans et al., 2007, 2009).
The optical afterglow in the R-band was measured at 18:45 UT, 4.6 hours after the BAT trigger, corresponding to 5.5 hours after the GBM trigger, with magnitude 16.57 \(\pm\) 0.02 by the Observatorio Sierra Nevada (OSN) in Spain (Hu et al., 2022). The galactic extinction in the R-band is strong, estimated to be 3.710 mag (Schlegel et al., 1998). The optical and infrared data between 0.2 and 0.5 days are presented in O'Connor et al. (2023) and Gill & Granot (2023), and are corrected for the galactic extinction of 1.32 mag. For instance, 179 s after the BAT trigger, _swift_-UVOT observed GRB 221009A and recorded a magnitude of 16.68 \(\pm\) 0.03 in the white filter (Kuin et al., 2022).
Finally, LHAASO's Water Cherenkov Detector Array (WCDA) (Cao et al., 2019) observed GRB 221009A within its field of view at the time of the GBM trigger. Within a span of 3000 s from the burst trigger, more than 60,000 photons in the energy band 0.2 - 7 TeV were detected by the LHAASO (LHAASO-Collaboration et al., 2023). Around the phase of the main burst, LHAASO recorded a flux of \(\sim 6\times 10^{-8}\) erg cm\({}^{-2}\) s\({}^{-1}\) at 1 TeV in the time period between \(T_{0}+220s-T_{0}+230s\), after correcting for extra-galactic background light (EBL) attenuation\({}^{2}\).
Footnote 2: VHE photons traversing cosmological distances experience \(\gamma\gamma\) pair-production by interacting with the EBL, which substantially attenuates the intrinsic spectrum of the source (Ackermann et al., 2012).
The LHAASO collaboration and the Fermi-GBM collaboration deduced different times for the onset of the afterglow. Lesage et al. (2023) for the Fermi-GBM collaboration argued that the beginning of the afterglow phase was \(\sim 597\) s after the trigger. This is based on the inability of a single decay function to explain the lightcurve at earlier times. On the other hand, interpretation of the LHAASO data in the framework of the external shock, based on the temporal decay of the light-curve, led to estimating the onset of the afterglow in this band already at 226 s after the GBM trigger (LHAASO-Collaboration et al., 2023). The origin of this discrepancy can be due to the superposition of both prompt signal (which should originate from a small radius) and afterglow signal (originating from a forward shock propagating ahead of the jet, at larger radius) in the observations around a few hundred seconds. Therefore, here we will model the LHAASO emission as part of the afterglow and will not attempt to model the GBM data, which, as we will show below, is much brighter than the predicted GBM flux within the framework of our model (assumed to be produced by electron-synchrotron in this energy band).
## 3 Model Description
In this section, we detail the afterglow dynamics and the emission mechanisms, which serve as the basis of our model attempting to explain the VHE observation of GRB 221009A. We set our analysis within the framework of the fireball evolution scenario (Paczynski, 1990; Piran et al., 1993; Meszaros et al., 1998), further assuming that the high-energy component (GeV \(\lesssim E_{\gamma}\leq\) TeV) and the low-energy component (eV \(\lesssim E_{\gamma}\leq\) MeV) of the observed spectrum are produced by synchrotron radiation from the accelerated protons and electrons, respectively via the external shock acceleration. More details on the processes and the model can be found in Isravel et al. (2022), and we remind here only the key assumptions and equations.
When the relativistic jet originating from the compact GRB progenitor encounters the stationary ambient environment, an outward propagating shock-wave is created (Paczynski & Rhoads, 1993; Medvedev & Loeb, 1999). This shock collects and accelerates the ambient matter (both protons and electrons) and generates in-situ a magnetic field. The accelerated particles then produce the observed multi-wavelength emission (see _e.g._Sari & Piran, 1995; Sari et al.
1998; Panaitescu & Kumar 2000). During the afterglow phase, the emission occurs while the outflow expands in a self-similar way, following the Blandford & McKee (1976) solution. We assume here that the ultra-relativistic expansion can be considered adiabatic, _i.e._ that the radiative losses of the plasma behind the shock are negligible. This is a good approximation for our scenario as accelerated protons should carry most of the internal energy while they do not radiate efficiently.
Under those assumptions, the Lorentz factor of the jet, at a given observed time \(t\), is determined only by the isotropic-equivalent explosion kinetic energy \(E\), and the ambient ISM density \(n\):
\[\Gamma(E,n;t)=\left[\frac{17E(1+z)}{1024\pi nm_{p}c^{5}t^{3}}\right]^{1/8}=61.3 \ E_{54}^{1/8}n_{0}^{-1/8}t_{3}^{-3/8}, \tag{1}\]
where \(c\) is the speed of light, \(m_{p}\) is the mass of the proton and we took the redshift to be \(z=0.151\) relevant for GRB 221009A. Here and below, \(Q=10^{x}Q_{x}\) in cgs units is employed. Using \(t=r/(4\Gamma^{2}(r)c)\), one can express the location of the blast wave as a function of the observed time,
\[r(E,n;t)=\left[\frac{17Et}{4\pi nm_{p}c(1+z)}\right]^{1/4}=3.9\times 10^{17}\ E_{54}^{1/ 4}n_{0}^{-1/4}t_{3}^{1/4}\ \ \ {\rm cm}. \tag{2}\]
Finally, we define the comoving shock expansion (dynamical) time as \(t_{dyn}=r/(\Gamma c)=2.1\times 10^{5}\ E_{54}^{1/8}t_{3}^{5/8}n_{0}^{-1/8}\) s.
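As a sanity check of these scalings, the dimensionless forms of Equations (1) and (2) are easy to evaluate numerically; the short Python sketch below simply codes the quoted coefficients (it is an illustration with the normalisations \(E=10^{54}E_{54}\) erg, \(n=n_{0}\) cm\({}^{-3}\), \(t=10^{3}t_{3}\) s, not part of the original analysis).

```python
def gamma_blast(E54, n0, t3):
    """Bulk Lorentz factor of the adiabatic blast wave, Eq. (1)."""
    return 61.3 * E54**0.125 * n0**-0.125 * t3**-0.375

def radius_blast(E54, n0, t3):
    """Blast-wave radius in cm, Eq. (2)."""
    return 3.9e17 * E54**0.25 * n0**-0.25 * t3**0.25

def t_dynamical(E54, n0, t3):
    """Comoving dynamical (shock expansion) time in s."""
    return 2.1e5 * E54**0.125 * n0**-0.125 * t3**0.625

# Example: the parameters later adopted for the left panel of Figure 4 (E_54 = 50, n_0 = 50)
# at the LHAASO epoch t = 235 s.
print(gamma_blast(50, 50, 0.235))   # ~ 1e2
print(radius_blast(50, 50, 0.235))  # ~ 3e17 cm
```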
In order to estimate the observed spectrum, we need to specify the magnetic field and the particle distribution functions. For the former, we take the standard assumption that an (uncertain) fraction, \(\epsilon_{B}\), of the post-shock thermal energy is used in generating a magnetic field. This gives
\[B=\sqrt{32\pi\epsilon_{B}\Gamma^{2}nm_{p}c^{2}}=7.5\ E_{54}^{1/8}\epsilon_{B,- 1}^{1/2}n_{0}^{3/8}t_{3}^{-3/8}\ {\rm G}. \tag{3}\]
For the radiating particles, namely protons and electrons, we assume that a fraction \(\xi_{i}\) of all the particles is injected into the radiative zone with a power-law distribution between some minimum Lorentz factor \(\gamma_{m,i}\) and a maximum Lorentz factor, \(\gamma_{\rm max,i}\), such that they carry a fraction \(\epsilon_{i}\) of the available internal energy. The power-law index is referred to as \(p_{i}\). Here, the subscript \(i\) refers either to electrons (\(i=e\)) or to protons (\(i=p\)). Energetic considerations provide the constraint \(\epsilon_{B}+\epsilon_{e}+\epsilon_{p}<1\).
The minimum Lorentz factors of the protons and electrons are readily obtained as
\[\gamma_{m,p} \simeq 6\ f_{p}\xi_{p}^{-1}E_{54}^{1/8}n_{0}^{-1/8}t_{3}^{-3/8} \epsilon_{p,-1}, \tag{4}\] \[\gamma_{m,e} = 450f_{e}\xi_{e}^{-1}E_{54}^{1/8}n_{0}^{-1/8}t_{3}^{-3/8} \epsilon_{e,-2}. \tag{5}\]
where \(f_{i}\) is a function of \(p_{i}\), equal to \(f_{i}=(p_{i}-2)/(p_{i}-1)\) for \(p_{i}>2\) and to \(f_{i}=\left[\ln{(\gamma_{\rm max}/\gamma_{m})}\right]^{-1}\) for \(p_{i}=2\) (Sari et al., 1998).
Another characteristic particle Lorentz factor is obtained by equating the synchrotron cooling time to the dynamical time, providing the cooling Lorentz factor of the particle. The synchrotron cooling time is given by \(t_{syn}=(6\pi m_{i}c)/(\gamma_{i}B^{2}\sigma_{T,i})\), where \(\sigma_{T,e}\) is the Thomson cross-section and \(\sigma_{T,p}=(m_{e}^{2}/m_{p}^{2})\sigma_{T,e}\). This gives \(\gamma_{c,i}=(6\pi m_{i}c)/(\sigma_{T,i}B^{2}\Gamma t)\), resulting in
\[\gamma_{c,e} = 222.3\ t_{3}^{1/8}\epsilon_{B,-1}^{-1}E_{54}^{-3/8}n_{0}^{-5/8}, \tag{6}\] \[\gamma_{c,p} = 1.4\times 10^{12}\ t_{3}^{1/8}\epsilon_{B,-1}^{-1}E_{54}^{-3/8}n_{ 0}^{-5/8}. \tag{7}\]
A proton synchrotron model requires the magnetic field to be large, and therefore, from Equations (5) and (6), it follows that \(\gamma_{c,e}<\gamma_{m,e}\), hence the electrons are in the fast cooling regime, and the electron distribution function is a broken power-law with index of 2 between \(\gamma_{c,e}\) and \(\gamma_{m,e}\), and \(p_{e}-1\) above \(\gamma_{m,e}\). However from Equation (4), it is seen that \(\gamma_{m,p}\ll\gamma_{c,p}\), meaning that the proton population is in the slow cooling regime.
The maximum Lorentz factor \(\gamma_{\rm max,i}\) of the accelerated particles is obtained by equating the acceleration time to the synchrotron cooling time. This gives
\[\gamma_{\rm max,i}=(6\pi q/\alpha\sigma_{T,i}B)^{1/2} \tag{8}\]
where the numerical coefficient \(\alpha\) prescribes the acceleration efficiency. The maximum Lorentz factor for protons is
\[\gamma_{\rm max,p}=7.8\times 10^{10}~{}\alpha^{-1/2}E_{54}^{-1/16}n_{0}^{-3/16}t_ {3}^{3/16}\epsilon_{B,-1}^{-1/4}. \tag{9}\]
Since \(\gamma_{\rm max,p}<\gamma_{c,p}\), the proton distribution function is a single power-law above \(\gamma_{m,p}\) with an exponential cutoff at the very high energy end of the proton distribution spectrum, producing an exponential cut-off in the resulting photon spectrum. For electrons, \(\gamma_{\rm max,e}\) is \(m_{p}/m_{e}\) times smaller than \(\gamma_{\rm max,p}\), but as the electron-synchrotron flux at such high energies is very small, we omit the discussion on it here.
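The fast/slow-cooling statements above can be verified directly from Equations (4)-(7); the following Python sketch (same normalisations as before; the parameter values are those later adopted for the left panel of Figure 4 and are only illustrative here) is one way to do so.

```python
def f_index(p):
    """f = (p - 2) / (p - 1), valid for p > 2 (Sari et al. 1998)."""
    return (p - 2.0) / (p - 1.0)

def gamma_m_e(E54, n0, t3, epse_m2, xi_e, p_e):
    """Minimum electron Lorentz factor, Eq. (5); epse_m2 = epsilon_e / 0.01."""
    return 450.0 * f_index(p_e) / xi_e * E54**0.125 * n0**-0.125 * t3**-0.375 * epse_m2

def gamma_c_e(E54, n0, t3, epsB_m1):
    """Electron cooling Lorentz factor, Eq. (6); epsB_m1 = epsilon_B / 0.1."""
    return 222.3 * t3**0.125 / epsB_m1 * E54**-0.375 * n0**-0.625

def gamma_m_p(E54, n0, t3, epsp_m1, xi_p, p_p):
    """Minimum proton Lorentz factor, Eq. (4); epsp_m1 = epsilon_p / 0.1."""
    return 6.0 * f_index(p_p) / xi_p * E54**0.125 * n0**-0.125 * t3**-0.375 * epsp_m1

def gamma_c_p(E54, n0, t3, epsB_m1):
    """Proton cooling Lorentz factor, Eq. (7)."""
    return 1.4e12 * t3**0.125 / epsB_m1 * E54**-0.375 * n0**-0.625

# Illustrative parameters (those of the left panel of Figure 4) at t = 235 s:
kw = dict(E54=50, n0=50, t3=0.235)
print(gamma_c_e(epsB_m1=1.86, **kw) < gamma_m_e(epse_m2=0.14, xi_e=0.01, p_e=8/3, **kw))  # True: electrons are fast cooling
print(gamma_m_p(epsp_m1=8, xi_p=0.04, p_p=2.3, **kw) < gamma_c_p(epsB_m1=1.86, **kw))     # True: protons are slow cooling
```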
Each of those characteristic Lorentz factors in the particle distribution functions is associated with a characteristic synchrotron frequency such that \(\nu=(3qB\gamma^{2}\Gamma)/(4\pi(1+z)m_{i}c)\). At these frequencies, the observed synchrotron spectrum presents a spectral break. For the electrons, the observed spectrum is \(F_{\nu}\propto\left(\nu^{1/3},\nu^{-1/2},\nu^{-p_{e}/2}\right)\) for \((\nu<\nu_{c,e};~{}\nu_{c,e}<\nu<\nu_{m,e};~{}\nu_{m,e}<\nu)\), where \(\nu\) is the observed frequency. The proton synchrotron spectrum is shaped as \(F_{\nu}\propto\left(\nu^{1/3},\nu^{-(p_{p}-1)/2}\right)\) for \((\nu<\nu_{m,p};~{}\nu_{m,p}<\nu<\nu_{\rm max,p})\).
The characteristic frequencies associated with particles at \(\gamma_{m}\) are given by
\[h\nu_{m,p} =7.6\times 10^{-9}~{}f_{p}^{2}E_{54}^{1/2}t_{3}^{-3/2}\epsilon_{B,-1}^{1/2}\epsilon_{p,-1}^{2}\xi_{p}^{-2}~{}{\rm eV}, \tag{10}\] \[h\nu_{m,e} =1.42~{}f_{e}^{2}E_{54}^{1/2}t_{3}^{-3/2}\epsilon_{B,-1}^{1/2}\epsilon_{e,-2}^{2}\xi_{e}^{-2}~{}{\rm eV}. \tag{11}\]
The cooling frequency for the electrons is
\[h\nu_{c,e}=0.35~{}E_{54}^{-1/2}t_{3}^{-1/2}\epsilon_{B,-1}^{-3/2}n_{0}^{-1}~{} {\rm eV}, \tag{12}\]
and the maximum frequency for the protons reads
\[h\nu_{\rm max,p}\sim 23~{}\alpha^{-1}E_{54}^{1/8}n_{0}^{-1/8}t_{3}^{-3/8}~{}~{} ~{}{\rm TeV}. \tag{13}\]
Hence, the shock-accelerated protons can emit synchrotron photons at energies as high as \(\sim 10\) TeV, above those detected by LHAASO.
The spectral flux is calculated as follows. The maximum power emitted by a single particle via the synchrotron process at the observed peak frequency is \(P_{\nu_{\rm max}}=(2m_{i}c^{2}\sigma_{T,i}B\Gamma/9q)(1+z)\). Assuming the total number of radiating particles to be \(N_{i}=4\pi\xi_{i}nr^{3}/3\), the peak flux associated with the electron synchrotron process is
\[F_{\nu_{\rm peak,e}}=\frac{N_{e}P_{\nu_{e,\rm max}}}{4\pi d_{L}^{2}}=2.7\times 1 0^{-21}\frac{\xi_{e}E_{54}\epsilon_{B,-1}^{1/2}n_{0}^{1/2}}{d_{L,27}^{2}}~{}~{ }~{}{\rm erg~{}cm^{-2}~{}s^{-1}~{}Hz^{-1}}, \tag{14}\]
where we normalised the luminosity distance \(d_{L}\) to \(10^{27}\) cm, given the proximity of GRB 221009A at redshift \(z=0.151\)(de Ugarte Postigo A. et al., 2022; Castro-Tirado et al., 2022) corresponding to \(d_{L}=2.23\times 10^{27}\) cm (assuming a flat \(\Lambda\)CDM cosmology with \(H_{0}=69.6\) km s\({}^{-1}\) Mpc\({}^{-1}\), \(\Omega_{M}=0.286\), and \(\Omega_{\Lambda}=0.714\), Wright, 2006). Since in our scenario the electrons are in the fast cooling regime, analysing the spectrum reveals that \(\nu_{c,e}<\nu_{o}<\nu_{m,e}<\nu_{XRT}\), where \(\nu_{XRT}=0.3\) keV is the low energy threshold of the XRT instrument and \(\nu_{o}\sim 1\) eV is the typical frequency of the optical band. The flux at frequencies lower than \(\nu_{m,e}\) and greater than \(\nu_{c,e}\) is given by
\[F_{\nu,e}=F_{\nu_{\rm peak,e}}(\nu/\nu_{c,e})^{-1/2}, \tag{15}\]
while the flux at frequencies higher than \(\nu_{m,e}\) is given by
\[F_{\nu,e}=F_{\nu_{\rm peak,e}}(\nu_{m,e}/\nu_{c,e})^{-1/2}(\nu/\nu_{m,e})^{-p_ {e}/2}. \tag{16}\]
Similarly, the peak flux associated with the proton-synchrotron process is given by
\[F_{\nu_{\rm peak,p}}=\frac{N_{p}P_{\nu_{p,\rm max}}}{4\pi d_{L}^{2}}=1.46 \times 10^{-24}\frac{\xi_{p}E_{54}\epsilon_{B,-1}^{1/2}n_{0}^{1/2}}{d_{L,27}^{2}}~ {}~{}~{}{\rm erg~{}cm^{-2}~{}s^{-1}~{}Hz^{-1}}, \tag{17}\]
and the spectrum within the frequency range \(\nu_{m,p}\leq\nu\leq\nu_{\rm max,p}\) for the slow-cooling synchrotron process is a power-law
\[F_{\nu,p}=F_{\nu_{\rm peak,p}}(\nu/\nu_{m,p})^{\frac{-(p_{p}-1)}{2}}. \tag{18}\]
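For reference, the piecewise spectra of Equations (10)-(18) can be coded compactly; the sketch below uses the same normalisations as above (with \(d_{L}\) in units of \(10^{27}\) cm) and, given the rounding of the order-unity coefficients quoted in the text, its output should be read only as an order-of-magnitude estimate, not as a reproduction of the published fits.

```python
import numpy as np

H_EV_S = 4.136e-15  # Planck constant in eV s, to convert photon energy to frequency

def electron_flux(nu_eV, E54, n0, t3, epsB_m1, epse_m2, xi_e, p_e, dL27=2.23):
    """Fast-cooling electron-synchrotron flux density (erg cm^-2 s^-1 Hz^-1), Eqs. (11)-(12), (14)-(16); valid for nu > nu_c,e."""
    f_e = (p_e - 2.0) / (p_e - 1.0)
    nu_m = 1.42 * f_e**2 * E54**0.5 * t3**-1.5 * epsB_m1**0.5 * epse_m2**2 / xi_e**2  # eV
    nu_c = 0.35 * E54**-0.5 * t3**-0.5 * epsB_m1**-1.5 / n0                           # eV
    f_peak = 2.7e-21 * xi_e * E54 * epsB_m1**0.5 * n0**0.5 / dL27**2
    nu_eV = np.asarray(nu_eV, dtype=float)
    return np.where(nu_eV < nu_m,
                    f_peak * (nu_eV / nu_c) ** -0.5,
                    f_peak * (nu_m / nu_c) ** -0.5 * (nu_eV / nu_m) ** (-p_e / 2.0))

def proton_flux(nu_eV, E54, n0, t3, epsB_m1, epsp_m1, xi_p, p_p, dL27=2.23):
    """Slow-cooling proton-synchrotron flux density (erg cm^-2 s^-1 Hz^-1), Eqs. (10), (17)-(18)."""
    f_p = (p_p - 2.0) / (p_p - 1.0)
    nu_m = 7.6e-9 * f_p**2 * E54**0.5 * t3**-1.5 * epsB_m1**0.5 * epsp_m1**2 / xi_p**2  # eV
    f_peak = 1.46e-24 * xi_p * E54 * epsB_m1**0.5 * n0**0.5 / dL27**2
    return f_peak * (np.asarray(nu_eV, dtype=float) / nu_m) ** (-(p_p - 1.0) / 2.0)

# nu F_nu of the proton component at 1 TeV (1e12 eV), for the left-panel parameters of Figure 4 at t = 235 s.
nu = 1.0e12
print(nu / H_EV_S * proton_flux(nu, E54=50, n0=50, t3=0.235,
                                epsB_m1=1.86, epsp_m1=8, xi_p=0.04, p_p=2.3))
```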
## 4 Limitations on the model parameters: an analytical approach
The best fit to the temporal decay of the X-ray flux as observed by the _swift_-XRT consists of five breaks at times between \(T_{0}+3.27\times 10^{4}\) s and \(T_{0}+4.5\times 10^{6}\) s. The corresponding decay indices at the break times are presented in the XRT catalogue of the burst (Evans et al., 2009). At observed times \(t<3\times 10^{4}\) s, the decay index in the XRT band is identified to be \(\sim-3/2\).\({}^{3}\) At later times, the lightcurve becomes steeper, and may be associated with a jet break. In order to reproduce the temporal decay in the XRT band with \(\nu_{c}<\nu_{m}<\nu_{XRT}\) (fast cooling regime, as required by the electron synchrotron model), the condition \(p_{e}\sim 8/3\sim 2.67\) must be satisfied, as \(F_{\nu}\propto t^{\frac{2-3p}{4}}\).\({}^{4}\)
Footnote 3: [https://www.swift.ac.uk/xrt_live_cat/01126853/](https://www.swift.ac.uk/xrt_live_cat/01126853/)
Footnote 4: In the slow cooling regime, the requirement on the electrons power law index is even higher with \(p\sim 3\) for \(\nu_{m}<\nu_{XRT}<\nu_{c}\).
On the other hand, the TeV temporal decay is \(-1.115\pm 0.012\)(LHAASO-Collaboration et al., 2023). This result is challenging for the SSC model, as it lies outside the expected temporal slope for electrons with a power-law index of \(2.67\), which is \(-1.625\) (Sari and Esin, 2001) in the fast cooling regime.\({}^{5}\) We can therefore exploit the proton-synchrotron model, with a proton power-law index of \(\approx 2.2-2.3\) in the slow cooling regime, as explained above. This is the index needed to explain the LHAASO data. In fact, setting \(p_{e}\equiv p_{p}\sim 8/3\) leads to an energy crisis, as the energy budget in the proton-synchrotron component will be very high. Therefore, in presenting our results for GRB 221009A, we will set a proton power-law index \(p_{p}=2.3\), but allow for a different value of \(p_{p}=2.2\) in interpreting the TeV data. We will keep the constraint \(p_{e}=8/3\) in order to satisfy the XRT temporal decay.
Footnote 5: In the slow cooling regime, assuming \(\nu_{m}<\nu_{XRT}<\nu_{c}\), the expected temporal decay in the TeV band is \(-2\), for \(p=3.0\).
### Constraints for a proton-synchrotron model
We use the observed flux of GRB 221009A to constrain the values of the free model parameters. We start with the flux at 1 TeV as observed by the LHAASO experiment at time \(t=235\) s after the trigger. By equating the LHAASO reported flux \([\nu F_{\nu}]_{p}\approx 2\times 10^{-6}\) ergs cm\({}^{-2}\) s\({}^{-1}\)(LHAASO-Collaboration et al., 2023) and the expected proton-synchrotron flux given in Equation (18) at 1 TeV, the fractional energy of the magnetic field as a function of the other free parameters is,
\[\epsilon_{B,-1}=0.1\cdot(8.94\cdot 10^{-8})^{\frac{4}{p_{p}+1}}(2.53\cdot 10 ^{18})^{\frac{2(p_{p}-1)}{p_{p}+1}}\,f_{p_{p}}^{-\frac{4p_{p}-4}{p_{p}+1}} \xi_{p}^{\frac{4(p_{p}-2)}{(p_{p}+1)}}\,E_{54}^{-\frac{p_{p}+3}{p_{p}+1}}\, \epsilon_{p,-1}^{-\frac{4(p_{p}-1)}{(p_{p}+1)}}\,n_{0}^{-\frac{2}{p_{p}+1}},\]
which upon setting \(p_{p}=2.3\) simplifies to
\[\epsilon_{B,-1}=9.1\times 10^{5}\ \xi_{p}^{4/11}E_{54}^{-53/33}\epsilon_{p,-1}^{- 52/33}n_{0}^{-20/33}. \tag{19}\]
This value of \(\epsilon_{B}\) may seem large, but so is the kinetic energy of this burst, leading in fact to values of \(\epsilon_{B}\) smaller than unity; see the top panels of Figure 1.
Similarly, the acceleration efficiency parameter \(\alpha\) is obtained by comparing \(\nu_{\max,p}\) from Equation (13) with the observed energy of 7 TeV photon at \(t=235\) s,
\[\alpha=5.7\ E_{54}^{1/8}\,n_{0}^{-1/8}. \tag{20}\]
To obtain the values of the other model parameters, we refer to the _swift_-XRT data available at much later times, when the emission is clearly in the "afterglow" phase in this band as well. We use Equation (16), which gives the expected energy flux above \(\nu_{m,e}\), together with the observed _swift_-XRT flux at 3300 s of \(1.087\times 10^{-7}\) erg s\({}^{-1}\) cm\({}^{-2}\) (at 0.3 keV), and include Equation (19) to get the injection fraction of electrons,
\[\xi_{e}\simeq(8.94\cdot 10^{-8})^{\frac{1}{p_{p}+1}}\,(2.53\cdot 1 0^{18})^{\frac{p_{p}-1}{2p_{p}+2}}\,0.025^{-\frac{1}{p_{e}-2}}6.44^{-\frac{p_{ e}}{2(p_{e}-2)}}\,10^{\frac{1-p_{p}}{p_{e}-2}}\,f_{p_{e}}^{\frac{p_{e}-1}{p_{e}- 2}}\,f_{p_{p}}^{\frac{1-p_{p}}{p_{p}+1}}\] \[\times E_{54}^{\frac{2p_{p}-p_{e}+4}{(2p_{e}-4)p_{p}+2p_{e}-4}} \,\epsilon_{e,-2}^{\frac{p_{e}-1}{p_{e}-2}}\,\epsilon_{p,-1}^{\frac{1-p_{p}}{p _{p}+1}}\xi_{p}^{\frac{p_{p}-2}{p_{p}+1}}\,n_{0}^{-\frac{1}{2p_{p}+2}}\,.\]
Setting \(p_{e}=8/3\) and \(p_{p}=23/10\) gives
\[\xi_{e}\simeq 0.11\ E_{54}^{89/66}\xi_{p}^{1/11}\epsilon_{e,-2}^{5/2}n_{0}^{-5/33} \epsilon_{p,-1}^{-13/33}. \tag{21}\]
By balancing the _swift_-UVOT band flux of \(1.22\times 10^{-8}\) erg s\({}^{-1}\)cm\({}^{-2}\) at observed energy 4.77 eV and observed time \(t=4000\) s with the predicted synchrotron flux from Equation (15) and including Equations (19) and (21), we get
\[\epsilon_{e,-2}\approx 10\cdot 0.025^{\frac{1}{p_{e}-1}}0.06^{\frac{p_{e}-2}{p_{e}- 1}}6.44^{\frac{p_{e}}{2(p_{e}-1)}}E_{54}^{-1}{f_{p_{e}}}^{-1},\]
which gives for \(p_{e}=8/3\)
\[\epsilon_{e,-2}\approx 3.9\ E_{54}^{-1}. \tag{22}\]
Figure 1: Parameters \(\epsilon_{B}\) (top) and \(\xi_{e}\) (middle) as functions of the total energy and ambient density, \(E_{54}\) and \(n_{0}\), for \(\epsilon_{p}=0.8\), where an assumed fraction of 10% of the protons is accelerated into a power law (\(\xi_{p}=0.1\), left) or 1% (\(\xi_{p}=0.01\), right). Bottom: the acceleration efficiency \(\alpha\) as a function of \(E_{54}\) and \(n_{0}\). The plots are obtained from Equations (19), (23), and (20), respectively. The white region is forbidden as it corresponds to \(\epsilon_{B}>1\), as seen from the top panels. The assumptions \(p_{e}=8/3\) and \(p_{p}=2.3\) are enforced for these figures.
This enables to write the parameter \(\xi_{e}\) as
\[\xi_{e}=0.06\cdot(8.94\cdot 10^{-8})^{\frac{1}{p_{p}+1}}\;(2.53\cdot 10^{18})^{ \frac{p_{p}-1}{2p_{p}+2}}f_{p_{p}}{}^{\frac{1-p_{p}}{p_{p}+1}}\,E_{54}{}^{\frac{ -2p_{p}-3}{2p_{p}+2}}\;\epsilon_{p,-1}{}^{\frac{1-p_{p}}{p_{p}+1}}\,\xi_{p}{}^{ \frac{p_{p}-2}{p_{p}+1}}\,n_{0}{}^{-\frac{1}{2p_{p}+2}}\;,\]
which simplifies to
\[\xi_{e}=3.2\ \xi_{p}^{1/11}E_{54}^{-38/33}n_{0}^{-5/33}\epsilon_{p,-1}^{-13/33}, \tag{23}\]
for our chosen index values.
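To reproduce the parameter scan shown in Figure 1, Equations (19), (20), (22) and (23) can be wrapped into a few functions; the Python sketch below is only an illustration of these scalings (valid for the specific choice \(p_{e}=8/3\), \(p_{p}=2.3\)), with the grid limits chosen arbitrarily.

```python
import numpy as np

def eps_B_m1(E54, n0, xi_p, epsp_m1):
    """epsilon_B / 0.1 implied by the 1 TeV LHAASO flux, Eq. (19)."""
    return 9.1e5 * xi_p**(4/11) * E54**(-53/33) * epsp_m1**(-52/33) * n0**(-20/33)

def alpha_acc(E54, n0):
    """Acceleration-efficiency parameter implied by the 7 TeV maximum photon energy, Eq. (20)."""
    return 5.7 * E54**0.125 * n0**-0.125

def eps_e_m2(E54):
    """epsilon_e / 0.01 implied by the UVOT flux, Eq. (22)."""
    return 3.9 / E54

def xi_e(E54, n0, xi_p, epsp_m1):
    """Electron injection fraction implied by the XRT flux, Eq. (23)."""
    return 3.2 * xi_p**(1/11) * E54**(-38/33) * n0**(-5/33) * epsp_m1**(-13/33)

# Scan the (E_54, n_0) plane as in Figure 1 and flag the region allowed by epsilon_B < 1.
E54, n0 = np.meshgrid(np.logspace(0, 2, 200), np.logspace(0, 2, 200))
allowed = 0.1 * eps_B_m1(E54, n0, xi_p=0.1, epsp_m1=8.0) < 1.0
print(allowed.mean())  # fraction of the scanned plane where epsilon_B < 1
```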
The values of \(\epsilon_{B}\), \(\alpha\) and \(\xi_{e}\), as constrained by Equations (19), (20) and (23) are plotted in Figure 1 as functions of \(E_{54}\) and \(n_{0}\) for \(\epsilon_{p}=0.8\) and two choices of \(\xi_{p}\), namely \(\xi_{p}=0.1\) (left column) and \(\xi_{p}=0.01\) (right column). Satisfying \(\epsilon_{B}<1\) directly requires that \(E_{54}\), \(n_{0}\) and \(\epsilon_{p}\) should be large while \(\xi_{p}\) needs to be small. Overall, we could constrain the parameters \(\epsilon_{B}\), \(\epsilon_{e}\), \(\xi_{e}\) and \(\alpha\) as functions of the other free model parameters. To provide satisfactory constraints with \(\epsilon_{e}+\epsilon_{B}+\epsilon_{p}<1\), the kinetic energy of this burst must be large with \(E_{54}>10\). Yet this is not too large compared to the prompt total isotropic energy. In fact, this high kinetic energy would correspond to an efficiency of around 10%, typical of other GRBs (see e.g. Zhang et al., 2007; Beniamini et al., 2016). We note that our goal here is to provide a set of parameters that could potentially explain the TeV observations via the proton synchrotron process and not to determine the best possible parameter values.
We thus find that a requirement of our model is that accelerated protons contribute most of the internal energy of the shock, with \(\epsilon_{p}\gtrsim 0.1\). The magnetic field needs to be strong, \(\epsilon_{B}\gtrsim 10^{-2}\), and the circumburst density should be high, \(n_{0}\gtrsim 10\). All these require a high kinetic energy. Furthermore, the model requires that only a relatively small fraction of electrons and protons achieve a power-law distribution behind the shock, _i.e._\(\xi_{e}\sim 10^{-2}\) and \(\xi_{p}\lesssim 10^{-1}\), and that their spectral indices be different, \(p_{e}\neq p_{p}\).
### Constraints imposed by the synchrotron-self Compton (SSC) emission.
In the SSC emission mechanism, the electrons that emit the low-energy synchrotron photons inverse Compton (IC) scatter the same photons to higher energies, thus contributing to the high energy component of the afterglow spectrum. As we derive in detail in Appendix A, for the parameters chosen, this component is sub-dominant. Here we present the results only for \(p_{e}=8/3\) and \(p_{p}=23/10\), but the general trend applies for other values of the injection index \(p_{p}\). Within the framework of our model, the Klein-Nishina effect for the IC component can be neglected. This is shown by using Equations (1), (5) and (11) at \(t=235\) s, to find
\[\frac{\gamma_{m,e}h\nu_{m,e}}{\Gamma}\sim 0.15\ E_{54}^{5/33}\epsilon_{p,-1}^{ 13/33}n_{0}^{5/33}\xi_{p}^{-1/11}\ \mathrm{MeV}. \tag{24}\]
Figure 2: The parameter regime \(\epsilon_{B}\), \(\epsilon_{e}\) and \(\epsilon_{p}\) is explored in the context of \(p_{e}=8/3,p_{p}=23/10,E_{54}=50\), \(n_{0}=50\), corresponding to \(\xi_{e}=0.01\) and \(\xi_{p}=0.04\) for the condition (25). Notably, the proton-synchrotron process dominates mostly in regions where \(\epsilon_{B}>\epsilon_{e}\) (Zone (ii)), while the SSC process takes over when \(\epsilon_{e}>\epsilon_{B}\) (Zone (i)). Zone (ii) is shown to have different boundaries based on the value of \(\epsilon_{p}\) and these boundaries visually distinguish both the regions. It is clear that the dominance of proton synchrotron emission becomes more prominent as \(\epsilon_{p}\) increases. The hatched regions represent the condition \(\epsilon_{B}+\epsilon_{p}+\epsilon_{e}\leq 1\) for each \(\epsilon_{p}\) value and regarded as forbidden zones.
This result is of the order of (and even smaller than) \(m_{e}c^{2}\), the energy at which the Klein-Nishina effect becomes important. One therefore only expects small modifications, if any, around the peak of the IC component.
The characteristic frequencies of the IC spectral component for this burst are presented in Appendix A. Equation (A2) gives the observed peak energy of the IC spectrum to be around \(h\nu_{m,IC}\sim 30~{}\mathrm{GeV}\). To determine the criterion that governs the dominance of the proton-synchrotron component over the SSC component, we compare the fluxes at 1 TeV and at 235 s as \([\nu F_{\nu}]_{p}/[\nu F_{\nu}]_{IC}\gtrsim 1\)(Zhang & Meszaros, 2001). We can expand it using Equations (18), (17), (10) and (A5) to get the following:
\[0.15~{}E_{54}^{3/40}n_{0}^{7/12}\xi_{e}^{4/3}\xi_{p}^{-3/10}\epsilon_{B,-1}^{1 99/120}\epsilon_{p,-1}^{13/10}\epsilon_{e,-2}^{-10/3}\gtrsim 1. \tag{25}\]
This condition is displayed in Figure 2. The findings depicted in Figure 2 support the notion that \(\epsilon_{B}\) must exceed \(\epsilon_{e}\) in order to satisfy the above condition. It is also evident that a near equipartition value of \(\epsilon_{p}\) reinforces the significant contribution from the proton-synchrotron process.
From Equation (A5), the flux at 1 TeV and \(t_{3}=0.235\), using the results of Equations (19) and (21) reads
\[[\nu F_{\nu}]_{IC}|_{1~{}\mathrm{TeV}}=3.4\times 10^{-14}~{}E_{54}^{313/396} \epsilon_{p,-1}^{182/99}n_{0}^{247/396}\xi_{p}^{-42/99}~{}~{}\mathrm{ergs~{} cm^{-2}~{}s^{-1}}, \tag{26}\]
where we assumed that the TeV band is above the frequency of the IC spectral peak, and neglected Klein-Nishina effects. If anything, this effect would further reduce the observed flux in the TeV band, and allow for a larger parameter space in which the proton synchrotron mechanism dominates. This value is \(\approx 8\) orders of magnitude lower than the observed flux at the TeV band, implying that the electron SSC mechanism is sub-dominant at these energies for the constraints we derived.
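The dominance criterion of Equation (25) can be evaluated directly for any parameter set; the one-function Python sketch below does this for the illustrative parameters used later for the left panel of Figure 4.

```python
def proton_to_ssc_ratio(E54, n0, xi_e, xi_p, epsB_m1, epsp_m1, epse_m2):
    """Left-hand side of Eq. (25): [nu F_nu]_p / [nu F_nu]_IC at 1 TeV and 235 s (p_e = 8/3, p_p = 2.3)."""
    return (0.15 * E54**(3/40) * n0**(7/12) * xi_e**(4/3) * xi_p**(-3/10)
            * epsB_m1**(199/120) * epsp_m1**(13/10) * epse_m2**(-10/3))

# For the left-panel parameters of Figure 4 the ratio is much larger than unity,
# i.e. proton synchrotron dominates over SSC at 1 TeV.
print(proton_to_ssc_ratio(E54=50, n0=50, xi_e=0.01, xi_p=0.04,
                          epsB_m1=1.86, epsp_m1=8.0, epse_m2=0.14))
```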
## 5 Lightcurve and spectra
The observed lightcurve of the afterglow of GRB 221009A is shown in Figure 3 alongside the electron and proton synchrotron components estimated from our model. The two different sets of parameters used in this section are summarized in Table 1. We set the parameters to \(p_{e}=8/3\), \(p_{p}=2.3\), \(E_{54}=50,n_{0}=50\), \(\epsilon_{p,-1}=8\) and \(\xi_{p}=0.04\), resulting in \(\epsilon_{B,0}=0.186\), \(\epsilon_{e,0}=0.0014\) and \(\xi_{e}=0.01\). The onset of the afterglow phase at 226 s after the burst trigger is marked by the dotted line (LHAASO-Collaboration et al., 2023) whereas the dashed line drawn at t = 597 s marks the end of the prompt duration estimated by Lesage et al. (2023). Overall, this figure demonstrates that the proton synchrotron model presented here is capable of explaining the observed temporal features of the afterglow of GRB 221009A in several different energy bands.
Figure 3: The multi-wavelength afterglow light curve for the proton synchrotron model along with the observed data of GRB 221009A for \(p_{e}=8/3,p_{p}=23/10,E_{54}=50\), \(n_{0}=50\), \(\epsilon_{p,-1}=8\) and \(\xi_{p}=0.04\), corresponding to \(\epsilon_{B,0}=0.186\), \(\epsilon_{e,0}=0.0014\) and \(\xi_{e}=0.01\). The _swift_- UVOT (white filter) and BAT data were retrieved from Tables 4 and 6 of Williams et al. (2023) and the _fermi_-LAT data were obtained from Table 5 of Laskar et al. (2023). The optical data is corrected for galactic extinction with \(A_{v}=5.4\)(Shrestha et al., 2023; Fulton et al., 2023). The black dotted line at 226 s marks the onset of the afterglow phase corresponding to LHAASO-Collaboration et al. (2023), while the black dashed line at \(T_{0}=597\) s marks the afterglow onset determined by Lesage et al. (2023).
We further produce the afterglow spectrum at two different times, namely at \(t=235\) s and \(t=4000\) s, and display it in Figure 4 (both in the left and right panels). The spectral energy distributions (hereinafter SEDs) consist of three main components. They are produced by the electron-synchrotron, the proton-synchrotron and the IC processes and are respectively shown by the blue, red and black lines. Inspection of Figure 4 (left), obtained for \(p_{e}=8/3\) and \(p_{p}=2.3\), shows that the SED of the electron-synchrotron component at 4000 s satisfies the observed data in the optical and X-ray bands, as designed in our analytic approach. At the same time, the proton synchrotron component accounts for the LHAASO flux, and also marginally accounts for the LAT flux.
We then search for solutions with a smaller proton index, \(p_{p}=2.2\). This is advantageous as it results in a lower burst energy and external density. The right panel of Figure 4 shows the SEDs for the parameters \(E_{54}=30\), \(n_{0}=30\), \(\xi_{p}=0.1\), \(\epsilon_{p,-1}=8\), resulting in \(\epsilon_{B}=0.17\), \(\xi_{e}=0.01\), \(\epsilon_{e}=0.002\) and \(\alpha=6\). In addition, under this assumption, the required total burst energy \(E\) is lower than for \(p_{p}=2.3\). Hence the prompt radiative efficiency is increased here. We see that, in this case too, the LAT flux at 4000 s can be marginally explained by the proton synchrotron process.
The close to equipartition values of \(\epsilon_{B}\) and \(\epsilon_{p}\) associated with all the SEDs are as anticipated for the proton-synchrotron model, see Equations (19) and (23). Indeed, protons are more massive than electrons and to radiate a substantial amount of energy, they need a strong magnetic field. We further find that the IC components (black lines in the SEDs shown in Figure 4) are subdominant at all time bins in both scenarios considered, since the magnetic field energy density is large compared to the electron energy density (\(U_{B}\gg U_{e}\), see e.g. Rybicki & Lightman, 1986). Indeed, for the constraints we derived, using \(p_{e}=8/3\) and \(p_{p}=23/10\), one obtains
\[U_{B} =2.1\times 10^{6}~{}E_{54}^{-179/132}~{}\epsilon_{p,-1}^{-52/33}n_{0}^{19/132}t_{3}^{-3/4}\xi_{p}^{4/11},\] \[\text{and}~{}~{}U_{e} =0.14~{}E_{54}^{-\frac{5}{8}}~{}n_{0}^{\frac{5}{8}}~{}t_{3}^{-\frac{3}{8}}.\]
The corresponding ratios of the energy densities in our models are given in Table 1.
The lightcurve of the TeV emission is shown to have a break followed by a steeper temporal decay at time \(T_{0}+896~{}(+230,-110)\) s (LHAASO-Collaboration et al., 2023). This break could be obtained by the crossing of the maximum synchrotron frequency \(\nu_{\text{max}}\) through the LHAASO energy band. For the parameters we derived in case \(p_{e}=8/3\) and \(p_{p}=2.3\), the time at which \(\nu_{\text{max}}\) equals 1 TeV is \(t_{3}=42\). In principle, it is possible to use this property to better constrain the acceleration efficiency, \(\alpha\) and the other model parameters. However, this constraint depends on the exact parametrization of the proton distribution function at the highest energies, and therefore would bring only little insight into the model, apart from better constraining \(\alpha\).
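The interpretation of the TeV break as the passage of \(\nu_{\rm max,p}\) through the band follows from inverting Equation (13); a rough numerical sketch (same normalisations as before, and sensitive to the rounding of the coefficient \(\sim 23\)) is:

```python
def t3_numax_crossing(E54, n0, alpha, E_TeV=1.0):
    """Time (in units of 10^3 s) at which h nu_max,p of Eq. (13) drops to E_TeV."""
    return (23.0 / (alpha * E_TeV) * E54**0.125 * n0**-0.125) ** (8.0 / 3.0)

# Left-panel parameters of Figure 4 (alpha = 6): a few tens in units of 10^3 s,
# comparable to the value quoted above given the rounded coefficients.
print(t3_numax_crossing(E54=50, n0=50, alpha=6.0))
```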
## 6 Discussion
### Comparison between GRB 221009A and GRB 190114C
Similar to GRB 221009A, GRB 190114C is another long GRB with a VHE afterglow emission observed in the band between 0.2 and 1 TeV (Acciari et al., 2019). This GRB has an isotropic equivalent energy \(E_{iso}\simeq 2.5\times 10^{53}\) erg. The redshifts of both GRB 190114C (\(z=0.4245\)) and GRB 221009A are low, \(z<0.5\). As a result, the detectability of \(\geq 1\) TeV photons, if produced in the source, is high, since they are only weakly EBL attenuated (see e.g. Franceschini, 2021). Similarly to GRB 221009A, a complete set of multi-wavelength observational data is available for GRB 190114C afterglow (see _e.g._ MAGIC Collaboration et al., 2019, for a summary of those observations). In Isravel et al. (2022), we considered the proton synchrotron mechanism to explain the VHE afterglow of GRB 190114C.
One can therefore compare the parameters we derived for GRB 221009A to those we obtained for GRB 190114C, in order to outline some of the intriguing features of long GRBs associated with VHE afterglow observations. Within the framework of a proton synchrotron model, these features are as follows:
#### 6.1.1 Particle index
The parameters associated with \(p_{e}\neq p_{p}\) yielded the best results for these bursts. One potential cause of protons and electrons having distinct indices is the non-uniformity of the power-law turbulence spectrum of the magnetic field across a wide range of scales (Asano et al., 2009). This could be realized under several circumstances: a) a difference in scales corresponding to the gyration radii of protons attaining \(10^{20}\) eV, and electrons reaching GeV energies (Cerruti et al., 2015), respectively, and b) a variation in the wavelength distribution of shock-generated magnetic perturbations and in the geometry of the magnetic field at the shock front (Niemiec et al., 2006). On the other hand, the acceleration processes setting these power-law indices at the shock can have varied properties depending upon the orientation of the magnetic field relative to the shock (Caprioli and Spitkovsky, 2014a,b,c). Apart from these, a two-component jet model could also feature different power-law indices. In this model, a narrow, highly collimated jet could account for the TeV emission produced by protons, while a less collimated, wider jet could be responsible for the low-energy emission from electrons (Berger et al., 2003; Huang et al., 2004; Sato et al., 2023).

Table 1: Parameters used to construct the SEDs in Figure 4.

| | \(p_{e}\) | \(p_{p}\) | \(E\) | \(n\) | \(\xi_{p}\) | \(\epsilon_{p}\) | \(\epsilon_{B}\) | \(\epsilon_{e}\) | \(\xi_{e}\) | \(\alpha\) | \(\eta\) | \(U_{B}/U_{e}\) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Figure 4 left | 8/3 | 2.3 | \(5\times 10^{55}\) erg | 50 cm\({}^{-3}\) | 0.04 | 0.8 | \(18.57\times 10^{-2}\) | \(1.4\times 10^{-3}\) | 0.01 | 6 | 5.7% | \(8.1\times 10^{3}\,t_{3}^{-3/8}\) |
| Figure 4 right | 8/3 | 2.2 | \(3\times 10^{55}\) erg | 30 cm\({}^{-3}\) | 0.1 | 0.8 | \(17\times 10^{-2}\) | \(2\times 10^{-3}\) | 0.01 | 6 | 9.1% | \(5.2\times 10^{3}\,t_{3}^{-3/8}\) |
#### 6.1.2 Embedded magnetic field
Requiring that proton synchrotron emission explains the TeV observations results in the necessity of having a strong magnetic field, with \(\epsilon_{B}\) close to equipartition. Similarly, the electron equipartition energy must be low, \(\epsilon_{e}\lesssim 10^{-3}\). Even though the inference \(\epsilon_{B}\sim 10^{-1}\) is inconsistent with the analytical models and particle-in-cell (PIC) simulations of un-magnetized plasma (Medvedev, 2006; Sironi and Spitkovsky, 2011), such a relatively high \(\epsilon_{B}\) can possibly be explained as follows. The connection between long-GRBs and supernovae could account for the highly magnetized environment as well as the low \(\epsilon_{e}\) (see _e.g._ Kippen et al., 1998; Bosnjak et al., 2006; Campana et al., 2006; Klose et al., 2019). By performing global fitting of the emission in six supernova remnants (SNRs), Reynolds et al. (2021) found \(\epsilon_{B}\) to be between \(10^{-3}\) and \(10^{-1}\), while \(\epsilon_{e}\) achieves a smaller value in the range of \(10^{-4}-5\cdot 10^{-2}\). These equipartition parameters were determined owing to the advantageous conditions provided by SNRs. This is analogous to the findings in our model for this GRB and GRB 190114C (Isravel et al., 2022). Also, the compression of the upstream turbulent magnetic field by the shock may amplify its strength (Lemoine and Revenu, 2006). The type of turbulence spectrum may also dictate the strength of the magnetic field, such as Kolmogorov turbulence (Biermann and Strittmatter, 1987) and Kraichnan turbulence (Kraichnan, 1965). Alternatively, the reverse shock approximation could also be invoked for proton acceleration as it could harbor such a high magnetization (Waxman and Bahcall, 2000; Zhang et al., 2018). To explain the VHE emission of GRB 221009A with synchrotron emission from protons accelerated at the reverse shock, Zhang et al. (2022) implemented a strong magnetic field with \(\epsilon_{B}\sim 0.5\).
#### 6.1.3 Injection fraction
The measured fluxes, together with the large kinetic energy, require the fraction of particles accelerated into the power-law to be considerably low, \(\xi_{e}\approx 10^{-2}\) and \(\xi_{p}\approx 10^{-2}\). These values are consistent with the numerical estimation of \(\xi_{e}\lesssim 10^{-1}\) considering the acceleration of charged particles in collisionless shocks (Sironi and Spitkovsky, 2011). A value of \(\xi\) lower than unity is gaining attention in GRB afterglow theories and modeling, see e.g. Ressler and Laskar (2017); Warren et al. (2018); Cunningham et al. (2020); Asano et al. (2020). For instance, for the very nearby (\(z\sim 0.07\)) GRB 190829A, \(\xi<1\) is obtained by a fit to the data (Salafia et al., 2022). Also, Gill & Granot (2023) concluded their investigation of GRB 221009A by emphasizing the small value of \(\xi_{e}\sim 10^{-2}\) required by their analysis. Such a low injection fraction of electrons and protons indicates that there is a large population of thermal particle species present in the downstream of the shock (Ressler & Laskar, 2017). The thermal particles can indeed be anticipated to emit synchrotron radiation at radio frequencies during the early phase according to Eichler & Waxman (2005). However, the contribution of these thermal electrons and protons in producing the GRB spectra is yet to be studied in detail (Warren et al., 2018).
#### 6.1.4 Progenitor environment
The progenitor of long-GRBs is a massive star which collapses to form a compact object (see _e.g._ Kumar & Zhang, 2015). The low-metallicity Wolf-Rayet stars (\(20-25M_{\odot}\)) with low mass-loss rates (\(\sim 10^{-7}M_{\odot}\mathrm{yr}^{-1}\)) are believed to be progenitors for the collapsar model for which the circumburst density is \(n\gtrsim 10\) cm\({}^{-3}\)(Woosley et al., 2002; Fryer et al., 2006). Considering \(\xi=0.01\) in our model, the external medium density of these long duration bursts ranges between \(10-10^{2}\) cm\({}^{-3}\). When \(\xi=1\), for a constant ISM medium, the circumburst density estimated in theoretical models is around \(n\leq 1\) cm\({}^{-3}\)(Beniamini et al., 2015; Gompertz et al., 2018; Derishev & Piran, 2021; Guarini et al., 2023), and can even reach \(\sim 100\) cm\({}^{-3}\)(Laskar et al., 2015) in some specific cases.
#### 6.1.5 Energetics of GRBs
For those two bursts, the required kinetic energy is larger by about an order of magnitude than their prompt-phase equivalent energy \(E_{iso}\). This leads to a radiative efficiency \(\eta=E_{iso}/(E_{k}+E_{iso})\) of the order of 10%. For the proton synchrotron model, we estimated \(\eta\) to be about 8% for GRB 190114C and 9% for GRB 221009A considering \(p_{e}=8/3\) and \(p_{p}=2.2\) for the latter. Moreover, if the jet observed in GRB 221009A is considerably narrow with a half-opening angle of \(\theta\sim 0.8^{\circ}\) as reported by LHAASO-Collaboration et al. (2023), then the jet kinetic energy after correcting for beaming is \(E_{k,jet}=(\theta^{2}/2)E_{k}\sim 3\times 10^{51}\) erg for \(p_{e}=8/3\) and \(p_{p}=2.2\) while for \(p_{e}=8/3\) and \(p_{p}=2.3\) it is \(E_{k,jet}\sim 5\times 10^{51}\) erg. This is in agreement with the expectations of the amount of energy stored in GRBs (Frail et al., 2001). This analysis is based on the Konus-Wind estimation of \(E_{iso}\sim 3\times 10^{54}\) erg (Frederiks et al., 2022). However, _Insight_-HXMT in conjunction with the GECAM-C measured the isotropic equivalent energy of GRB 221009A to be \(1.5\times 10^{55}\) erg (An et al., 2023). The latter measurement is five times higher than the former. It is worth highlighting that adopting \(E_{iso}=1.5\times 10^{55}\) erg in our model leads to an increase in \(\eta\), which becomes around 33% for \(p_{e}=8/3\) and \(p_{p}=2.2\) and \(\sim 23\%\) considering \(p_{e}=8/3\) and \(p_{p}=2.3\), while the other parameters remain unchanged. We emphasize that both estimates of \(E_{iso}\) values yield reasonable prompt phase-energy conversion efficiencies.
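The efficiency and beaming-corrected energies quoted above follow from simple arithmetic; a minimal sketch (using the Konus-Wind \(E_{iso}\) and the kinetic energy of the \(p_{p}=2.2\) solution as illustrative inputs) is:

```python
import math

def radiative_efficiency(E_iso, E_k):
    """Prompt radiative efficiency eta = E_iso / (E_k + E_iso)."""
    return E_iso / (E_k + E_iso)

def beaming_corrected_energy(E_k, theta_deg):
    """Beaming-corrected jet kinetic energy, E_k,jet = (theta^2 / 2) E_k."""
    theta = math.radians(theta_deg)
    return 0.5 * theta**2 * E_k

E_iso, E_k = 3e54, 3e55  # erg (Konus-Wind E_iso; E_54 = 30 for the p_p = 2.2 solution)
print(radiative_efficiency(E_iso, E_k))     # ~0.09, i.e. ~9%
print(beaming_corrected_energy(E_k, 0.8))   # ~3e51 erg for theta ~ 0.8 deg
```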
#### 6.1.6 Acceleration Efficiency
Finally, we found that these bursts do not require very efficient proton acceleration, with an efficiency parameter of the order of a few tens, \(\alpha\approx 5-20\). Interestingly, the PIC simulations performed in Asano et al. (2020) in the context of the VHE afterglow emission of GRB 190114C resulted in an even lower efficiency, \(\alpha\sim 100\). This value is obtained by considering the early diffusive process in the Fermi acceleration mechanism. The value of \(\alpha\) in the range \(5-20\), acquired here by setting the maximum proton energy, may imply that high energy protons could be accelerated via MHD turbulence (Demidem et al., 2018; Asano et al., 2020).
### Comparison of SSC and proton-synchrotron components for GRB 221009A
Many authors attempt to explain the VHE observations of GRB afterglows with purely leptonic modeling based on the synchrotron self-Compton process (for instance Zhang et al., 2022; Gonzalez et al., 2022; Ren et al., 2022; Laskar et al., 2023; Kann et al., 2023; Das & Razzaque, 2023; LHAASO-Collaboration et al., 2023). It is argued that explaining the high energy photons with this mechanism is difficult because the modelled SSC flux in the TeV band, which is strongly constrained by the radio, optical and X-ray fluxes, is smaller than the corresponding observed flux after correcting for the EBL absorption (see Gonzalez et al., 2022; Miceli & Nava, 2022). The obtained fits required lower magnetization than the one presented here, typically \(\epsilon_{B}\approx 10^{-4}-10^{-3}\) and higher \(\epsilon_{e}\approx 10^{-2}-10^{-1}\).
In the model we presented, this problem does not exist. Indeed the flux in the TeV band is somewhat independent of the flux at lower energies. However, this freedom comes at the expense of a large kinetic energy and a small fraction of electrons injected into the non-thermal power-law. This ultimately leads to a large external density for the interstellar medium and a small (5 to 10 %) prompt radiative efficiency. We stress that these values are consistent with those found in numerical simulations, as well as afterglow modelling.
## 7 Conclusions
The explosions caused by the core collapse of massive stars are predicted to result in long-GRBs (see _e.g._ Kumar and Zhang, 2015). Some of them, as identified recently, are accompanied by VHE signals at energies \(\gtrsim\) TeV during their afterglow phase. This offers an opportunity to investigate the source of VHE emission from these extremely powerful events. We have explained the early afterglow of GRB 221009A within the framework of a hybrid emission model where the electron-synchrotron process is the source of the low energy component of the spectrum and the VHE component is explained by the proton-synchrotron mechanism with different particle indices. We constrain some parameters of this model by using observations in the optical, X-ray and TeV bands, and demonstrate that the observations can be reproduced by our model. Yet, our modeling requires that protons and electrons have different spectral indices. The key aspect of our model is that the kinetic energy of the bursts needs to be large and the fraction of particles (electrons and protons) accelerated into the power-law must be small.
We then compare the model parameters we obtained for GRB 221009A and for GRB 190114C (Isravel et al., 2022) to underline their similarity. We find that explaining these two bursts with the hybrid model we presented requires a large kinetic energy, \(E\), and density, \(n\), which in turn limits the fractions of particles injected into the power-law by shock acceleration to be small, with \(\xi_{e}\sim 10^{-2}\) and \(\xi_{p}\gtrsim 10^{-2}\). Still, we emphasize that the required energy in both cases, \(\approx 10^{54}-10^{55}\) erg, is not unreasonable. Especially for these extremely bright GRBs, the efficiency of kinetic energy conversion to prompt emission is of the order of a few percent and up to 10%. These values are not exceptional: similar constraints are commonly inferred for GRBs under various assumptions, for example in the context of purely leptonic models, see Cunningham et al. (2020). The existence of a strong magnetic field, characterized by \(\epsilon_{B}\gg\epsilon_{e}\), is crucial for a proton-synchrotron process to explain the TeV emission. We demonstrated that under this assumption the high-energy component of the SSC model is significantly suppressed. However, the SSC process may take precedence in a scenario where \(\epsilon_{B}\ll\epsilon_{e}\).
We therefore conclude that the proton-synchrotron process offers a compelling alternative to radiative models based on the SSC mechanism for explaining the VHE afterglows of GRBs. Further detections of GRBs at VHE by the Cherenkov Telescope Array (CTA) (Knodlseder, 2020) and the LHAASO experiment will allow the free parameters to be further constrained and the model to be contrasted with purely leptonic alternatives.
We acknowledge support from the European Research Council via ERC consolidating grant No. 773062 (acronym O.M.J.).
## Appendix A Inverse-Compton (IC) Scattering Component
First we point out that in our model we can adopt the classical regime for the inverse-Compton process, _i.e._, neglect Klein-Nishina effects. Indeed
\[\gamma_{m,e}h\nu_{m,e}\Gamma^{-1}\sim 0.15~{}E_{54}^{5/33}\epsilon_{p,-1}^{ 13/33}n_{0}^{5/33}\xi_{p}^{-1/11}~{}\mathrm{MeV}.\] (A1)
Hence, the minimum frequency of the up-scattered photons in the observer's frame of reference is given by \(\nu_{m,IC}=2\gamma_{m,e}^{2}\nu_{m,e}\). Using Equations (19) and (21) at \(t=235\) s and \(p_{e}=2.67\), we find
\[h\nu_{m,IC} =1.58\times 10^{17}\,0.025^{\frac{4}{p_{e}-1}}0.06^{\frac{4(p_{e}-2 )}{p_{e}-1}}\,6.44^{\frac{2p_{e}}{p_{e}-1}}\,(2.53\cdot 10^{18})^{\frac{1-p_{p}}{ p_{p}+1}}\,(8.94\cdot 10^{-8})^{-\frac{2}{p_{p}+1}}\,f_{p_{\mathrm{p}}}^{\frac{2p_{p}-2}{ p_{p}+1}}\] \[\quad\times E_{54}^{\frac{p_{p}+5}{p_{p}+4}}\epsilon_{p,-1}^{p_{p }-2}\,n_{0}^{\frac{3-p_{p}}{kp_{p}+4}}\,\xi_{p}^{\frac{4-2p_{p}}{p_{p}+1}}\,\] \[=30~{}E_{54}^{73/132}\epsilon_{p,-1}^{26/33}n_{0}^{7/132}\xi_{p}^{ -2/11}~{}~{}\mathrm{GeV},\] (A2)
Similarly, the characteristic cooling energy of the IC spectrum, \(\nu_{c,IC}=2\gamma_{c,e}^{2}\nu_{c,e}\), is given by
\[h\nu_{c,IC} =1.55\times 10^{8}\left(8.94\cdot 10^{-8}\right)^{-\frac{14}{p_{p }+1}}\left(2.53\cdot 10^{18}\right)^{-\frac{7\left(p_{p}-1\right)}{p_{p}+1}}f_{p_{p}} \,^{\frac{14p_{p}-14}{p_{p}+1}}\,E_{54}\,^{\frac{9p_{p}+37}{4p_{p}+4}}\,\epsilon _{p,-1}\,^{\frac{14p_{p}-14}{p_{p}+1}}\] \[\quad\times n_{0}^{\frac{19-9p_{p}}{4p_{p}+4}}\,\epsilon_{p}\,^{ \frac{28-14p_{p}}{p_{p}+1}}\,,\] \[=6.8\times 10^{-17}\ E_{54}^{577/132}\epsilon_{p,-1}^{182/33}n_{0} ^{-17/132}\xi_{p}^{-14/11}\ \text{eV}.\] (A3)
The maximum flux of the IC spectrum is given by \(F_{\nu_{\text{peak,IC}}}=\frac{1}{3}\sigma_{T}n_{e}rF_{\nu_{\text{peak,e}}}\), and is expressed as
\[F_{\nu_{\text{peak,IC}}} =3.51\times 10^{-32}\left(2.53\cdot 10^{18}\right)^{\frac{2p_{p }-2}{p_{p}+1}}\left(8.94\cdot 10^{-8}\right)^{\frac{4}{p_{p}+1}}f_{p_{p}}\,^{ \frac{4-4p_{p}}{p_{p}+1}}E_{54}\,^{\frac{-5p_{p}-13}{4p_{p}+4}}\,\epsilon_{p,- 1}\,^{\frac{4-4p_{p}}{p_{p}+1}}\,n_{0}\,^{\frac{5p_{p}-3}{4p_{p}+4}}\,\xi_{p} \,^{\frac{4p_{p}-8}{p_{p}+1}},\] \[=3.2\times 10^{-25}\ E_{54}^{-245/132}\epsilon_{p,-1}^{-52/33} \xi_{p}^{4/11}n_{0}^{85/132}\ \text{erg cm}^{-2}\ \text{s}^{-1}\ \text{Hz}^{-1}.\] (A4)
Considering Equations (A2) and (A3), clearly \(\nu_{m,IC}>\nu_{c,IC}\). The IC flux at 1 TeV for \(p_{e}=8/3\) and \(t=235\) s is then estimated to be
\[[\nu F_{\nu}]_{IC}|_{1\text{ TeV}} =2.28\times 10^{-5}\,0.543^{\frac{p_{p}}{2}}\,10^{\frac{2-7p_{e}}{4} }\,f_{p_{e}}\,^{2p_{e}-2}\,E_{54}\,^{\frac{3p_{p}+2}{8}}\epsilon_{B,-1}\,^{ \frac{p_{e}-6}{4}}\epsilon_{e,-2}\,^{2p_{e}-2}n_{0}\,^{\frac{2-p_{e}}{8}}\xi_ {e}\,^{4-2p_{e}},\] \[=1.7\times 10^{-10}\ E_{54}^{5/4}\epsilon_{e,-2}^{10/3}\epsilon_{B,- 1}^{-5/6}n_{0}^{-1/12}\xi_{e}^{-4/3}\ \ \text{erg cm}^{-2}\ \text{s}^{-1}.\] (A5)
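For convenience, the simplified numerical forms of Equations (A2)-(A5) can be evaluated directly; the sketch below reproduces the quoted coefficients at the fiducial normalizations \(E_{54}=\epsilon_{p,-1}=\epsilon_{e,-2}=\epsilon_{B,-1}=n_{0}=\xi_{p}=\xi_{e}=1\), which are assumptions of the sketch rather than fitted values.

```python
def ic_scalings(E54=1.0, eps_p1=1.0, eps_e2=1.0, eps_B1=1.0, n0=1.0, xi_p=1.0, xi_e=1.0):
    """Simplified IC scaling relations (A2)-(A5) at t = 235 s and p_e = 8/3.
    eps_p1 = eps_p/0.1, eps_e2 = eps_e/0.01, eps_B1 = eps_B/0.1, E54 = E/1e54 erg."""
    h_nu_m_IC  = 30.0    * E54**(73/132)   * eps_p1**(26/33)   * n0**(7/132)   * xi_p**(-2/11)    # GeV
    h_nu_c_IC  = 6.8e-17 * E54**(577/132)  * eps_p1**(182/33)  * n0**(-17/132) * xi_p**(-14/11)   # eV
    F_peak_IC  = 3.2e-25 * E54**(-245/132) * eps_p1**(-52/33)  * n0**(85/132)  * xi_p**(4/11)     # erg/cm^2/s/Hz
    nuFnu_1TeV = 1.7e-10 * E54**(5/4) * eps_e2**(10/3) * eps_B1**(-5/6) * n0**(-1/12) * xi_e**(-4/3)  # erg/cm^2/s
    return h_nu_m_IC, h_nu_c_IC, F_peak_IC, nuFnu_1TeV

# At the fiducial normalizations the reference coefficients are recovered:
print(ic_scalings())   # (30.0, 6.8e-17, 3.2e-25, 1.7e-10)
```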
|
2304.06499
|
Altitude-Loss Optimal Glides in Engine Failure Emergencies -- Accounting
for Ground Obstacles and Wind
|
Engine failure is a recurring emergency in General Aviation and fixed-wing
UAVs, often requiring the pilot or remote operator to carry out carefully
planned glides to safely reach a candidate landing strip. We tackle the problem
of minimizing the altitude loss of a thrustless aircraft flying towards a
designated target position. Extending previous work on optimal glides without
obstacles, we consider here trajectory planning of optimal gliding in the
presence of ground obstacles, while accounting for wind effects. Under
simplifying model assumptions, in particular neglecting the effect of turns, we
characterize the optimal solution as comprising straight glide segments between
iteratively-determined extreme points on the obstacles. Consequently, the
optimal trajectory is included in an iteratively-defined reduced visibility
graph, and can be obtained by a standard graph search algorithm, such as A$^*$.
We further quantify the effect of turns to verify a safe near-optimal glide
trajectory. We apply our algorithm on a Cessna 172 model, in realistic
scenarios, demonstrating both the altitude-loss optimal trajectory calculation,
and determination of airstrip reachability.
|
Daniel Segal, Aharon Bar-Gill, Nahum Shimkin
|
2023-04-13T13:14:13Z
|
http://arxiv.org/abs/2304.06499v1
|
Altitude-Loss Optimal Glides in Engine Failure Emergencies - Accounting for Ground Obstacles and Wind
###### Abstract
Engine failure is a recurring emergency in General Aviation and fixed-wing UAVs, often requiring the pilot or remote operator to carry out carefully planned glides to safely reach a candidate landing strip. We tackle the problem of minimizing the altitude loss of a thrustless aircraft flying towards a designated target position. Extending previous work on optimal glides without obstacles, we consider here trajectory planning of optimal gliding in the presence of ground obstacles, while accounting for wind effects. Under simplifying model assumptions, in particular neglecting the effect of turns, we characterize the optimal solution as comprising straight glide segments between iteratively-determined extreme points on the obstacles. Consequently, the optimal trajectory is included in an iteratively-defined _reduced visibility graph_, and can be obtained by a standard graph search algorithm, such as A*. We further quantify the effect of turns to verify a safe near-optimal glide trajectory. We apply our algorithm on a Cessna 172 model, in realistic scenarios, demonstrating both the altitude-loss optimal trajectory calculation, and determination of airstrip reachability.
Daniel Segal, the lead author of this paper and a former master's student of the two other authors, tragically passed away before this work was completed. The current paper presents the last version written by Daniel, with minor style modifications. This work was a follow-up to Daniel's M.Sc. thesis, which was published as an article in the Journal of Guidance, Control and Dynamics (2019), and won the 2018 best graduate student paper award from the Israeli Association for Automatic Control. This arXiv publication is dedicated to the memory of Daniel Segal, an outstanding engineer, an accomplished researcher, and a friend.
**Keywords**: Optimal gliding trajectory, engine cutoff emergency, trajectory planning, obstacle avoidance, visibility graph
## Nomenclature
\begin{tabular}{l l l} ALO & = & Altitude Loss Optimal \\ FTP & = & Free Tangent Point \\ \(C_{L}\), \(C_{D0}\) & = & Lift and profile drag coefficients \\ \(D\), \(L\) & = & Drag and lift force \\ \(f_{g}\), \(f_{0}\) & = & Glide slope function and sink rate function, respectively \\ \(J\) & = & Cost function \\ \(K\) & = & Induced drag coefficient \\ \(m\) & = & Aircraft's mass \\ \end{tabular}
In reviewing related literature, we mostly address works that consider this problem in conjunction with obstacle avoidance. Additional references for optimal gliding without obstacles can be found in [1].
Grid-based methods divide the configuration space into cells of given size or resolution, and search for the optimal path over the graph that connects adjoining cells. The scheme proposed in [2] uniformly discretized the state space, employed flight primitives to connect grid points, and utilized the Dijkstra algorithm for optimal graph search. This allows for avoiding ground obstacles, but the method is computationally inefficient. Papers [4, 5] suggest and examine the use of genetic algorithms for searching over a dense grid.
Sampling-based motion planning algorithms have proved effective in high dimensional spaces. In [6, 7] the authors employ variants of the RRT* algorithm for emergency landing path planning. Such probabilistic optimization methods can only provide probabilistic guarantees on their convergence times.
From a control-theoretic viewpoint, in [8] the authors formulate the emergency landing problem as a Hamiltonian-Jacobi-Bellman (HJB) reachability problem in 6DOF space. This problem can be generally solved numerically using gridding of the state space; however, the resulting high dimensionality leads to prohibitive computation times. A sub-optimal solution is proposed for certain sub-problems using the concept of flight primitives, partially relying on the formulation in [2]. In [3] the authors time-discretize the dynamic state equations, either in nonlinear or linearized form, and apply a general optimal control solver to compute the solution. The effectiveness of this scheme is demonstrated for short-range landing scenarios with no obstacles.
Roadmap methods for obstacle avoidance employ geometric constructs to designate specific points in the configuration space, and then restrict the search to the graph that connects these points. The papers [9, 10, 11] consider path planning for emergency landing using visibility graphs. Starting with a 2D visibility graph, the authors propose heuristic extensions of that graph to the 3D space problem. Searching these graphs generally leads to suboptimal trajectories. This general approach is akin to ours; however, the emphasis in the present paper is on characterizing and finding _optimal_ paths, under our modeling assumptions and a more specific objective function. Related work in [43] suggests preflight contingency planning for engine failure: trajectories that avoid no-fly zones are determined by setting way-points, which are connected by wind-dependent trochoidal paths. We note that the visibility graph approach has also been used for 3D path planning for _powered_ aircraft, see for example the recent papers [18, 19, 20, 22, 23] and references therein.
### _Main Contribution and Paper Outline_
In this work, we establish analytic results that allow deriving an efficient algorithm that computes the ALO gliding trajectory, subject to constant winds, and in the presence of general shaped terrain-induced obstacles. Similar to [1], we employ an approximate, problem-specific aerodynamic model of the aircraft; this model uses the aircraft speed (or angle of attack) and roll angle as instantaneous controls. As shown in [1], the ALO solution between two points in the absence of obstacles is a fixed-heading trajectory with constant velocity (the magnitude of which depends on the wind intensity and direction). This free-space optimal fixed-heading glide solution serves as the basic component of the optimal trajectory, which generally comprises straight flight segments between obstacles. Our analysis initially neglects the effect of turns, assuming that the required direction changes can be carried out instantaneously and with no altitude loss.
The proposed algorithm utilizes the roadmap approach for obstacle avoidance, by adapting and extending the visibility graph approach (e.g., [27]), which is known to be optimal for planar problems. This approach allows searching only through salient points on the obstacles, rather than creating and searching through a dense discretization of the entire space. The underlying idea is to create a sparse search graph, whose intermediate vertices constitute on-contour points on the ground obstacles; the links between these points correspond to fixed-heading glide segments, which comprise the trajectories to candidate landing sites. This graph is generated iteratively, starting from an initial point of known altitude, the engine cutoff point.
Our results imply that a shortest path search over the generated graph indeed yields the required ALO trajectory. Optimality does not depend on the shape of the obstacles; hence the method is applicable for general terrain maps.
To further account for the effect of aircraft turns between straight flight segments, we develop an estimate for the altitude loss associated with such turns. These estimates can either be used to provide safer elevation loss guarantees for a previously-computed trajectory, or more generally - be superposed onto the search graph nodes (as these correspond to aircraft turns) during the graph building and search process. Thus, near-optimal trajectories, that also accommodate the effect of turns, are obtained.
To summarize, the main contributions of this paper are: (1) A method to calculate a local 2D obstacle map via intersections between wind-induced manifolds and the terrain ahead, and then to find the on-contour extreme points that serve as graph nodes; solving an optimal graph search (OGS) over such evolving grids enables obstacles of general shape to be bypassed optimally. (2) We obtain a novel result, Theorem 3 in Subsection III.B, which states that it is enough to compute only two points to circumvent any terrain-induced local obstacle. (3) We convert the 3D obstacle problem into a sequence of 2D problems over a digital map and solve them by employing the A\({}^{*}\) algorithm. We show that the trajectory obtained from this algorithmic procedure is indeed the optimal trajectory in terms of altitude loss, subject to wind and terrain elevations. (4) We formulate an extension to the proposed algorithm to consider the effect of turns.
The paper is organized as follows. In Section II we formulate our optimization problem, including the notion of terrain-induced obstacles and wind modeling. In Section III we establish some theoretical results that substantiate our approach, and present the proposed ALO trajectory planning algorithm that accounts for ground obstacles and winds. To generate the terrain-induced obstacles, we employ uniformly-spaced digital terrain mapping. In Section IV we extend our approach to consider the effect of turns and obtain sub-optimal trajectories. Next, in Section V we demonstrate our results on realistic engine cutoff scenarios. We summarize our main contributions and conclusions in section VI. Appendices A, B and C include some additional details on modeling and sample performance analyses. Appendix D describes an initial flight experiment that was conducted with a Cessna 172 aircraft, towards validating the proposed concept and algorithm.
## II Model and Problem Statement
We proceed to present our trajectory planning problem and its mathematical modeling, outline our assumptions, and formulate the optimization problem. Our interest is in computing an efficient gliding trajectory, in terms of minimal altitude loss, from a current location \(\mathbf{A}\) to a candidate landing-strip \(\mathbf{B}\), subject to wind and terrain-induced obstacles. The aerodynamic model that underlies this work is similar to that of [1], and is briefly outlined in Appendix A for completeness.
### _Frames of Reference_
We denote the Ground frame as a Local Level Local North (LLLN) inertial frame, with \(X\) pointing to the North, \(Y\) to East, and \(Z\) downwards, i.e., NED coordinates. Its origin is located at the projection of the aircraft center of mass on the ground at sea level altitude at time \(t=0\). Throughout this work, we consider a constant wind vector \((W_{X},W_{Y})\) of the air-mass relative to the ground frame. Noting our approximation of \(\gamma\approx 0\) (see Appendix A, Equations (A.1,A.2)), we have
\[\dot{X} =V\cos(\psi)+W_{X}\] \[\dot{Y} =V\sin(\psi)+W_{Y}\]
where \(\psi\) and \(V\) are the aircraft heading and velocity relative to the air-mass, and \(\dot{X},\dot{Y}\) are the north and east components of the aircraft velocity in the Ground frame.
The flight heading \(\psi_{g}\) in the Ground frame is given by
\[\psi_{g}=\arctan 2\,(\dot{Y},\dot{X}) \tag{1}\]
where \(\arctan 2\,\) is the standard four-quadrant arctangent. Note that maintaining constant velocity \(V\) and constant flight direction \(\psi\) implies constant ground heading, \(\psi_{g}\), in the Ground frame.
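As a minimal numerical sketch of these kinematic relations (all parameter values below are placeholders):

```python
import numpy as np

def ground_kinematics(V, psi, W_X, W_Y):
    """Ground-frame velocity components, ground heading (Eq. 1) and ground speed for
    airspeed V [m/s], air-relative heading psi [rad] and constant wind (W_X, W_Y) [m/s]."""
    X_dot = V * np.cos(psi) + W_X      # North component
    Y_dot = V * np.sin(psi) + W_Y      # East component
    psi_g = np.arctan2(Y_dot, X_dot)   # ground heading
    V_g   = np.hypot(X_dot, Y_dot)     # ground speed
    return X_dot, Y_dot, psi_g, V_g

# Example (placeholder values): 35 m/s airspeed heading north, 10 m/s wind blowing east
print(ground_kinematics(V=35.0, psi=0.0, W_X=0.0, W_Y=10.0))
```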
### _Terrain-Induced Obstacles_
Terrain-induced obstacles, or ground obstacles, can be naturally represented by an elevation map. We employ a digital map, e.g., the Shuttle Radar Topography Mission (SRTM) database [28] as the source for the elevation data. SRTM provides elevation values with a spatial resolution of about 30 m and elevation accuracy better than 16 m. By performing standard interpolation between the nearest samples in the discrete map _dtm\([m,n]\)_, we can produce a continuous elevation function, _dtm\((X,Y)\)_. The elevation map imposes the constraint
\[-Z>\textit{dtm}(X,Y)+\textit{Clearance}\]
on the feasible trajectory. The _Clearance_ is meant to provide a safe distance from the ground. It should include the altitude error of the elevation data, the aircraft instruments altitude error and the trajectory tracking error. We use the notation \(\textit{DTM}(X,Y)\) to denote the elevation map augmented by this clearance; that is \(\textit{DTM}(X,Y)\triangleq\textit{dtm}(X,Y)+\textit{Clearance}\).
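A possible realization of the continuous, clearance-augmented elevation function, assuming a uniformly spaced grid and bilinear interpolation; the grid geometry and the 50 m clearance below are assumptions of this sketch, not values taken from the paper.

```python
import numpy as np

def make_DTM(dtm_grid, x0, y0, dx, dy, clearance=50.0):
    """Continuous DTM(X, Y) = dtm(X, Y) + Clearance, via bilinear interpolation of the
    discrete samples dtm_grid[m, n] located at (x0 + m*dx, y0 + n*dy).  The clearance
    should cover map, instrument and trajectory-tracking errors."""
    M, N = dtm_grid.shape

    def DTM(X, Y):
        fm = np.clip((X - x0) / dx, 0.0, M - 1 - 1e-9)
        fn = np.clip((Y - y0) / dy, 0.0, N - 1 - 1e-9)
        m, n = int(fm), int(fn)
        tm, tn = fm - m, fn - n
        z = ((1 - tm) * (1 - tn) * dtm_grid[m, n]     + tm * (1 - tn) * dtm_grid[m + 1, n]
             + (1 - tm) * tn     * dtm_grid[m, n + 1] + tm * tn       * dtm_grid[m + 1, n + 1])
        return z + clearance

    return DTM
```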
### _The Optimization Problem_
In the basic problem considered in this paper, we are given the current aircraft position \(P_{A}\) and elevation \(Z_{A}=Z(0)\), and a candidate landing site location, \(P_{B}\). To maximize attainability, we aim to minimize the altitude loss between the current location \(P_{A}\) and the destination \(P_{B}\).
We employ the model, derived in [1] and detailed also in Appendix A. This derivation is subject to:
**Assumption 1**.: _(a) We remove the constraint on \(\dot{\psi}\) (Equation (B.1)), allowing \(\psi\) to change freely. (b) The effect of boundary conditions in terms of initial and final velocity vectors is neglected. Consequently, both the aircraft velocity and pose at the initial and final points are not constrained. (c) The velocity control variable is limited to the feasible flight envelope of the aircraft (\(V_{\textit{stall}}(n)\leq V\leq V_{max}\)). (d) We consider constant air density. (e) We apply an optimistic cost on turns (\(\phi=0\)). (f) We consider the long-duration segment of the two time-scales of the problem; therefore, the change rates of the fast variables are eliminated: \(\dot{\gamma}\cong 0\) and \(\dot{V}\cong 0\). (g) We adopt the small-angle assumption for the FPA, \(\cos(\gamma)\approx 1\)._
From Appendix A, the derived sink-rate function is
\[\dot{Z}=f_{0}(V,\phi)=K_{SR}\left(\frac{V^{4}+n(\phi)^{2}V_{0}^{4}}{V}\right)\]
where \(K_{SR}=\frac{\rho SC_{Do}}{2mg}\), \(V_{0}=\sqrt{\frac{2mg}{\rho S}\sqrt{\frac{K}{C_{Do}}}}\), and \(n(\phi)=\frac{1}{\cos(\phi)}\). The stall limit is given by Eq. (A.8):
\[V_{stall}(n(\phi))=\sqrt{\frac{2mg}{\rho SC_{Lmax}}n(\phi)}\]
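The following sketch evaluates the sink-rate function and the stall limit; the aerodynamic constants are rough Cessna-172-like placeholders, not the values used in the experiments of Section V.

```python
import numpy as np

# Rough Cessna-172-like constants (placeholder assumptions, SI units)
rho, S, m, g    = 1.225, 16.2, 1000.0, 9.81
C_D0, K, C_Lmax = 0.03, 0.045, 1.6

K_SR = rho * S * C_D0 / (2 * m * g)
V0   = np.sqrt(2 * m * g / (rho * S) * np.sqrt(K / C_D0))   # optimal speed in still air

def sink_rate(V, phi=0.0):
    """f_0(V, phi): altitude-loss rate [m/s] at speed V [m/s] and bank angle phi [rad]."""
    n = 1.0 / np.cos(phi)                 # load factor n(phi)
    return K_SR * (V**4 + n**2 * V0**4) / V

def V_stall(phi=0.0):
    """Stall speed limit, Eq. (A.8)."""
    return np.sqrt(2 * m * g / (rho * S * C_Lmax) / np.cos(phi))

print(V0, sink_rate(V0), V_stall())       # roughly 35 m/s, 2.6 m/s, 25 m/s
```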
The control variables in our model are the flight velocity, \(V(t)\), and the flight heading relative to the air-mass, \(\psi(t)\), while \(X\), \(Y\) and \(Z\) are the state variables. We define the cost as the altitude loss from \(P_{A}\) to \(P_{B}\). As we neglect the effect of turns on altitude loss, we nullify the bank-angle variable in the sink-rate function, Eq. (A.11); namely, \(\dot{Z}=f_{0}(V(t),\phi(t))\cong f_{0}(V(t),0)\). Therefore, the altitude loss is given by the integral of the sink rate \(\dot{Z}=f_{0}(V(t),0)\) along the trajectory:
\[J(V)=Z(t_{f})-Z(0)=\int_{0}^{t_{f}}f_{0}(V(t),0)dt \tag{2}\]
We can now state our optimization problem:
\[\begin{array}{ll}\min_{V(t),\psi(t)}&J(V)\\ \text{subject to}&\dot{X}=V\cos(\psi)+W_{X}\\ &\dot{Y}=V\sin(\psi)+W_{Y}\\ &\dot{Z}=f_{0}(V(t),0)\\ &V_{stall}(1)\leq V\leq V_{max}\\ &-Z>\textit{DTM}(X,Y)\\ &(X(0),Y(0))=P_{A}\\ &Z(0)=Z_{A}\\ &(X(t_{f}),Y(t_{f}))=P_{B}\end{array} \tag{3}\]
Note that Assumption 1 turns the model Eq. (A.1)-Eq. (A.6) into the approximate model in Eq. (3) here-above. Following our assumption that turns are instantaneous and do not incur altitude loss, our initial conditions do not include the aircraft orientation. We address this complementary effect in Section IV.
Solving this optimization problem means that we aim at reaching the candidate landing site \(P_{B}\) with minimal altitude loss.
### _The ALO Free-Space Glide_
Let us first recall that the optimal gliding trajectory of an aircraft flying between two given points in fixed wind and in the absence of obstacles is a straight path with fixed heading and speed.
**Theorem 1** (Theorems 1 and 2 in [1]).: _Consider the problem (3) of an aircraft flying inside a constant velocity air-mass \((W_{X},W_{Y})\) subject to an imposed ground destination, as well as minimum (stall) and maximum velocity constraints. In the absence of ground obstacles, the optimal trajectory in the sense of minimal altitude loss must maintain a constant velocity and fixed heading. Furthermore, the optimal flight speed is given in Eq. (10) below._
These straight path segments will serve as our building blocks for the optimal path in the presence of obstacles. The horizontal kinematics of an aircraft flying inside an air-mass with constant velocity and fixed-heading is illustrated in Fig. 1. The aircraft must follow the ground track from the current aircraft location \(\mathbf{A}\) in the direction of the ground velocity vector, \(\mathbf{V_{g}}\), subject to the wind vector \(\mathbf{W}\) to reach its destination at location \(\mathbf{B}\).
In constant-speed and fixed-heading flight, the aircraft speed in the Ground frame, \(V_{g}\), is given by:
\[V_{g}=||\mathbf{V}+\mathbf{W}|| \tag{4}\]
In the absence of obstacles, the ALO flight according to Theorem 1 is given by the fixed-heading, \(\psi_{g}\), from \(\mathbf{A}\) to \(\mathbf{B}\).
The wind velocity, \(\mathbf{W}\), can be expressed in terms of the "in-plane" component, \(W_{\parallel}\), in the direction of the flight heading in the Ground frame, \(\psi_{g}\), and the "crosswind" component, \(W_{\perp}\), perpendicular to the flight heading, namely in direction \(\psi_{g}+\frac{\pi}{2}\):
\[W_{\perp}=W_{\perp}(\psi_{g})=-W_{X}\sin(\psi_{g})+W_{Y}\cos( \psi_{g}) \tag{5}\] \[W_{\parallel}=W_{\parallel}(\psi_{g})=W_{X}\cos(\psi_{g})+W_{Y} \sin(\psi_{g}) \tag{6}\]
The positive in-plane wind is equivalent to "tailwind", and the negative in-plane wind is equivalent to "headwind". Employing Eqs. (5)-(6), the ground velocity, \(V_{g}\), can be expressed as
\[V_{g}=\sqrt{V^{2}-W_{\perp}^{2}}+W_{\parallel} \tag{7}\]
As shown in Appendix A, Eqs. (A.9) and (A.11), the altitude loss rate under these conditions is given by \(\dot{Z}=f_{0}(V,0)=V_{g}f_{g}(V)\) where \(f_{g}(V)\) is the glide slope function. This function is given explicitly in terms of the glide velocity as
\[f_{g}(V)=\frac{K_{SR}\,\frac{V^{4}+V_{0}^{4}}{V}}{\sqrt{V^{2}-W_{\perp}^{2}}+W_{\parallel}} \tag{8}\]
where \(K_{SR}=\frac{\rho SC_{D0}}{2mg}\) and \(V_{0}\) denotes the optimal velocity in still air (Equation (A.10)).
Fig. 1: Fixed heading and fixed velocity glide analysis in free space – top view
To obtain the minimum of \(f_{g}(V)\), i.e., the optimal (ALO) glide slope, we numerically solve the following sixth-degree, speed-to-fly equation [1, Eq. (36)]
\[V^{6}-\frac{3}{2}V^{4}W_{\perp}^{2}+\frac{1}{2}W_{\parallel}\sqrt{V^{2}-W_{ \perp}^{2}}\left(3V^{4}-V_{0}^{4}\right)-V^{2}V_{0}^{4}+\frac{1}{2}W_{\perp}^{ 2}V_{0}^{4}=0 \tag{9}\]
which has a unique solution for \(V\in(V_{b},\infty)\), where \(V_{b}=\sqrt{W_{\perp}^{2}+\max(0,-W_{\parallel})^{2}}\) is the minimal speed for which \(V_{g}\) in Eq. (7) is positive and well defined.
To satisfy the minimum and maximum velocity constraints, \((V_{stall},V_{max})\), we must limit the ALO glide velocity. Thus, the ALO glide velocity, \(V_{opt}\), is given by:
\[V_{opt}=V_{opt}(W_{\parallel},W_{\perp})=\min(\max(V_{stall},V^{*}),V_{max}) \tag{10}\]
while \(V^{*}\) is the solution of the speed-to-fly equation (9). The ALO glide slope in free space flight in the direction \(\psi_{g}\) is given by \(f_{g}(V_{opt}(W_{\parallel}(\psi_{g}),W_{\perp}(\psi_{g})))\).
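Putting Eqs. (5)-(10) together, a numerical sketch of the ALO glide computation for a given ground heading might look as follows; it reuses the placeholder constants `K_SR`, `V0` and `V_stall` from the previous sketch, assumes a placeholder `V_max`, and solves Eq. (9) by simple root bracketing.

```python
import numpy as np
from scipy.optimize import brentq

V_max = 80.0   # assumed placeholder maximum speed [m/s]

def wind_components(psi_g, W_X, W_Y):
    W_par  = W_X * np.cos(psi_g) + W_Y * np.sin(psi_g)    # in-plane wind, Eq. (6)
    W_perp = -W_X * np.sin(psi_g) + W_Y * np.cos(psi_g)   # crosswind, Eq. (5)
    return W_par, W_perp

def speed_to_fly(W_par, W_perp):
    """Solve the sixth-degree speed-to-fly equation (9) on (V_b, inf) and clip, Eq. (10)."""
    def F(V):
        s = np.sqrt(V**2 - W_perp**2)
        return (V**6 - 1.5 * V**4 * W_perp**2
                + 0.5 * W_par * s * (3 * V**4 - V0**4)
                - V**2 * V0**4 + 0.5 * W_perp**2 * V0**4)
    V_b = np.sqrt(W_perp**2 + max(0.0, -W_par)**2)
    V_star = brentq(F, V_b + 1e-6, 10.0 * (V0 + abs(W_par) + abs(W_perp)))
    return min(max(V_stall(), V_star), V_max)

def glide_slope(psi_g, W_X, W_Y):
    """ALO glide slope f_g(V_opt) for ground heading psi_g, Eq. (8)."""
    W_par, W_perp = wind_components(psi_g, W_X, W_Y)
    V = speed_to_fly(W_par, W_perp)
    return K_SR * (V**4 + V0**4) / V / (np.sqrt(V**2 - W_perp**2) + W_par)
```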
## III Solution Concept and Algorithm
In this section, we derive the proposed algorithm to obtain the optimal trajectory in terms of minimal altitude-loss that avoids ground obstacles of general shape, in the presence of possibly intense wind. Starting from the initial position and altitude, our algorithm first creates a local 2D obstacle map by calculating the ALO straight-path trajectory to every map coordinate, and obtains the obstacles as those coordinates for which this trajectory is below ground level. Next, we find the set of _free tangent points_ of these obstacles, which serve as the next vertices to be explored. In fact, we will show that at most two of these points need to be considered for each connected obstacle. We will further show that a fixed heading segment from the current position to one of these vertices must be included in the ALO trajectory. We may now iteratively continue to create a graph composed of such fixed heading segments, until the algorithm explores the destination and finds an ALO trajectory using a graph search algorithm.
The proposed algorithm bears similarity to the classical 2D shortest path navigation problem which has been extensively explored in the literature, in particular for polygonal obstacles [27]. However, in our problem, the so-called visibility road map cannot be calculated directly as the obstacles effectively depend on the current altitude of the aircraft. Therefore, the obstacle map depends on the currently explored vertex. Also, here we consider a non-Euclidean cost function, the altitude loss, which depends on the direction of the flight relative to the wind vector. We therefore justify our construction directly, based on Theorem 1 above.
In Subsection III-A we describe the concept of the ALO manifold and define the obstacles and the free space. In Subsection III-B we obtain the theoretical results that together with the definition of a feasible ("safe") path above a discrete digital map, Subsection III-C, enable us to obtain the ALO trajectory optimization algorithm in Subsection III-D.
### _The ALO Manifold and Local Obstacle Map_
In Subsection II-D we have identified the ALO glide velocity, \(V_{opt}\), which leads to minimal altitude-loss for flight in a given heading \(\psi_{g}\). Calculating the ALO glide slope \(f_{g}(V_{opt})\) enables to obtain the minimal altitude-loss rate in every direction.
To obtain the relevant 2D obstacle map from the current position, it is convenient to first define the _ALO manifold_, \(\textit{M}=\{\textit{M}(x,y)\}\), which is the cone-like surface of minimal altitude-loss to every displacement \((x,y)\). Thus, \(\textit{M}(x,y)\) is the altitude loss obtained by ALO glide from current position projection in the 2D horizontal plane, \(P\), to some displacement, \(P+(x,y)\), given by
\[\textit{M}(x,y) =||(x,y)||\cdot f_{g}(V_{opt}(W_{\parallel}(\psi_{g}),W_{\perp}( \psi_{g}))) \tag{11}\] \[\psi_{g} =\arctan 2\left(y,x\right) \tag{12}\]
Here \(f_{g}(V)\) and \(V_{opt}\) are given by Eqs. (8) and (10). An example of the ALO manifold for a Cessna 172 model is given in Fig. 2.
We proceed to define the local obstacle function _LO_. For a given position projection onto the 2D horizontal plane, \(P_{A}=(X_{A},Y_{A})\), and altitude \(Z_{A}\), we define the local obstacles function, \(\textit{LO}(X,Y;P_{A},Z_{A})\), as
\[\textit{LO}(X,Y;P_{A},Z_{A})=-Z_{A}-\textit{M}(X-X_{A},Y-Y_{A})-\textit{DTM}(X,Y) \tag{13}\]
where the ALO manifold, \(\textit{M}(x,y)\), is given by Eq. (11). Thus, \(\textit{LO}(X,Y;P_{A},Z_{A})\) is the elevation above ground level due to an ALO straight glide from \(P_{A}\) to \((X,Y)\). An example of the obstacle function is given in Fig. 3, where we show an intersection between a DTM and an ALO manifold. In this example the aircraft is located at \(P_{A}=(0,0)\) and altitude 2000 m above sea level. The regions where the ground elevation is above the ALO manifold are the local obstacles as viewed from the current position.
Employing the definition of \(\textit{LO}(X,Y;P_{A},Z_{A})\) we can define the 2D local obstacle map.
**Definition 1** (Obstacles and Free space).:
_Given the current position and altitude \((P_{A},Z_{A})\), define_
1. _Free Space_ : \(\textit{FREE}=\{(X,Y):\textit{LO}(X,Y;P_{A},Z_{A})\geq 0\}\)__
2. _Local Obstacles Set_ : \(\textit{OBST}=\{(X,Y):\textit{LO}(X,Y;P_{A},Z_{A})<0\}\)__
3. _A single obstacle,_ \(\mathcal{O}\)_, is a connected subset of the Local Obstacles Set._
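Combining Eqs. (11)-(13) with Definition 1, a sketch of the local obstacle map computation over a coordinate grid; `DTM` and `glide_slope` are the helper routines sketched above, and the wind components are inputs.

```python
import numpy as np

def local_obstacle_map(P_A, Z_A, Xs, Ys, DTM, W_X, W_Y):
    """Sample LO(X, Y; P_A, Z_A) on a coordinate grid (Eqs. 11-13).  Z_A is the NED
    down-coordinate of the aircraft, so -Z_A is its altitude above sea level.
    Cells with LO < 0 belong to OBST, cells with LO >= 0 to FREE (Definition 1)."""
    X_A, Y_A = P_A
    LO = np.empty((len(Xs), len(Ys)))
    for i, X in enumerate(Xs):
        for j, Y in enumerate(Ys):
            dx, dy = X - X_A, Y - Y_A
            psi_g = np.arctan2(dy, dx)                              # Eq. (12)
            M_xy  = np.hypot(dx, dy) * glide_slope(psi_g, W_X, W_Y) # Eq. (11)
            LO[i, j] = -Z_A - M_xy - DTM(X, Y)                      # Eq. (13)
    return LO
```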
### _Trajectory Optimization - Optimal Obstacle Avoidance_
In this section, we lay out the theoretical foundation for the proposed trajectory optimization algorithm. Throughout this section we consider a fixed starting point \((P_{A},Z_{A})\), which stands for the current position and elevation
Fig. 3: Local obstacle function, \(\textit{LO}(X,Y;P_{A},Z_{A})\), example
Fig. 2: Altitude loss contours of the ALO manifold for a Cessna 172, given a 20 m/s wind heading east
of the aircraft along the planned trajectory. In the following we refer to the state projection in the 2D horizontal plane, \(P=(x,y)\), as a _point_ or _position_.
**Definition 2** (Convex Combination relative to \(P_{A}\)).: _For a given point \(P\in R^{2}\), the convex combination of \(P\) to \(P_{A}\) with parameter \(\lambda\geq 0\) is \(P(\lambda)\triangleq((1-\lambda)P_{A}+\lambda P)\); thus, \(P(1)=P\)._
**Definition 3** (Direct Reachability).: _The point \(P\) is directly reachable from \(P_{A}\) if we have \(P(\lambda)\in\text{FREE}\) for all \(\lambda\in[0,1]\)._
**Definition 4** (Obstacle Boundary Tangents).: _For a point \(P\in R^{2}\) and an obstacle \(\mathcal{O}\) with \(P\in\partial\mathcal{O}\), let \((x(s),y(s))\) be a parametrization of the curve \(\partial\mathcal{O}\) around \(P=(x(0),y(0))\). Let \((x^{\prime}_{+}(0),y^{\prime}_{+}(0))\) and \((x^{\prime}_{-}(0),y^{\prime}_{-}(0))\) be the derivatives of the curve \((x(s),y(s))\) w.r.t. \(s\) at \(s=0^{+}\) and \(s=0^{-}\), respectively. The obstacle boundary tangents of \(P\) are \(P^{+}(\lambda)=P+\lambda(x^{\prime}_{+}(0),y^{\prime}_{+}(0))\) and \(P^{-}(\lambda)=P+\lambda(x^{\prime}_{-}(0),y^{\prime}_{-}(0))\)._
**Definition 5** (FTP - Free Tangent Point).: _For a point \(P_{T}\in R^{2}\), let \(P_{T}(\lambda)\) denote its convex combination to \(P_{A}\) as in Definition 2. A point \(P_{T}\) is a free tangent point to an obstacle if_
1. \(P_{T}\) _is directly reachable from_ \(P_{A}\)_;_
2. \(P_{T}\) _is on the boundary of OBST:_ \(P_{T}\in\partial\text{OBST}\)_;_
3. _For some_ \(\varepsilon>0\) _small enough and all_ \(\lambda\in(1,1+\varepsilon)\)_,_ \(P_{T}(\lambda)\) _is directly reachable from_ \(P_{A}\) _and_ \(P_{T}(\lambda)\notin\partial\text{OBST}\)_._
_In addition, if there exists an obstacle \(\mathcal{O}\) such that \(P_{A}\in\partial\mathcal{O}\), let \(P_{A}^{+}(\lambda)\) and \(P_{A}^{-}(\lambda)\) be the obstacle boundary tangents of \(P_{A}\) as in Definition 4. The point \(P_{T}\) is an FTP if:_
1. _There exists_ \(\varepsilon_{1}>0\) _such that for all_ \(\lambda\in(0,\varepsilon_{1})\) _we have_ \(P_{A}^{+}(\lambda)\in\text{FREE}\)_._
2. \(P_{T}\) _is directly reachable from_ \(P_{A}\)__
3. \(P_{T}\) _equals_ \(P_{A}^{+}\left(\max(\inf_{\lambda>0}\{\lambda:P_{A}^{+}(\lambda)\notin\partial \text{OBST}\},\varepsilon_{2})\right)\)_, for some_ \(\varepsilon_{2}>0\) _small enough_
_Or if:_
1. _There exists_ \(\varepsilon_{1}>0\) _such that for all_ \(\lambda\in(-\varepsilon_{1},0)\) _we have_ \(P_{A}^{-}(\lambda)\in\text{FREE}\)_._
2. \(P_{T}\) _is directly reachable from_ \(P_{A}\)__
3. \(P_{T}\) _equals_ \(P_{A}^{-}\left(\min(\sup_{\lambda<0}\{\lambda:P_{A}^{-}(\lambda)\notin\partial \text{OBST}\},-\varepsilon_{2})\right)\)_, for some_ \(\varepsilon_{2}>0\) _small enough_
As illustrated in Fig. 4, the FTP on the left is located at the end of the obstacle rim, in accordance with the definition of an FTP which requires that \(P(\lambda)\notin\partial\text{OBST}\) for \(\lambda\in(1,1+\varepsilon)\).
In Fig. 5 the starting points \(P_{A1}\) and \(P_{A2}\) reside on the boundary of an obstacle. In this example \(P_{A1}\) has only two FTPs and \(P_{A2}\) has three FTPs. Note that the obstacle boundary tangents of \(P_{A1}\) are inside the obstacle; namely, there is no \(\varepsilon_{1}>0\) such that \(P_{A1}^{+}(\lambda)\in\text{FREE}\) for all \(\lambda\in(0,\varepsilon_{1})\) and \(P_{A1}^{-}(\lambda)\in\text{FREE}\) for all \(\lambda\in(-\varepsilon_{1},0)\). However, for \(P_{A2}\) the obstacle boundary tangents are outside the obstacle, which results in two additional FTPs, \(P_{T-}\) and \(P_{T+}\). From each FTP the aircraft will need to advance along the obstacle boundary in an iterative manner until \(P_{B}\) becomes reachable or another obstacle becomes induced by the descent.
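Direct reachability (Definition 3) can be checked numerically by sampling the straight segment from \(P_{A}\) to \(P\) against the local obstacle function; the sampling density below and the continuous `LO_func` callable are assumptions of this sketch.

```python
import numpy as np

def directly_reachable(P, P_A, Z_A, LO_func, n_samples=200):
    """Approximate check of Definition 3: P(lam) in FREE for all lam in [0, 1],
    sampled along the convex combination of Definition 2.  LO_func(X, Y, P_A, Z_A)
    is an assumed continuous version of the local obstacle function."""
    P, P_A = np.asarray(P, float), np.asarray(P_A, float)
    for lam in np.linspace(0.0, 1.0, n_samples):
        X, Y = (1.0 - lam) * P_A + lam * P
        if LO_func(X, Y, P_A, Z_A) < 0.0:    # sampled point lies inside an obstacle
            return False
    return True
```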
Fig. 4: FTP Example: An obstacle with four FTPs from \(P_{A}\)
Consider now an ALO trajectory from \((P_{A},Z_{A})\) to \(P_{B}\). If \(P_{B}\) is directly reachable from \(P_{A}\), then clearly, by Theorem 1, the ALO trajectory is simply a fixed heading glide to \(P_{B}\). Otherwise, if an obstacle stands in our way, the following holds.
**Theorem 2**.: _Suppose \(P_{B}\) is not directly reachable from \(P_{A}\) and that the set of FTP points from \(P_{A}\) is finite. Then any ALO trajectory from \((P_{A},Z_{A})\) to \(P_{B}\) must include a fixed-heading glide segment from \(P_{A}\) to an FTP._
Proof.: Denote the set of FTPs by \(\{P_{i}\}_{i=1}^{N}\), and let \(P_{i}(\lambda)\) be the convex combination of \(P_{i}\) to \(P_{A}\), as per Definition 2. Extend the line segment from \(P_{A}\) to \(P_{i}\) until it touches the next obstacle, and denote the additional segment by \(E_{i}\) (see Fig. 6). Formally, let
\[\lambda_{i}^{*}=\inf_{\lambda>1}\{\lambda:P_{i}(\lambda)\in\partial OBST\}\]
and
\[E_{i}=\{P_{i}(\lambda):1\leq\lambda\leq\lambda_{i}^{*}\}\]
Let us divide the entire free space FREE into two sets: The set F\({}_{1}\) of directly reachable points from \(P_{A}\) (Definition 3), and the set F\({}_{2}\) of _potentially reachable_ points, namely, those points in FREE which are not directly reachable from \(P_{A}\). Then, as illustrated in Fig. 6, the segments \(\{E_{i}\}_{i=1}^{N}\) serve as the boundary between the directly reachable set F\({}_{1}\) and the potentially reachable set F\({}_{2}\).
Evidently \(P_{A}\) is in the reachable set (from itself), while \(P_{B}\) is not by the Theorem assumption. Therefore, the ALO trajectory (like any feasible trajectory) must cross at least one of these segments \(E_{i}\) at some point \(e_{i}=P_{i}(\lambda_{i})\), where \(1\leq\lambda_{i}\leq\lambda_{i}^{*}\).
As \(e_{i}\) is directly reachable, the ALO trajectory to \(e_{i}\) is the fixed heading trajectory, which by definition of \(E_{i}\) must pass though \(P_{i}\).
As illustrated in Fig. 7, to reach \(P_{B}\), trajectories 1, 2 and 3 pass through points \(\{P_{i}(\lambda_{i})\}_{i=1}^{3}\) respectively. As they do not include the fixed heading trajectories from \(P_{A}\) to \(P_{i}\) they are sub-optimal.
Fig. 5: FTP Example: the starting point \(P_{A}\) is on the obstacle boundary
Fig. 6: The boundary between the directly reachable and potentially reachable sets
Based on Theorem 2, we can derive an iterative graph search algorithm to obtain the ALO trajectory. A concrete algorithm, based on the standard A\({}^{*}\) search scheme, is presented in Subsection III-D. Essentially, the algorithm starts at the initial point \((P_{A},Z_{A})\), _expands_ this point by constructing the local obstacle map from this position, and finding the respective FTPs which serve as the successor nodes in the search graph. The next node to be expanded is chosen by the key or ranking function of the A\({}^{*}\) algorithm, and the process continues iteratively until an optimal path to the target is found and verified.
The computational complexity of the outlined procedure clearly depends on the number of FTPs that need to be explored per obstacle. We proceed to show that this number can be reduced to two, even for non-convex obstacles.
Let us first consider the case in which the destination, \(P_{B}\), is outside the convex hull of an obstacle. In this case, it should be intuitively clear that we can explore only the two most extreme FTPs and not the entire FTP set, as illustrated in Fig. 8.
The latter observation indeed follows as a special case of Theorem 3 below. A less immediate case is when the target is within the convex hull of an obstacle, as illustrated in Fig. 9. Our statement requires the following definition.
**Definition 6** (Essential FTPs).: _Consider a point \(P_{A}\) and its FTP set \(\{P_{i}\}_{i=1}^{N}\) with respect to an obstacle \(\mathcal{O}\). Suppose that \(\{P_{i}\}_{i=1}^{N}\) is arranged in monotonically increasing order of the heading from \(P_{A}\) to \(P_{i}\)._
_Let \(C_{i}\), \(i=1...N\), be the area enclosed between \(P_{i}\), \(\partial\mathcal{O}\) and \(P_{i+1}\) (see Fig. 9). Identify \(P_{N+1}\) with \(P_{1}\). The essential FTPs of \(\mathcal{O}\) with respect to \(P_{A}\) are the pair \(\{P_{j},P_{j+1}\}\) such that \(P_{B}\in C_{j}\)._
Fig. 8: Illustration of a destination outside the obstacle convex hull
Fig. 7: Three candidate trajectories for the ALO paths illustrated on a local obstacle map
For example, in Fig. 9 the essential FTPs are \(\{P_{1},P_{2}\}\).
**Theorem 3**.: _Suppose \(P_{B}\) is not directly reachable from \(P_{A}\). Then any ALO trajectory from \((P_{A},Z_{A})\) to \(P_{B}\) must include one of the fixed heading glide segments from \(P_{A}\) to the two essential FTPs._
Proof.: Let us observe the essential FTPs \(\{P_{j},P_{j+1}\}\) as illustrated in Fig. 9. According to Theorem 2 the ALO trajectory must include a fixed heading segment from \(P_{A}\) to some FTP. Any trajectory from \(P_{A}\) through an FTP \(P_{k}\) that is not an essential FTP must cross the fixed heading segment from \(P_{A}\) to \(P_{j}\) or from \(P_{A}\) to \(P_{j+1}\) at some point \(Q\). The trajectory from \(P_{A}\) through \(P_{k}\) to \(Q\), unlike the trajectory from \(P_{A}\) directly to \(Q\), is not a fixed heading trajectory, and thus by Theorem 1 it is not ALO.
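For a convex obstacle, the essential pair of Definition 6 reduces to the two FTPs whose headings from \(P_{A}\) bracket the heading toward \(P_{B}\); the following simplified sketch implements only that special case and should be regarded as a heuristic for non-convex obstacles, where the areas \(C_{i}\) must be tested explicitly.

```python
import numpy as np

def essential_ftps_convex(P_A, P_B, ftps):
    """Pick the two FTPs whose headings from P_A bracket the heading toward P_B.
    Coincides with the essential pair of Definition 6 for convex obstacles only."""
    P_A = np.asarray(P_A, float)

    def heading(P):
        d = np.asarray(P, float) - P_A
        return np.arctan2(d[1], d[0]) % (2.0 * np.pi)

    theta_B = heading(P_B)
    ordered = sorted(ftps, key=heading)
    thetas = [heading(P) for P in ordered]
    for k in range(len(ordered)):
        lo, hi = thetas[k], thetas[(k + 1) % len(ordered)]
        in_sector = lo <= theta_B < hi if lo <= hi else (theta_B >= lo or theta_B < hi)
        if in_sector:
            return ordered[k], ordered[(k + 1) % len(ordered)]
    return ordered[0], ordered[-1]
```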
### _From Continuous to Discrete Terrain Map_
In this subsection, we assume that the terrain map is given as a DTM over a discrete grid; hence the obstacle set is obtained at each stage over that grid. We propose two alternative approaches to interpolate the discrete obstacle set to the continuous domain. Each alternative defines what constitutes a feasible ("safe") path. The DTM is represented as an \(M\) by \(N\) matrix with discrete samples of the elevation data at a resolution of \(\Delta X\) and \(\Delta Y\) m. To obtain the discrete local obstacle map, we sample the local obstacles function, _LO_, for all \(1\leq m\leq M,1\leq n\leq N\) as the matrix:
\[\textit{LO}[m,n]=\textit{LO}(X_{I0}+(m-1)\Delta X,Y_{I0}+(n-1)\Delta Y;P_{A}, Z_{A}) \tag{14}\]
where \(\textit{LO}(X,Y;P_{A},Z_{A})\) is given by Eq. (13), and \((X_{I0},Y_{I0})\) is the point of the elevation sample \(\textit{DTM}[1,1]\). Due to the discretization the boundary of the obstacles \(\textit{LO}(X,Y;P_{A},Z_{A})=0\) is not generally sampled; therefore, we provide the following two interpolation schemes.
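Before turning to the two interpolation schemes, note that Eq. (14) amounts to a straightforward grid evaluation. The sketch below illustrates it; `lo_continuous` stands in for an implementation of Eq. (13) (not reproduced here), and the function and parameter names are assumptions made for illustration.

```python
import numpy as np

def sample_local_obstacle_map(lo_continuous, M, N, x_i0, y_i0, dx, dy, p_a, z_a):
    """Discretize the local obstacle function LO of Eq. (13) on the DTM grid, per Eq. (14).

    lo_continuous(x, y, p_a, z_a) -> float is assumed to implement Eq. (13).
    Returns an M-by-N array LO[m, n].
    """
    lo = np.empty((M, N))
    for m in range(M):
        for n in range(N):
            x = x_i0 + m * dx          # (m-1)*dx in the paper's 1-based indexing
            y = y_i0 + n * dy
            lo[m, n] = lo_continuous(x, y, p_a, z_a)
    return lo
```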
_Approach 1. Linear approximation over a triangulation:_ in this approach, we form a standard triangulation by dividing each square into two triangles as illustrated in Fig. 10. The obstacle boundaries are then identified using linear interpolation over each square. This will give polygonal obstacles, whose vertices lie on the sides of the triangles (at most two per triangle). Since only vertices can be FTPs, we can construct a finite algorithm that implements the exact (continuous) one by employing Theorem 2 and Theorem 3.
Fig. 9: Illustration of essential FTPs: of the four FTPs, \(P_{1}\) and \(P_{2}\) are the essential pair
_Approach 2. Safe squares:_ This approach is somewhat more conservative. Define a safe square as one for which all its vertices maintain \(\textit{LO}[m,n]\geq 0\), as illustrated in Fig. 11. A square is _unsafe_ if at least one of its vertices has \(\textit{LO}[m,n]<0\). The obstacles set is defined as the union of the unsafe squares, while FREE is the complement of the obstacles set. We include in FREE its boundary. A feasible (safe) path must be in FREE.
In this approach the formed obstacles (the complement of FREE) are polygonal with vertices on the grid points; thus, the FTPs are also on grid points only.
An FTP, by its definition, must be directly reachable from the current position. Evaluating direct reachability with Approach 2 is simpler, as the obstacles consist of entire squares only; therefore, a safe path must not intersect squares marked as obstacles. As this approach allows for a simpler implementation, we henceforth focus on it in the description of our algorithm.
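Under the safe-squares approach, the obstacle set can be represented as a Boolean mask over grid squares: a square is unsafe if any of its four corner values of LO is negative. The following sketch (names are illustrative assumptions) builds such a mask from the sampled LO matrix of Eq. (14).

```python
import numpy as np

def safe_squares(lo):
    """Mark each grid square as safe (True) or unsafe/obstacle (False).

    lo: M-by-N array of LO samples at grid vertices (Eq. (14)).
    Returns an (M-1)-by-(N-1) Boolean array over grid squares; a square is
    safe only if all four of its corner vertices satisfy LO >= 0.
    """
    corners_ok = lo >= 0
    return (corners_ok[:-1, :-1] & corners_ok[1:, :-1] &
            corners_ok[:-1, 1:] & corners_ok[1:, 1:])
```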
Having defined the local obstacle map as polygonal sets in continuous space we can apply the results of Subsection III-B to obtain a finite algorithm to find the FTPs at each stage of the overall algorithm.
Fig. 11: Local obstacle map for the safe squares approach
Fig. 10: Local obstacle map for the linear approximation approach
### _The ALO Trajectory Planning Algorithm_
In this subsection, we first define the graph on which a trajectory search algorithm can find the ALO trajectory. Then, we choose the A\({}^{*}\) shortest path algorithm to obtain the ALO trajectory due to its efficiency properties. Finally, we present the algorithm pseudocode.
Let us create the graph, \(G=(V_{r},E)\), with vertices, \(V_{r}\), and edges, \(E\). The graph is created by employing an iterative algorithm. Each vertex \(V_{i}=(P_{i},d_{i})\in V_{r}\) consists of the location point \(P_{i}\) and the altitude loss, \(d_{i}\), on a trajectory to \(V_{i}\). The algorithm initializes the graph and starts exploration from the engine cutoff vertex \(V_{A}=(P_{A},0)\) and continues through intermediate FTP vertices. At each explored node \(V_{i}=(P_{i},d_{i})\) we add the vertex \(V_{j}=(P_{B},d_{j})\) if \(P_{B}\) is directly reachable from \(V_{i}\); otherwise we add \(\{V_{j}=(P_{j},d_{j})\}\), the set of FTPs from \(V_{i}\), to the graph vertex set, \(V_{r}\). We also add all of the edges \(\{(V_{i},V_{j})\}\) to the edge set \(E\). In case \(V_{j}\) is an FTP we add it to the set of nodes to be explored in the next iterations. The algorithm continues until the set of nodes to be explored is empty.
Now let us show that the ALO trajectory can be obtained from the graph, \(G=(V_{r},E)\).
**Theorem 4**.: _The ALO trajectory from position \((P_{A},Z_{A})\) to point \(P_{B}\) can be obtained from the graph \(G=(V_{r},E)\)._
Proof.: Let us prove by induction the existence in \(G\) of the ALO trajectory vertices and edges. The first ALO trajectory vertex, \(V_{i=1}\), is \(V_{A}=(P_{A},0)\); it exists in the graph since the graph-building algorithm initializes \(G\) with \(V_{A}\). At the first iteration, the vertex \(V_{i=1}=V_{A}\) is explored. In case \(P_{B}\) is directly reachable from \(V_{i=1}\), then \(V_{i=2}\) is \((P_{B},d_{2})\); thus, the trajectory from \(P_{A}\) directly to \(P_{B}\) exists in the graph \(G\), and it is ALO via Theorem 1. In case \(P_{B}\) is not directly reachable from \(V_{i=1}\), then the adjacent vertex set, \(\{V_{j}\}\), is the set of FTPs from \(V_{A}\). Via Theorem 2 at least one edge from \(P_{A}\) to an FTP is part of the ALO trajectory; thus, in both cases, the second vertex, \(V_{i=2}\), and the edge \((V_{i=1},V_{i=2})\) of the ALO trajectory exist in graph \(G\).
Assuming vertex number \(i\) of the ALO trajectory, \(V_{i}\), exists in \(G\), let us show that the vertex \(V_{i+1}\) and edge \((V_{i},V_{i+1})\) of the ALO trajectory exist in \(G\). In case \(P_{B}\) is directly reachable from \(V_{i}=(P_{i},d_{i})\), then the vertex \(V_{j}=(P_{B},d_{j})\) and edge \((V_{i},V_{j})\) exist in \(G\); thus, the trajectory from \(P_{i}\) directly to \(P_{B}\) exists in the graph \(G\) (which is ALO via Theorem 1). In case \(P_{B}\) is not directly reachable from \((P_{i},d_{i})\), then the adjacent vertices of \(V_{i}\), \(\{V_{j}=(P_{j},d_{j})\}\), are the FTPs from \(V_{i}\). Via Theorem 2 at least one edge from \((P_{i},d_{i})\) to an FTP \(V_{j}\) is part of the ALO trajectory; thus, vertex \(V_{i+1}=V_{j}\) and edge \((V_{i},V_{i+1})\) of the ALO trajectory also exist in graph \(G\). Therefore, the ALO trajectory can be obtained from graph \(G\).
Next, we present an outline of the chosen path planning algorithm in pseudocode format. The algorithm returns an optimal (ALO) path from the initial position and altitude \((P_{A},Z_{A})\) to the candidate landing site \(P_{B}\), or reports a failure if no such obstacle-avoiding path exists. The standard A\({}^{*}\) algorithm [29, 30] is used to guide the graph construction and search. The heuristic function \(h(V_{i})\), for some vertex \(V_{i}=(P_{i},d_{i})\), is naturally taken as the minimal straight-glide altitude loss from position \((P_{i},Z_{A}+d_{i})\) to the target \(P_{B}\). This heuristic is admissible (optimistic), which guarantees the optimality of A\({}^{*}\), and also consistent (monotone), which entails certain efficiency properties. A heuristic function is said to be admissible if it never overestimates the cost of reaching the goal, i.e., the cost it estimates to reach the goal is not higher than the lowest possible cost from the current point in the path.
The heuristic function employed in Algorithm 1, \(h(V_{i})\), for a vertex \(V_{i}=(P_{i},d_{i})\), is the minimal altitude loss in free space from point \(P_{i}\) to the landing site \(P_{B}\). This heuristic is a consistent underestimate of the minimal altitude loss: by Theorem 1, a fixed heading trajectory is ALO in free space, whereas the ALO trajectory in the presence of ground obstacles may be unable to maintain a fixed heading and thus incurs an altitude loss that is at least as great. Thus, the proposed A\({}^{*}\) variant yields the optimal trajectory on the graph whose nodes are the engine cutoff location \((P_{A},0)\), the landing site \((P_{B},d_{j})\), and the FTPs.
The proposed ALO algorithm is presented in Algorithm 1. The algorithm returns either the ALO path from \(P_{A}\) to \(P_{B}\), or _failure_ if none exists. Algorithm 1 is essentially the standard A\({}^{*}\) algorithm, following [30], with the straight-path optimal altitude loss serving as the link costs. Algorithm 2 outlines the relevant application-specific functions. With standard on-board computing capability, the sparsity of our variant yields calculation cycles of less than three seconds. The engineering application phase of this work will include Monte Carlo studies addressing the uncertainty issue; the formulation a priori assures non-divergence by penalizing reachability as a function of uncertainty levels. The pseudocodes are presented below:
```
1: function ALO-Trajectory-Search(\(P_{A},Z_{A},P_{B},W,\textit{DTM}\))
2:   \(V_{A}\leftarrow(P_{A},0)\)
3:   \(g(V_{A})\gets 0\); \(h(V_{A})\leftarrow\) Alt-Loss(\(P_{A},P_{B}\))
4:   \(f(V_{A})\gets g(V_{A})+h(V_{A})\); parent(\(V_{A}\)) \(\leftarrow\) nil
5:   OPEN \(\leftarrow\) a list ordered by \(f(\cdot)\), with \(V_{A}=(P_{A},0)\) as the initial element
6:   CLOSED \(\leftarrow\emptyset\) (an empty list)
7:   loop
8:     if OPEN \(=\emptyset\) then return failure
9:     \(V_{i}=(P_{i},d_{i})\leftarrow\) the element of OPEN with the smallest value of \(f(V_{i})\)
10:    Add \(V_{i}\) to CLOSED and remove \(V_{i}\) from OPEN
11:    if \(Z_{A}+f(V_{i})>-\textit{DTM}(P_{B})\) then return failure
12:    if \(P_{i}=P_{B}\) then return (\(g(V_{i})\), Optimal-Path(\(V_{A},V_{i}\)))
13:    Successors \(\leftarrow\) Expand(\(P_{i},d_{i}+Z_{A}\)) (find the adjacent vertices of \(V_{i}\))
14:    for every \(P_{j}\in\) Successors do
15:      \(g_{j}\gets g(V_{i})+\) Alt-Loss(\(P_{i},P_{j}\)) (note that \(g(V_{i})=d_{i}\))
16:      \(V_{j}\leftarrow(P_{j},g_{j})\)
17:      if \(V_{j}\notin\) CLOSED then
18:        \(h(V_{j})\leftarrow\) Alt-Loss(\(P_{j},P_{B}\))
19:        if \(V_{j}\in\) OPEN and \(g_{j}<g(V_{j})\) then
20:          \(g(V_{j})\gets g_{j}\); \(f(V_{j})\gets g(V_{j})+h(V_{j})\); parent(\(V_{j}\)) \(\gets V_{i}\)
21:        if \(V_{j}\notin\) OPEN then add \(V_{j}\) to OPEN and
22:          \(g(V_{j})\gets g_{j}\); \(f(V_{j})\gets g(V_{j})+h(V_{j})\); parent(\(V_{j}\)) \(\gets V_{i}\)
23:
24: function Optimal-Path(\(V_{A},V_{B}\))
25:   \(\textit{path}\leftarrow[V_{B}]\); \(V_{i}\gets V_{B}\)
26:   while \(V_{i}\neq V_{A}\) do
27:     \(V_{i}\leftarrow\textit{parent}(V_{i})\)
28:     \(\textit{path}\leftarrow[V_{i},\textit{path}]\)
29:   return path
```
**Algorithm 1** Altitude-Loss-Optimal Trajectory Search with A\({}^{*}\)
The inputs to the algorithm in line 1 are the initial position \((P_{A},Z_{A})\), the target (landing site) point \(P_{B}\), the wind components, \(W=(W_{X},W_{Y})\), and the digital terrain map, DTM. For simplicity, we identify nodes in the search graph with their location point \(P\) and altitude loss \(d\). The basic quantities assigned to a node \(V_{i}=(P_{i},d_{i})\) are \(g(V_{i})\), the altitude loss of the best path discovered so far from \(P_{A}\) to \(V_{i}\); the heuristic function \(h(V_{i})\), an under-estimate of the altitude loss from \(V_{i}\) to \(P_{B}\); and their sum \(f(V_{i})\), which serves as an estimate for the total altitude loss of a path that goes through \(P_{i}\). Also, \(\textit{parent}(V_{i})\) identifies the predecessor to \(V_{i}\) for tracing the optimal path. A CLOSED list contains nodes whose minimal altitude loss has been determined, and an OPEN list contains _frontier_ nodes that are waiting to be explored. The OPEN list is typically implemented as a priority queue, with key \(f\). Lines 3-4 initialize the search graph with the initial node \((P_{A},0)\), and lines 5-6 initialize the OPEN and CLOSED sets. The body of the algorithm is a loop that terminates with an optimal path or a failure. In line 8, _failure_ is declared if there are no more nodes in OPEN to explore (while \(P_{B}\) has not been reached before). In lines 9-10, the next node to be expanded is chosen as the one in OPEN with minimal key-value \(f(V_{i})\), and moved to CLOSED. Line 11 (the only non-standard addition) checks whether the current best under-estimate \(f(V_{i})\) still allows reaching the target above its ground level; if not, it returns _failure_. This additional check ensures that the search does not continue in vain even if OPEN is not empty. Line 12 terminates the algorithm with success if a feasible path to \(P_{B}\) with minimal altitude loss has been determined. It then returns the minimal altitude loss \(g((P_{B},d))\), and the optimal path, which is traced back via the Optimal-Path function in lines 24-29. Next, line 13 finds the successors to node \(V_{i}\) via the function Expand, which receives both the node projection onto the 2D horizontal plane and the altitude. Finally, lines 14-23 update, for each successor \(V_{j}\) which is not already in CLOSED, the altitude loss \(g(V_{j})\) of the best path
found so far to \(V_{j}\), and its parent node in that path.
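For readers who prefer executable code, the following Python sketch mirrors Algorithm 1 using a binary heap for the OPEN list. The helper functions `alt_loss`, `expand`, and `dtm_elevation` are placeholders for Alt-Loss, Expand, and the DTM lookup of Algorithm 2, and an altitude-up sign convention is assumed; their names and signatures are assumptions for illustration rather than part of the published algorithm.

```python
import heapq

def alo_trajectory_search(p_a, z_a, p_b, alt_loss, expand, dtm_elevation):
    """A*-style search for the altitude-loss-optimal path (sketch of Algorithm 1).

    alt_loss(p1, p2): optimal straight-glide altitude loss from p1 to p2 (ALO manifold).
    expand(p, z): essential successors of p at altitude z (P_B if directly reachable,
                  otherwise the essential FTPs).
    dtm_elevation(p): terrain elevation at p (positive up).
    Points are assumed to be hashable, comparable tuples.
    Returns (altitude_loss, path) or None on failure.
    """
    g = {p_a: 0.0}
    parent = {p_a: None}
    open_heap = [(alt_loss(p_a, p_b), p_a)]           # ordered by f = g + h
    closed = set()

    while open_heap:
        f_i, p_i = heapq.heappop(open_heap)
        if p_i in closed:
            continue                                    # stale heap entry
        closed.add(p_i)
        if z_a - f_i < dtm_elevation(p_b):              # best case arrives below ground: fail
            return None
        if p_i == p_b:                                   # reconstruct the optimal path
            path, q = [], p_i
            while q is not None:
                path.append(q)
                q = parent[q]
            return g[p_b], list(reversed(path))
        for p_j in expand(p_i, z_a - g[p_i]):            # current altitude at p_i
            if p_j in closed:
                continue
            g_j = g[p_i] + alt_loss(p_i, p_j)
            if p_j not in g or g_j < g[p_j]:
                g[p_j] = g_j
                parent[p_j] = p_i
                heapq.heappush(open_heap, (g_j + alt_loss(p_j, p_b), p_j))
    return None
```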
Some explanations for the functions in Algorithm 2 are interleaved as comments. As noted, pre-calculation of the ALO manifold is possible and useful for computational efficiency. The two functions in lines 38 and 40 of Expand are not explicitly specified. Directly-Reachable simply checks if the straight-line path from \((P,d)\) to \(P_{B}\) lies in the FREE part of the obstacles map _LOMap_, and is a standard procedure in computational geometry. The Find-Extreme-FTPs function relies on the results of Subsection III-B and the characterizations in Definitions 5 and 6, and can be implemented by directly following these definitions. An efficient implementation however requires more advanced methods from computational geometry, and is outside the scope of the present paper. Some related algorithms may be found in [31].
```
1: function ALO-Manifold(\(\Delta P\))
2:   \(\triangleright\) The ALO manifold function \(M(x,y)\) is specified in Eq. (11)
3:   \(\triangleright\) For efficiency \(M(\cdot)\) is pre-calculated on a suitably dense grid
4:   \(\Delta Z\gets M(\Delta P)\)
5:   return \(\Delta Z\)
6: function Alt-Loss(\(P_{1},P_{2}\))
7:   \(\triangleright\) Altitude loss from \(P_{1}\) to \(P_{2}\) for optimal straight-glide with wind
8:   \(\Delta Z\leftarrow\) ALO-Manifold(\(P_{2}-P_{1}\))
9:   return \(\Delta Z\)
10: function Expand(\(P\), \(Z_{P}\))
11:   \(\triangleright\) Obtain the essential successors of node \(P\)
12:   LOMap \(\leftarrow\) Calculate-Local-Obstacle-Map(\(P\), \(Z_{P}\))
13:   if Directly-Reachable(\(P,P_{B},\textit{LOMap}\)) then
14:     Successors \(\leftarrow\{P_{B}\}\)
15:   else Successors \(\leftarrow\) Find-Extreme-FTPs(\(P,\textit{LOMap}\))
16:   return Successors
17: function Calculate-Local-Obstacle-Map(\(P\), \(Z_{P}\))
18:   \(\triangleright\) Calculate the local obstacle map as seen from \(P\), for the given DTM
19:   \(\triangleright\) We apply the safe squares approach, as per Section III-C
20:   Compute LO on grid points centered at \(P\), Eqs. (13), (14)
21:   LOMap \(\leftarrow\) Mark all map squares as OBST or FREE
22:   return LOMap
```
**Algorithm 2** Functions for Algorithm 1
Now we proceed to show that the proposed algorithm obtains the ALO trajectory from graph \(G\).
**Theorem 5**.: _Algorithm 1 obtains the ALO trajectory from position \((P_{A},Z_{A})\) to \(P_{B}\)._
Proof.: The edges of every explored node \(V_{i}=(P_{i},d_{i})\) to an adjacent node \(V_{j}=(P_{j},d_{j})\) are calculated by employing the Expand method to obtain \(P_{j}\) and calculating \(d_{j}\) by adding the minimal altitude loss between \(P_{i}\) and \(P_{j}\) to \(d_{i}\). In case \(P_{B}\) is directly reachable from \(V_{i}\), then \(P_{j}=P_{B}\); otherwise \(P_{j}\) is an FTP. Therefore, the explored edges and vertices are those of graph \(G=(V_{r},E)\), i.e., Algorithm 1 (the A\({}^{*}\) algorithm) operates on graph \(G\).
The heuristic function, \(h(V_{i})\), employed in Algorithm 1 is the minimal altitude loss in free space from \(V_{i}=(P_{i},d_{i})\) to \(P_{B}\). Via Theorem 1, this function yields the minimal cost to \(P_{B}\) in free space and therefore serves as a lower bound in case \(P_{B}\) is not directly reachable due to obstacles. Therefore, \(h(V_{i})\) is a consistent underestimate of the minimal altitude loss from \(V_{i}\) to \(P_{B}\).
Now, via [30, Result 4 pp. 78] we have that A\({}^{*}\) finds the optimal trajectory in graph \(G\), given \(h(V_{i})\) is an admissible consistent heuristic. As via Theorem 4 the ALO trajectory exists in graph \(G\), the obtained trajectory is the global ALO trajectory from \(V_{A}=(P_{A},0)\) to \(P_{B}\).
To emphasize the inherent global optimality of our optimization formulation: (a) the mathematical model of the dynamics and aerodynamics of our system, incorporating the wind effect, results in cone-like manifolds that are periodically updated on the fly; (b) the contours obtained by intersecting these envelopes with the terrain constitute inputs to the algorithm, which on-line finds the tangent grid nodes; (c) over these nodes and the candidate landing strips, our accelerated algorithm propagates the trajectory towards these destinations. Now, we proceed to show that the search graph used in Algorithm 1 is finite. First, note that our algorithm extracts obstacles as 2D polygons; consequently, we can show that the number of FTPs derived from a local obstacle map is finite.
**Proposition 1**.: _Given a polygonal local obstacle vertex set \(\{(x_{i},y_{i})\}_{i=1}^{N}\), the FTPs, \(\{P_{j}\}_{j=1}^{K}\), of the local obstacles form a subset of the polygonal vertex set; namely, \(\{P_{j}\}_{j=1}^{K}\subset\{(x_{i},y_{i})\}_{i=1}^{N}\)._
Proof.: According to the FTP Definition 5 (b) we have \(\{P_{j}\}_{j=1}^{K}\in\partial\)OBST. Therefore, each FTP is either (i) a point of the set \(\{(x_{i},y_{i})\}_{i=1}^{N}\), or (ii) a point of \(\partial\)OBST not in \(\{(x_{i},y_{i})\}_{i=1}^{N}\). In case (ii), either condition (a) or condition (c) of Definition 5 does not hold (see Fig. 12). Therefore, the FTPs can only be of case (i), i.e., all FTPs are from the set \(\{(x_{i},y_{i})\}_{i=1}^{N}\).
**Proposition 2**.: _Given an \(M\) by \(N\) discrete elevation map, the vertex set of the graph in Algorithm 1 is finite._
We use the safe squares representation of the local obstacle mapping (Section III-C above) as the means for substantiating the following proof.
Proof of Proposition 2.: In the safe squares approach the local obstacles are polygons whose vertices form a subset of the grid nodes of an \(M\) by \(N\) discrete elevation map. Thus, via Proposition 1 the FTPs are also a subset of the \(M\) by \(N\) discrete elevation map grid. Consequently, the set of horizontal nodes is finite and bounded by \(M\cdot N\) nodes. Each time the algorithm traverses from one horizontal node to a subsequent one it loses at least the minimal altitude along a single grid cell. Thus, if the algorithm revisits the same horizontal point of the \(M\times N\) grid, it will reach ground level after sufficiently many visits. Therefore, the graph spanned by the FTPs of the local obstacles, the engine cutoff location, and the landing site location is finite.
Since the search graph is finite, Algorithm 1 will converge, via Theorem 5, to the ALO trajectory from position \((P_{A},Z_{A})\) to \(P_{B}\).
A completed search graph is illustrated in Fig. 13. The initial node is the engine cutoff point \(P_{A}\), the target node is the intended landing site \(P_{B}\), and the remaining nodes are FTPs that were computed iteratively as part of the algorithm. The link cost (or weight) is the optimal straight-glide altitude loss between the node positions, effectively given by the ALO manifold computed above. Each node position \(V_{i}=(P_{i},d_{i})\) is adjoined as part of the algorithm with a cost \(g(V_{i})\) which holds the altitude loss from \(P_{A}\) to that point, the heuristic \(h(V_{i})\) which is
Fig. 12: Illustration of a polygonal local obstacle. The points \(T_{1}\) and \(T_{2}\) represent contradictions to Definition 5 (c) and Definition 5 (a), respectively
an under-estimate of the optimal altitude loss from \(V_{i}\) to \(P_{B}\), and the sum \(f(V_{i})=g(V_{i})+h(V_{i})\) which is used to choose the next node to be explored.
Note that in this subsection we have discussed the implementation of the algorithm starting from the engine cutoff point. Inside the aircraft, the algorithm will be invoked repeatedly every 300 ft of altitude loss and thus will consider changes in the environment.
## IV Accounting for the Effect of Turns
In this section we augment our approach to account for the effect of turns. To this end, we simply add an estimate of the altitude loss associated with each change of heading, without modifying the geometry of the search graph construction. While the effect of a few turns may not be significant in a glide of several miles or more, this addition does provide better estimates of the total altitude loss of the selected trajectory.
We assume that, in practice, the pilot will perform a simple turn maneuver rather than an exact ALO maneuver which might be hard to carry out. In particular, we restrict our attention to fixed bank-angle maneuvers. In Appendix B we derive an analytical expression for the altitude-loss associated with fixed bank-angle maneuvers. This results in a circular arc in wind coordinates. We also show that the ALO turn in still air is performed at the stall velocity limit, \(V_{stall}(n(\phi))\), which results in the minimal turn radius. The resulting minimal altitude loss, \(\Delta Z^{*}\), is given by
\[\Delta Z^{*}=\frac{2K_{SR}}{g}\left(\frac{V_{stall}(1)^{4}+V_{0}^{4}}{\sin(2 \phi)}\right)|\Delta\psi|+\frac{1}{2g}\left(V_{g,1}^{2}-V_{g,0}^{2}\right) \tag{15}\]
where \(\Delta\psi\) is the heading change in the Airmass coordinates, \(\phi\) the selected bank angle, and \(V_{g,0}\) and \(V_{g,1}\) are the ground velocities before and after the turn maneuver (to be taken as the ALO glide velocities at the respective straight-glide segments). The constants \(V_{stall}\), \(V_{0}\) and \(K_{SR}\) are specified in Appendix A. We note that the second term in (15) follows from the change in energy due to velocity change before and after the turn.
It may be seen from the above equation that minimal altitude loss is obtained for bank angle \(\phi=\pi/4\). This value is to be selected, unless this angle is constrained to a smaller value.
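A direct numeric evaluation of Eq. (15) can be folded into the node-expansion cost whenever a heading change occurs. The sketch below (variable names are assumptions for illustration) computes the turn penalty for a given heading change, defaulting to the minimizing bank angle \(\phi=\pi/4\) unless a smaller limit is imposed.

```python
import math

def turn_altitude_loss(delta_psi, v_stall_1, v_0, k_sr, v_g0, v_g1,
                       g=9.81, max_bank=math.pi / 4):
    """Estimated altitude loss for a fixed bank-angle turn, per Eq. (15).

    delta_psi: heading change in airmass coordinates [rad]
    v_stall_1: stall speed at load factor 1 [m/s]; v_0: best-glide speed in still air [m/s]
    k_sr: sink-rate constant rho*S*C_D0/(2*m*g)
    v_g0, v_g1: ground speeds before and after the turn [m/s]
    max_bank: bank-angle limit; the unconstrained optimum is pi/4.
    """
    phi = min(math.pi / 4, max_bank)                    # phi = pi/4 minimizes the arc loss
    arc_loss = (2.0 * k_sr / g) * ((v_stall_1**4 + v_0**4) / math.sin(2.0 * phi)) * abs(delta_psi)
    energy_loss = (v_g1**2 - v_g0**2) / (2.0 * g)       # speed change before/after the turn
    return arc_loss + energy_loss
```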
The above estimate can be used in two ways. First, for a given glide trajectory composed of straight segments, the additional altitude loss can be computed consecutively for each heading change, and, as needed, it may be checked whether the modified altitude along the trajectory satisfies the ground clearance requirements. Alternatively, altitude corrections due to heading changes may be incorporated in the search procedure in a straightforward manner by applying them to each newly-explored node, so that they are taken into account during trajectory selection and optimization.
In Appendix C we compare the altitude-loss of the proposed turn segment to the optimal one computed by the GPOPS optimization package [38]. A related work concerning altitude loss during turns can be found in [34].
Fig. 13: Illustration of the graph produced by the proposed algorithm. The obstacle shown is the local obstacle from \(P_{A}\). Obstacles seen from other vertices are not depicted
## V Reachability Scenarios
Employing our generalized approach, we compute optimal maximum-range trajectories from the engine cutoff location A as a function of wind and the initial flight velocity and heading. In the following scenarios, we assume that the aircraft, a Cessna 172S, weighs about 907 kg (no fuel - faulty fuel gauge) [32].
_First scenario:_ In the first demonstration, the aircraft has experienced engine cutoff at the horizontal position \(P_{A}\) in Figure 14, altitude 2500 m. The onboard emergency trajectory planning algorithm analyzes online the attainable landing site candidates. The algorithm employs the aircraft wind estimation capability, yielding a 20 m/sec wind, heading North. The ALO manifold - the surface surrounding \(P_{A}\), depicted in Figure 14, is then displayed on the LCD screen of the aircraft "Glass Cockpit" (GC), which presents the relevant obstacles. Also, the two landing sites are situated behind terrain obstacles inside the curve at the intersection of the ALO manifold and the terrain. The algorithm calculates the optimal trajectories to both landing sites. It turns out that the paved runway \(B[2]\) at position \(P_{B}[2]\) is unattainable as it is outside the ALO manifold; thus, the pilot has no choice but to aim at landing site \(B[1]\) which is a barren field. The algorithm yields the optimal trajectory from position \(P_{A}\) to landing site \(B[1]\), the solid red curve in Fig. 14, and generates flight instructions aiding the pilot to follow the optimal trajectory to this second-best landing site.
In case the wind estimator yields a wind heading of -20\({}^{\circ}\), the better landing site, the paved runway \(B[2]\), can be reached. Again, the algorithm yields flight instructions, guiding the pilot to the optimal landing site.
_Second scenario:_ The aircraft experiences an engine malfunction at position \(P_{A}\), as illustrated in Figure 15. The onboard emergency trajectory planning algorithm displays the altitude-loss manifold, following online wind estimation of a 15 m/sec wind heading West. The resulting manifold is then displayed on the GC screen - the solid surface in Figure 15. In this scenario, the paved runway \(B_{2}\) in position \(P_{B}[2]\) is attainable. The algorithm calculates the optimal trajectory from position \(P_{A}\) to landing site \(B_{2}\) and generates flight instructions to aid the pilot. After descent of about 10 km, at position \(P_{A}^{\prime}\), the wind estimator detects that the wind magnitude has diminished to about 2 m/sec. The algorithm detects that the mountain ahead is now an obstacle that must be avoided. The onboard algorithm calculates the new optimal trajectory to \(B_{2}\) that circumvents the obstacle, yielding the optimal velocity and channeling flight instructions directly to the GC, guiding the pilot safely to this landing site. Our online algorithm re-directs the aircraft to circumvent the obstacle on the solid red trajectory towards \(B[2]\).
## VI Conclusion
In this work, we have derived the theoretical basis and the algorithmic framework for calculating altitude-loss-optimal descent paths towards a candidate landing location in case of engine power loss. The algorithm takes into account the effects of intense in-plane and crosswinds while avoiding ground-induced obstacles. First, our algorithm iteratively obtains intersections of altitude-loss-optimal manifolds, drawn from instantaneous aircraft locations with terrain elevation mapping; from these on-line-generated contours we construct sparse grids for OGS algorithmics. The algorithm relies on a novel iterative visibility graph framework, which effectively turns the 3D problem into a sequence of 2D ones. I.e., we show that our algorithm needs to consider just two points on each intersection contour to optimally bypass any obstacle of whatever shape. This serves to reduce the computational load, to allow for real-time calculation in case of emergency. We have proven that our algorithm is globally optimal in terms of altitude loss, subject to combined effects of winds and terrain-induced obstacles. We further include the effect of turns to assure safe near-optimal glide trajectories.
We apply our algorithm in realistic scenarios, using a Cessna 172 model, and demonstrate both the altitude-loss-optimal trajectory calculation and airstrip reachability determination. Furthermore, an initial validation flight test was conducted, as described in Appendix D.
Note that our modeling assumes altitude-independent wind velocity and air density. Relaxing the constant wind assumption invokes two challenges: (a) the ALO trajectory in free space is no longer a fixed-heading trajectory, (b) the local obstacle map may change in a way that it is not possible to reduce the 3D problem into local 2D problems. One may resort to solving numerically, using a piecewise-linear approximation of wind as function of altitude. Further research may relax the constant air density assumption as well.
## Acknowledgement
We thank Mr. Yuval Dvir, a fully-certified flight test pilot, for piloting our validation flight testing, and for his valuable insights. This research was supported by the Israel MoD Grant number 4441016309.
## Appendix A The Aerodynamic Model
We start with modeling, the engine cutoff problem, as in [1]:
\[\dot{X} =V\cos(\gamma)\cos(\psi)+W_{X}\] (A.1) \[\dot{Y} =V\cos(\gamma)\sin(\psi)+W_{Y}\] (A.2) \[\dot{Z} =-V\sin(\gamma)\] (A.3) \[\dot{V} =-g\cdot\left(\frac{D}{mg}+\sin(\gamma)\right)\] (A.4) \[\dot{\gamma} =\frac{g}{V}\cdot\left(\frac{L\cos(\phi)}{mg}-\cos(\gamma)\right)\] (A.5) \[\dot{\psi} =\frac{L\sin(\phi)}{mV\cos(\gamma)}\] (A.6)
In Equations (A.1)-(A.6) the aircraft is modeled as a point mass in a Ground frame of reference. The variables X,Y,Z are the North, East, Down location components of the point mass in the Ground frame. The variables \(\psi\), \(\gamma\), and \(V\) are the heading, vertical angle and magnitude of the aircraft velocity vector relative to the air-mass.
The lift and drag forces are specified by the standard expressions:
\[L=qSC_{L},\qquad D=qS\left(C_{D0}+KC_{L}^{2}\right)\]
where \(q=\frac{1}{2}\rho V^{2}\). Recall that \(C_{L}\) depends on the Angle of Attack.
As in [1], we employ a reduced-order model where the fast variables are the true air velocity, \(V\), the FPA and the bank-angle, \(\phi\), while the slow variables are \(X\), \(Y\), \(Z\) and \(\psi\). The control variables in this model are the pair \((V(t),\phi(t))\).
Substituting \(\dot{\gamma}\cong 0\) into Equation (A.5) yields the load factor \(n=\frac{L}{mg}\) as a function of \(\phi\) and \(\gamma\):2
Footnote 2: Equations (A.7), (A.8), (A.9), (A.10) are analogous to [35, Equations (9.67), (8.18), (9.73)] and [36, Equation (3.6)]
\[n(\phi,\gamma)=\frac{\cos(\gamma)}{\cos(\phi)}\] (A.7)
Thus, \(n\cong n(\phi)=\frac{1}{\cos(\phi)}\) (e.g., [36, Equation (4.21)]).
The stall limit of the aircraft is obtained by employing the load factor definition \(n=\frac{L}{mg}\) and the lift force equation at \(C_{L}=C_{Lmax}\):
\[V_{stall}(n(\phi))=\sqrt{\frac{2mg}{\rho SC_{Lmax}}n(\phi)}\] (A.8)
Combined with an upper limit \(V_{max}\) on the flight velocity, we have \(V_{stall}(n(\phi))\leq V\leq V_{max}\).
The cost function in our problem is the altitude loss, which is the integral of the sink-rate. Starting from Equation (A.3), the sink rate can be expressed as a function of the control variables, \((V,\phi)\), using the parabolic drag approximation above and employing (A.4) subject to \(\dot{V}\cong 0\), yielding
\[\dot{Z}=\frac{\rho SC_{D0}}{2mg}V^{3}+\frac{2Kmg}{\rho S}\frac{n^{2}}{V}\] (A.9)
It is convenient to express the sink rate in terms of \(V_{0}\), the optimal max-range glide velocity in still air. The max-range-optimal dynamic pressure in still air, subject to the small FPA approximation, is given by \(q_{0}=\frac{mg}{S}\sqrt{\frac{K}{C_{D0}}}\). Therefore, \(V_{0}\), the optimal glide velocity in still air, is:
\[V_{0}=\sqrt{\frac{2mg}{\rho S}\sqrt{\frac{K}{C_{D0}}}}\] (A.10)
The sink rate can now be expressed in terms of \(V_{0}\) and \(\phi\) by substituting Equation (A.10) and \(n\cong n(\phi)\) into the sink rate Equation (A.9). We denote the sink rate function as \(f_{0}(V,\phi)\):
\[\dot{Z}=f_{0}(V,\phi)=K_{SR}\left(\frac{V^{4}+n(\phi)^{2}V_{0}^{4}}{V}\right)\] (A.11)
where \(K_{SR}=\frac{\rho SC_{D0}}{2mg}\) and \(n(\phi)=\frac{1}{\cos(\phi)}\). In straight glide segments \(f_{0}(V,0)\) defines the running cost of our problem. Minimizing the cumulative running cost constitutes our optimization objective.
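The quantities defined in Eqs. (A.7)-(A.11) translate directly into a few small helper functions. The sketch below assumes SI units and uses the symbol names of this appendix; it is an illustration, not part of the original implementation.

```python
import math

def load_factor(phi):
    """n(phi) = 1 / cos(phi), Eq. (A.7) under the small-FPA approximation."""
    return 1.0 / math.cos(phi)

def v_stall(n, m, rho, S, c_lmax, g=9.81):
    """Stall speed at load factor n, Eq. (A.8)."""
    return math.sqrt(2.0 * m * g * n / (rho * S * c_lmax))

def v_best_glide(m, rho, S, K, c_d0, g=9.81):
    """Max-range glide speed in still air V_0, Eq. (A.10)."""
    return math.sqrt(2.0 * m * g / (rho * S) * math.sqrt(K / c_d0))

def sink_rate(V, phi, v_0, k_sr):
    """Sink rate f_0(V, phi), Eq. (A.11), with K_SR = rho*S*C_D0/(2*m*g)."""
    n = load_factor(phi)
    return k_sr * (V**4 + (n * v_0**2)**2) / V
```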
## Appendix B Altitude Loss Due to Turns
Let us derive the expression for altitude loss as a result of heading change. We work in the Airmass Frame. Employing Equations (A.6), (A.7) and \(n=\frac{L}{mg}\), yields the heading turn rate:
\[\dot{\psi}=\frac{g}{V}\tan(\phi)\] (B.1)
Let us obtain the ALO maneuver. From the chain rule of derivation,
\[\dot{Z}=\frac{\partial Z}{\partial\psi}\dot{\psi}\implies\frac{\partial Z}{ \partial\psi}=\frac{\dot{Z}}{\dot{\psi}}\]
Now, employing \(n=\frac{1}{\cos(\phi)}\) and Eqs. (A.11, B.1), for \(\dot{Z}\) and \(\dot{\psi}\):
\[\frac{\partial Z}{\partial\psi}=\frac{2K_{SR}}{g}\left(\frac{V^{4}\cos(\phi)^{2}+V_{0}^{4}}{\sin(2\phi)}\right)\] (B.2)
As \(\frac{\partial Z}{\partial\psi}\) is monotonically increasing in \(V\), the ALO turn velocity is the lower bound, \(V_{stall}(n(\phi))\). Let us substitute Eq. (A.8) into Eq. (B.2) to obtain the minimal altitude-loss, \(\frac{\partial Z}{\partial\psi}^{*}\), at bank angle \(\phi\):
\[\frac{\partial Z}{\partial\psi}^{*}=\frac{2K_{SR}}{g}\left(\frac{V_{stall}(1) ^{4}+V_{0}^{4}}{\sin(2\phi)}\right)\] (B.3)
Further, the latter expression is clearly minimal for \(\phi=\frac{\pi}{4}\). Therefore, the ALO maneuver is to turn at constant velocity, \(V_{stall}(n(\frac{\pi}{4}))\), and a bank angle, \(\phi=\frac{\pi}{4}\). This result corresponds to the 'controls histories' in [33, Figures 9(c),11(c),12(c)], which were acquired by numerical integration of extremals. The velocity control changes to the stall limit while the load factor, \(n\), increases when maneuvering; however, in these examples, the load factor increases to more than \(n(\frac{\pi}{4})\) in order to meet the boundary constraints.
With constant \(\phi\) and \(V\), the aircraft performs the turn with a constant radius in the Airmass Frame, namely
\[R=\frac{V^{2}}{g\tan(\phi)}\] (B.4)
For a heading change from \(\psi_{0}\) to \(\psi_{1}\) (see Figure 16), the altitude loss during the turn itself, as follows from (B.3), is given by
\[(\Delta Z)_{a}=\frac{2K_{SR}}{g}\left(\frac{V_{stall}(1)^{4}+V_{0}^{4}}{\sin(2 \phi)}\right)|\psi_{1}-\psi_{0}|\] (B.5)
Fig. 16: The spatial geometry of turns in the Airmass Frame
In addition to the altitude-loss during the turn itself, we need to consider the effect of the velocity change before and after the maneuver. Note that under our assumptions, the flight velocity is a fast time-scale variable, so that the aircraft can change its velocity instantaneously when it enters and exits the turn maneuver. This variation in kinetic energy therefore directly translates to variation in potential energy, leading to the following estimate for the total altitude loss:
\[\Delta Z^{*}=(\Delta Z)_{a}+\frac{1}{2g}\left(V_{g,1}^{2}-V_{g,0}^{2}\right)\] (B.6)
where \(V_{g,0}\) and \(V_{g,1}\), respectively, are the aircraft ground velocities before and after the turn maneuver.
## Appendix C Performance Analyses
We present here sample sensitivity analyses for the formulations of the previous appendix for the altitude loss due to the turn maneuver. We employ the aerodynamic model of a Cessna 172. We fitted a quadratic drag approximation to the model in [37]. The resulting aircraft model parameters are: \(C_{D0}\) = 0.0329, \(K\) = 0.0599 with mass \(m\) = 907 kg, S = 15.9793 m\({}^{2}\), \(V_{stall}(1)\) = 27.27 m/sec. We further used a standard atmosphere model with 15\({}^{\circ}\)C at sea level.
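As a quick consistency check of the listed parameters, the Appendix A formulas can be evaluated directly with these Cessna 172 values; a sea-level air density of roughly 1.225 kg/m\({}^{3}\) is assumed here, and the numeric result is an illustration rather than a value from the paper.

```python
import math

# Cessna 172 parameters from the fitted model above (Appendix C).
m, S = 907.0, 15.9793
c_d0, K = 0.0329, 0.0599
rho = 1.225                      # assumed sea-level density at 15 deg C
g = 9.81

k_sr = rho * S * c_d0 / (2.0 * m * g)                      # K_SR of Eq. (A.11)
v0 = math.sqrt(2.0 * m * g / (rho * S) * math.sqrt(K / c_d0))  # V_0 of Eq. (A.10), ~35 m/s
print(f"K_SR = {k_sr:.3e}, V_0 = {v0:.1f} m/s")
```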
_Turn Performance Analysis:_ Figure 17 depicts the altitude loss rate according to Eq. (B.2) as a function of the turn speed and bank angle. Figure 18 shows the turn radius per Eq. (B.4). We observe that indeed the optimal solution is to fly at stall limit and a bank angle of 45 degrees.
Fig. 17: Altitude loss vs. the turn velocity in the Airmass Frame
Fig. 18: Turn radius vs. the turn velocity in the Airmass Frame
_Comparison to a GPOPS optimal solution:_ We proceed to demonstrate our analytic solution for ALO maneuver vs. the numerical solution of GPOPS [38]. The GPOPS solver was given the full state Eqs. (A.1-A.6) without our assumptions of small glide angles. Ours is an analytical approximation, assuming a step change in velocity (and consequently in energy), whereas the GPOPS search is quasi-continuous.
Fig. 19 shows a good match between our solution and the optimal-turn trajectory obtained via GPOPS. Note that in our simulation we have assumed a step change in velocity vector direction (in Section IV we discuss optimal concurrent descent and turn maneuvering). GPOPS numerical scheme does not assume such a step-change in the V direction.
## Appendix D Flight Experiment
Towards a validation of the proposed model and algorithms, a flight experiment plan was devised. This Appendix describes briefly the initial experiment that was conducted.
We chose to flight-test our optimal algorithm on a Cessna 172 - to demonstrate both the optimal airstrip choice and the trajectory generation, and the following of this trajectory by pilot-in-distress.
A team of undergraduate students pursued this objective, supervised by Daniel Segal, and advised by Dr. Aharon Bar-Gill. Technical assistance was provided by the Technion CRML laboratory (Control, Robotics and Machine Learning) and its engineering team, headed by Mr. Koby Kohai. The following text was provided by A. Bar-Gill.
Fig. 19: GPOPS vs analytic solution 3D view trajectory with \(W_{Y}=-20\) and \(W_{X}=0\)
Fig. 20: System schematics — on-line algorithmic computation
The team has developed a dedicated simulation for testing the algorithm implementation. It involves flight modeling, generation of optimal trajectory towards the preferable airstrip and cues on a screen - for the pilot to track this trajectory. The offline simulation was then adapted, its software was embedded into the airborne PC and run in the lab as a hybrid simulation -- for debugging the algorithm implementation and firmware interfaces.
The simulated scenario is shown on screens of both the system portable PC and the pilot's dedicated display (used in course of our flight demo):
1. The right side of the pilot display depicts the dynamically-generated WPs (Way Points). Following these WPs, the pilot circumvents terrain elevations on his way to the goal airstrip. The algorithm repeatedly checks the validity of the originally-chosen optimal trajectory (green), with a black arrow pointing along it, vis-a-vis the red ones, which represent trajectories towards higher-weighing candidate airstrips. In the simulated scenario, the aircraft velocity vector tracks the green trajectory by following the dynamic Way Point, which runs along the trajectory, while periodically comparing with the backup trajectory (red).
2. The central, radar-like display screen: the black circle at its center symbolizes the aircraft, and the red points the candidate landing strips. The blue "disk" represents the instantaneous reachability envelope.
3. The pilot's guidance cues -- heading and roll to point at along-trajectory-running WP and pitch angle / Descent Rate / V_air. The left side of the pilot display, the artificial horizon - "V"-symbol with "wings" stands for instantaneous pointing of the velocity vector of the aircraft. In order to track the optimal trajectory, computed by the researchers' algorithm, the pilot must align this "V with wings" symbol with the symbol, which comprises two yellow triangles.
_The overall system:_
* VectorNav unit, onboard computer, pilot display, Client-Server based on UDP connection, next waypoint in shared file.
_Real-Time Inputs:_
* Position (Lon, Lat, Alt), Heading (Speed, Yaw), Wind Speed (North, East).
_Pre-Processing:_
* DTM -- high resolution elevation map array, sliced into 3 parts.
* CSV file, pre-calculation of parameters for quick access.
* Reachability Envelopes -- calculation yielding optimal altitude loss trajectories.
* CSV file, containing optional landing sites.
_Algorithm:_
* landing sites within the chosen area.
* the site with the highest attribute.
* W.P.-to-W.P. Guidance, following the Next Waypoint calculation.
Fig. 21: Pilot Dedicated Display
- using an artificial horizon.
At the airfield, the students' team installed the airborne system, and inputted into the system PC the locations of the candidate airstrips (East of Mount Tabor) and their respective weightings.
_The Flight Test Team (professional flight-test crew, volunteering their expertise):_ Test pilot, Yuval Dvir tracked the optimal trajectory cue on a dedicated display, Safety pilot, Mike Dvir maintained the glide-optimal flight velocity (value above Idle), and kept checking the sanity of the algorithm implementation.
_Flight test scenario - July 30, 2019:_ Assuming engine failure at altitude of 2200 [ft] and heading 100\({}^{\circ}\) West of Mount Tabor, and potential landing strips East of it, the algorithm has chosen the best landing strip available East of Mount Tabor and computed the optimal trajectory for the test pilot to track. And indeed, the pilot has circumvented Mount Tabor appropriately, from North, towards the virtual landing strip North-East of Mount Tabor, preferred over another virtual site South-East of Tabor. As a result, the concept was in-flight validated, implementing our real-time algorithm, and tracking the globally-optimal trajectory by the pilot.
|
2303.15324
|
Can Large Language Models design a Robot?
|
Large Language Models can lead researchers in the design of robots.
|
Francesco Stella, Cosimo Della Santina, Josie Hughes
|
2023-03-15T09:41:44Z
|
http://arxiv.org/abs/2303.15324v1
|
# Can Large Language Models Design a Robot?
###### Abstract.
Large Language Models can lead researchers in the design of robots.
Large Language Models (LLMs) [1], are revolutionizing the field of robotics, providing robots with the ability to understand and process natural language at a level previously thought impossible. These powerful AI tools have the potential to improve a wide range of tasks in robotics, including natural language understanding, decision making, and human-robot interaction. One of the key advantages of large language models is their ability to process large amounts of text data, such as instructions, technical manuals, and maintenance logs, and internalize an implicit knowledge containing rich information about the world from which factual answers can be extracted. In fact, the text you have just read was generated by the LLM ChatGPT-3 [2] when prompted "Can you write an introduction in a newsy style to the potential of large language models in robotics?".
Language models have long been used in robotics to translate natural language instructions into actions executable by robots [3][4], synthesize code from text prompts [5] and find relationships between different fields of knowledge. In light of these impressive capabilities, LLMs may now contribute to another bottleneck of robotics, design. Leveraging their emerging capabilities [6], LLMs can deliver a dialogue that enables, teaches, and guides humans in building a robot from scratch. These capabilities could fundamentally change the methodology by which we design robots, and could shift the role of humans from designer or engineer to technician. So, to what extent can ChatGPT-3 replace an engineer and design a robot?
To generate the first ChatGPT-3 designed robot we approach the task in a two step approach. In the first high-level phase, the computer and the human collaborate on a conceptual level, discussing ideas and outlining the specifications for the robot design while in the second phase the physical implementation of the design specifications takes place. As an example of this AI-driven design process, we consider the challenge of a human engineer driven by the desire to "help the world with robotics," as shown in Figure 1. The human operator starts by asking the LLM which are the future challenges for humanity and promptly gets an overview with a clear outline of the main hazards. The human can then select the option they are most interested in and narrow down the design space by asking for clarifications. This interaction can span multiple fields of knowledge and levels of abstraction, ranging from concepts to technical implementation. In this way, the human can spot new intersections between research fields, such as agriculture and robotics, and consider factors that are hardly part of the experience of an engineer by training, such as what is the crop that is economically most valuable to automate. By iterating this process, the LLM and the human converge to the technical design specifications of a robotic system.
Typically, in a computational design framework, the computer solves technical problems specified by the human. In this case, conversely the LLM proposes conceptual options to the human, who then selects the most appealing choice. In this sense, the LLM acts as the researcher, leveraging knowledge and finding interdisciplinary connections, while the human acts as a manager, providing direction to the design. The application is selected as an output of
this first part of the process, and a set of initial technical specifications are generated. This includes code, material, components, manufacturing method selection, and mechanism design. In the second, low-level phase of the design process, these directions need to be translated into a physical and functioning robot. Although LLMs can currently not generate entire CAD models, test code, or automatically fabricate the robot, recent advances have shown that AI algorithms can support the technical implementation of software [7], mathematical reasoning [8], or even shape generation [9]. Thus, we expect that, in the near future, AI-generated inputs will highly support a large set of technical tasks. However, in the foreseeable future, humans will remain mainly in charge of the technical implementation of the robotic solution. The human is therefore relegated to the technician role, polishing the code proposed by the LLM, finalizing the CAD, and fabricating the robot. This robot can then be tested in real-world scenarios, and a new conversation with the LLM can be used to iterate on the design process in light of experimental evidence. As an example of this second phase, Figure 2 displays the main outputs generated by the LLM and the real-world deployment of the AI-designed robotic gripper for crop harvesting.
From this exploration we can foresee different modalities of human-AI interaction and collaboration. At one extreme, the LLMs could provide all the input required for robot design, which the human follows blindly. The AI is the inventor, addressing human questions and providing 'creativity', technical knowledge and expertise, whereas the human deals with the technical implementation. This could indirectly foster transfer and democratization of knowledge, by enabling non-specialists to realize robotic systems. A more moderate, yet powerful approach is collaborative exploration between the LLM and the human, leveraging the ability of the LLM to provide interdisciplinary and wide ranging knowledge to augment the human's expertise. Finally, we can consider a third approach in which the LLM acts as a funnel, helping to refine the design process and providing technical input whilst the human remains the inventor or scientist involved in the process. This collaboration between AI and humans presents clear benefits and opportunities. By augmenting human knowledge with LLMs, this methodology removes the limits imposed by the learning process and supports the human in finding relevant connections between fields, making interdisciplinary research and reasoning more accessible. It can spur the curiosity of researchers, interactively teach new robotics engineers, and accelerate the design process. As seen in our demonstration, the relationship between human and AI may vary for different parts of the design process depending on the skill and expertise of the individual and the goal of the robotic design process. At the same time, the introduction of LLMs into the design of robots brings questions
Figure 1. On the left, the two phases of the design process: first the human and LLM discuss the specifics application and of the design and later the human implements them. On the right, a pictorial overview of the discussion, with the questions prompted by the human on top, and the options provided by the LLM below. The green color highlights the decision tree of the human, which gradually focuses the problem to match his goal.
regarding its potentially negative effects on scientific disciplines and engineering - a creative, interdisciplinary, and IP-creating process that currently relies on highly-skilled professionals. In this regard, it is pivotal to point out that LLMs should be regarded as an evolution of search engines, generating the "most probable" answer to a given prompt [10]. As such, it is debatable if they can develop creative solutions that substantially advance the robotics discipline beyond what is already known by the scientific community. But unlike search engines, LLMs can propose ways to integrate 'knowledge' and apply it to unseen problems, thus providing a potentially false impression that new knowledge is being generated. Moreover, we see another potential issue in the widespread use of LLMs in our field. As the same trained model is accessible to everybody, it could create a bias in researchers' focus toward solutions that the model statistically prefers. This way, it may hinder the exploration of new technological solutions. Finally, this ability of the LLM to apply and adapt prior experience to new problems could prevent humans from taking responsibility for the solutions developed [11], which could lead to dangerous outcomes and a lack of human creativity in the design process. This could prohibit and stagnate the advancement of new robotic technologies and designs. There are also significant societal and ethical implications resulting from human-AI interactions for robot design. LLMs could automate high-level cognitive design tasks, and have humans focusing on more technical jobs. This could redefine the set of skills that are required by an engineer, and change the education that engineers should receive. Finally, there are key issues regarding plagiarism, traceability and IP [12]. Can a design created via LLM be considered to be novel as it builds only on prior knowledge [13], and also how can this previous knowledge be referenced? Similarly, if human-AI collaboration leads to the creation of novel IP, is this not a function of the training data of the LLM? As this technology matures there are also longer term considerations including data-privacy, the frequency of retraining and how new knowledge should be integrated to maintain the usability and relevancy of this tool.
To conclude, the robotics community must identify how to leverage these powerful tools to accelerate advances and capabilities, yet doing so in an ethical, sustainable and socially empowering way. We must develop means of acknowledging the use of LLMs [14], and also being able to trace the lineage of the generation of designs from LLM. Looking forward, we strongly believe that LLMs will open many exciting possibilities and that they will be a force for good if opportunely managed. The design process could be fully automated by combining collaborating LLMs to ask and answer questions, with one helping to refine the other. This could also be augmented with automated fabrication to allow for a fully autonomous pipeline for the creation of bespoke and optimized robotic systems. Ultimately, it is an open question for the future of this field if these tools can be used to assist robot developers and leverage inter-disciplinary knowledge leading to new robotic capabilities, or does this lead to a long-term stagnation of the field, with lazy, unskilled engineers, relying on external computation to generate new knowledge?
Figure 2. An AI designed this robotic gripper.
|
2309.00014
|
Improving NeRF Quality by Progressive Camera Placement for Unrestricted
Navigation in Complex Environments
|
Neural Radiance Fields, or NeRFs, have drastically improved novel view
synthesis and 3D reconstruction for rendering. NeRFs achieve impressive results
on object-centric reconstructions, but the quality of novel view synthesis with
free-viewpoint navigation in complex environments (rooms, houses, etc) is often
problematic. While algorithmic improvements play an important role in the
resulting quality of novel view synthesis, in this work, we show that because
optimizing a NeRF is inherently a data-driven process, good quality data play a
fundamental role in the final quality of the reconstruction. As a consequence,
it is critical to choose the data samples -- in this case the cameras -- in a
way that will eventually allow the optimization to converge to a solution that
allows free-viewpoint navigation with good quality. Our main contribution is an
algorithm that efficiently proposes new camera placements that improve visual
quality with minimal assumptions. Our solution can be used with any NeRF model
and outperforms baselines and similar work.
|
Georgios Kopanas, George Drettakis
|
2023-08-24T16:30:54Z
|
http://arxiv.org/abs/2309.00014v2
|
# Improving NeRF Quality by Progressive Camera Placement for Free-Viewpoint Navigation
###### Abstract
Neural Radiance Fields, or NeRFs, have drastically improved novel view synthesis and 3D reconstruction for rendering. NeRFs achieve impressive results on object-centric reconstructions, but the quality of novel view synthesis with free-viewpoint navigation in complex environments (rooms, houses, etc) is often problematic. While algorithmic improvements play an important role in the resulting quality of novel view synthesis, in this work, we show that because optimizing a NeRF is inherently a data-driven process, good quality data play a fundamental role in the final quality of the reconstruction. As a consequence, it is critical to choose the data samples - in this case the cameras - in a way that will eventually allow the optimization to converge to a solution that allows free-viewpoint navigation with good quality. Our main contribution is an algorithm that efficiently proposes new camera placements that improve visual quality with minimal assumptions. Our solution can be used with any NeRF model and outperforms baselines and similar work.
CCS Concepts: \(\bullet\) Computing methodologies \(\rightarrow\) Computer graphics; Rendering; Active learning settings.
## 1 Introduction
In recent years, Neural Radiance Fields (NeRFs) [17] have emerged as a powerful approach allowing high-quality novel view synthesis for scenes captured with photos taken from many different viewpoints. These methods also provide an alternative to Multi-View-Stereo [16] solutions that are slow and fail to produce faithful visual and geometric reconstruction. For both MVS and NeRF, capturing a scene typically starts with users taking many photos or a video of the scene. Usually, users follow instructions to loop around an object a few times at different heights and to make sure to also capture top views [18], which we call _hemispherical_ capture. This works well for "object-centric scenes", i.e., scenes that have a main object that the users want to be able to view freely, while the rest is considered background. There has been little previous work on how to capture more general scenes such as rooms, buildings etc. that have no central point of interest, especially when the goal is to allow _free-viewpoint_ navigation in the environment. Users typically place the cameras based on their intuition and empirical knowledge about which camera placements usually work, often leading to the failure of the reconstruction and consequently forcing users to recapture the scenes in a costly and time-consuming trial-and-error process.
Intuitively, hemispherical capture works for object-centric scenes because it samples the space containing the object _uniformly_ both for camera positions and in the angular domain. This uniform coverage is a dense sampling of a complete radiance field, since rays from the camera centers through the pixels in each view frustum cover space uniformly. Such coverage provides multi-view information that is used in the optimization to disambiguate depth, allowing accurate reconstruction.
Achieving such uniform ray coverage both in positions and angles is much more challenging in the context of general scenes, where there is no single central object. With an infinite number of cameras, it could theoretically be possible to densely sample the light field, but in practice the number of cameras is limited and, in addition, given the geometry of the scene, cameras cannot be placed everywhere (i.e., inside objects). The problem we try to solve is: given a camera budget and physical limitations of space, how can we _efficiently_ choose the next camera that will allow the resulting ray sampling to be as close as possible to uniform in space and angle.
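Purely as an illustration of what "uniform in space and angle" could mean computationally, the toy score below bins camera-ray samples by spatial cell and azimuthal direction and reports how much of the scene and of the direction bins is covered. This is not the metric proposed in this paper (which is defined later); all names and the binning scheme are assumptions made for the sake of example.

```python
import numpy as np

def angular_coverage_score(origins, directions, grid_min, grid_max,
                           n_cells=8, n_dir_bins=16, n_samples=32):
    """Toy uniformity score: fraction of azimuthal direction bins seen per spatial cell.

    origins, directions: (R, 3) arrays of camera-ray origins and unit directions.
    The scene box [grid_min, grid_max] is split into n_cells^3 cells; each ray is
    sampled at n_samples points, and for every cell it crosses, the ray's azimuth
    bin is marked as covered. Returns (spatial coverage, mean angular coverage).
    """
    grid_min = np.asarray(grid_min, dtype=float)
    grid_max = np.asarray(grid_max, dtype=float)
    seen = np.zeros((n_cells, n_cells, n_cells, n_dir_bins), dtype=bool)
    t = np.linspace(0.0, float(np.linalg.norm(grid_max - grid_min)), n_samples)
    azimuth = np.arctan2(directions[:, 1], directions[:, 0])           # ray heading
    dir_bin = ((azimuth + np.pi) / (2 * np.pi) * n_dir_bins).astype(int) % n_dir_bins
    for o, d, b in zip(origins, directions, dir_bin):
        pts = o + t[:, None] * d
        idx = ((pts - grid_min) / (grid_max - grid_min) * n_cells).astype(int)
        inside = np.all((idx >= 0) & (idx < n_cells), axis=1)
        for i, j, k in idx[inside]:
            seen[i, j, k, b] = True
    observed = seen.any(axis=-1)
    spatial = observed.mean()                       # fraction of cells observed at all
    angular = seen.mean(axis=-1)[observed].mean() if observed.any() else 0.0
    return spatial, angular
```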
There has been little previous work on this problem; most methods that have been proposed require either modifications to the training of the NeRF model, making them unsuitable for generalization to NeRF variants other than the ones they were specifically designed for, or very expensive calculations based on the current state of the NeRF model. These properties make the process slow and cumbersome.
In our solution, we first develop a metric to evaluate uniformity in space and angle that is fast to evaluate. We then propose an algorithm that uses this metric to select the next best camera such that the overall distribution will be closer to uniform in positions and angle. We evaluate the metric and the algorithm on synthetic data and compare to baselines and previous work, demonstrating that our solution works well, and we also run our algorithm on a real dataset as a proof of concept. From a practical perspective, our algorithm can be used for automated capture using robotic or drone capture; we leave this as future work, but we discuss a practical future use case in Sec. 3.4.
In summary, our contributions are:
* The definition of an efficient metric for _observation frequency_ and _angular uniformity_ that can be computed on the fly during NeRF capture, without requiring additional images.
* An algorithm to quickly estimate reconstruction quality of a scene and that proposes the next camera placement that maximizes the improvement in quality of capture based on our metrics.
Our solution can work with any NeRF model without changes to the optimization loop, and only introduces a small performance overhead to the training. We performed extensive testing on synthetic scenes, and our method achieves the best quality against multiple baselines and other algorithms that we tried given a limited budget of cameras. We also present a first preliminary evaluation with real data, in which our method also performs well.
## 2 Related Work
In recent years a huge number of papers on Neural Radiance Fields (NeRFs) have been published; we start by reviewing only a few that are closely related to our method and design choices. Recent comprehensive surveys on NeRFs can be found in [17] and [20]. We then review camera selection for reconstruction, both traditional and for NeRFs.
### Neural Radiance Field Basics
Neural Radiance Fields were introduced by [16]. They fundamentally changed how we can reconstruct 3D scenes from 2D images by introducing a continuous volumetric representation of the scene, encoded in a Multi-Layer Perceptron (MLP) which we can optimize using Stochastic Gradient Descent (SGD) to fit the input images, solving the reconstruction with a data-driven optimization.
Follow-up work improved the reconstruction quality by allowing for better extrapolation when dealing with cameras observing objects from different distances [1] but also generalized the algorithms for unbounded and realistic scenes [1]. Most of the results demonstrated in these papers focus on object-centric camera placements which has become the "typical NeRF-style capture", i.e., a hemisphere of cameras around the object and looking directly at the center of the object.
Another line of work focuses on performance, both for fast training and fast rendering. Most solutions achieve good results by encoding the radiance field in voxel grids with limited spatial extent [15, 16, 17, 18] or point clouds [21]. For our experiments we build on top of Instant-NGP [15], since it currently offers the best quality/performance trade-off among NeRF methods. Instant-NGP uses a hash function to map a 3D hierarchical voxel grid of high-dimensional features to a compact 1D representation. This grid is later queried in an optimized way along each ray to produce the final color for each pixel. The optimization includes a very efficient _occupancy grid_ that marks voxels as occupied or free and provides an efficient way to skip empty space during ray marching.
Also, numerous papers try to reconstruct NeRFs from a very limited number of input views [16, 15, 17, 18]. These models could potentially benefit greatly from an optimal selection of cameras.
While all these algorithms significantly improve the state of the art, in the vast majority of cases they use datasets in which the cameras are placed on a hemisphere over a region of interest. This allows for good-quality reconstruction only in that specific region (typically an object of interest). It is not clear how one should place cameras for more complicated environments when allowing the user to navigate freely. Camera placement is an important factor that controls the final quality of the reconstruction and the ability to _navigate freely_ in the scene without artifacts. In this context we propose a solution that automates and standardizes the way NeRFs are captured, removing the burden of trial and error from the user.
### Camera placement for reconstruction
We discuss representative previous work in camera placement for reconstruction. In the vast majority of cases we assume that the scene is captured with photographs, and that the cameras of these photographs have been calibrated. Camera calibration is typically performed using Structure-from-Motion (SfM), using systems such as COLMAP [14].
#### 2.2.1 Traditional Reconstruction
Multi-View Stereo (MVS) is an offline process that recovers the 3D geometry from a set of images and is very computationally expensive. The user first captures the images, performs camera calibration and then runs MVS. After a few hours of computation, the user may come to realize that the images are not good enough for a good 3D reconstruction, requiring the scene to be recaptured from scratch. This is a tedious process, especially if accessing the capture site is difficult.
To improve this cumbersome process, the field of Next-Best-View (NBV) estimation [15, 16, 17] predicts the next view that will provide the most additional information to the reconstruction process, given a set of already captured views. In the field of volumetric reconstruction, [15] focuses on depth sensors and introduces a set of heuristics to estimate the next best view that maximizes the information gain of each newly acquired sample. While this work is inspiring, it lies outside the scope of optimizing a model from a set of data samples with SGD: it relies on a depth sensor that directly observes the geometry of the scene, whereas in neural radiance fields the geometry representation itself is optimized to fit the scene.
Other works on camera selection for MVS can be separated into heuristic-based methods [16, 19] and data-driven ones [10]. They mostly focus on estimating or predicting the uncertainty of the MVS reconstruction process without actually running it. In contrast to MVS, fast NeRF models [18, 19] open the door for a new approach to next-best-view estimation that allows online reconstruction and camera placement prediction, especially if camera calibration can be provided online by the capture device (e.g., an augmented reality helmet).
#### 2.2.2 Neural Radiance Fields
Automatic camera placement for Radiance Fields is an emerging topic of research. A popular approach is to modify the NeRF model so that it can predict its own uncertainty [16, 17, 18], which is later used in various ways to choose the views that maximize it. The uncertainty is modeled in two ways: either by converting the MLP that encodes the scene into a Bayesian MLP [16, 17, 18] that also predicts its own uncertainty, or by using the physical properties of the volumetric representation along a ray, based on the entropy of the density function [17]. All methods that use the NeRF model to predict uncertainty are computationally intensive, since they need many MLP evaluations for each candidate camera. In addition, it is hard to train an MLP that predicts its own uncertainty, which is why [16] uses a depth sensor to stabilize the training. Some methods [16] focus on selecting views when only a very limited budget of cameras is allowed, and [16] presents a solution that evaluates the uncertainty based on the spread of density along a ray. This needs a full rendering step per candidate camera, which becomes increasingly costly as the space of candidates grows in unconstrained environments. None of the above methods is demonstrated on non-object-centric scenes, making them unsuitable for our context, which focuses on free-viewpoint navigation in complex scenes.
## 3 Method
The goal of our method is to dynamically suggest new camera positions such that we create a dataset that will achieve a good quality reconstruction. This can be used to guide a robotic agent or a human to acquire new images when capturing a NeRF. NeRFs trained on object-centric datasets achieve excellent quality when observing the object from a camera that matches the distribution of the training cameras, but easily break when moving away from them, see Fig. 3. We are interested in constructing a carefully designed placement of cameras that will allow the final user of the NeRF to navigate freely in the scene, while avoiding strong visual artifacts.
We want to generalize the simple assumptions of the object-centric capture style to more complex scenes and viewing scenarios, in particular when we allow the viewer to navigate freely.
### Observation Frequency and Angular Uniformity
The object-centric capture style of NeRF [19] and MipNeRF [16] has two main properties. First, all cameras observe the object and, second, the cameras are distributed along different directions to cover the angular domain uniformly. If we constrain the user to view the scene from the hemisphere, this capture style naturally provides a good reconstruction, since it covers the space of all possible cameras uniformly.
We next provide a formal definition of this observation, and in particular a measure of _observation frequency_ and a measure of _angular uniformity_ of these observations.
Given a set of cameras \(\mathcal{C}\), a point \(p\) in space will be well reconstructed if it is observed often from the input cameras and if these cameras are distributed uniformly in the angular domain of directions. We next formalize this mathematically and generalize it for multiple points \(p\).
We define a function that describes how frequently each point is observed. For a point \(p\) we define the frequency \(O_{f}(p)\) of observation as follows:
\[O_{f}(p)=\frac{\sum_{i=0}^{N}\mathds{1}_{obs}(C_{i},p)}{N} \tag{1}\]
where \(\mathds{1}_{obs}\) is an indicator function that is 1 if point \(p\) lies inside the frustum of camera \(C_{i}\), and \(N\) is the total number of cameras. This equation describes a simple relationship between cameras and points in space: if all the available cameras observe a point, then \(O_{f}(p)=1\), while if no camera observes it, \(O_{f}(p)=0\).
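As a minimal sketch (not the paper's code), Eq. (1) can be evaluated by testing each camera's frustum against \(p\); the camera attributes used here (`world_to_cam`, intrinsics `fx, fy, cx, cy`, image size, near/far planes) are an assumed interface, and the camera is assumed to look along its +z axis.

```python
import numpy as np

def in_frustum(cam, p):
    """Indicator 1_obs(C_i, p): True if p is in front of the camera and
    projects inside the image. The camera interface is an assumption."""
    q = cam.world_to_cam @ np.append(p, 1.0)     # point in camera coordinates
    if q[2] <= cam.near or q[2] >= cam.far:      # behind camera / outside depth range
        return False
    u = cam.fx * q[0] / q[2] + cam.cx            # pinhole projection
    v = cam.fy * q[1] / q[2] + cam.cy
    return 0.0 <= u < cam.width and 0.0 <= v < cam.height

def observation_frequency(p, cameras):
    """O_f(p) of Eq. (1): fraction of cameras whose frustum contains p."""
    return sum(in_frustum(c, p) for c in cameras) / len(cameras)
```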
For the directions of the observations, we next define a metric to measure _angular uniformity_, since more uniform angular distribution of observed directions results in better resulting visual quality.
We denote by \(f_{p}\) the distribution, in the angular domain, of the directions of the cameras that observe a point \(p\), and by \(u\) the uniform distribution over the same angular domain. To determine the quality of the angular distribution of cameras, we compute the total variation distance between the two distributions:
\[TV(f_{p},u)=\frac{1}{2}\sum_{\theta,\phi\in\Omega}|f_{p}(\theta,\phi)-u( \theta,\phi)| \tag{2}\]
where \(u\) is the uniform PDF. We construct the piecewise-constant PDF \(f_{p}^{\mathcal{C}}\) in the angular domain by computing the histogram of the directions of the cameras that observe point \(p\). Every bin of the histogram contains the number of cameras that observe this point from the solid angle corresponding to the bin. Similarly to Eq. (1), the angular metric is 0 if point \(p\) is observed from a uniform distribution of directions, while it approaches 1 as the distribution moves further from uniform.
We provide a visual illustration of this process in Fig. 2, showing the histograms of \(f_{p}\) and \(u\) in bins with polar coordinates. The directions are represented in polar coordinates, which are not area preserving, so we weight the bins of the histogram based on the surface area of each bin.
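A possible implementation of the angular term, reusing the frustum test sketched above; the bin resolution and the choice to fold the area weighting into the reference distribution \(u\) (rather than into the histogram) are our assumptions, not details taken from the paper.

```python
def angular_uniformity_tv(p, cameras, n_theta=8, n_phi=16):
    """TV distance of Eq. (2) between the empirical direction PDF at p and
    a uniform PDF, with bins weighted by their spherical surface area."""
    dirs = np.array([c.center - p for c in cameras if in_frustum(c, p)])
    if len(dirs) == 0:
        return 1.0                                   # unobserved point: maximally non-uniform
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    theta = np.arccos(np.clip(dirs[:, 2], -1.0, 1.0))            # polar angle
    phi = np.mod(np.arctan2(dirs[:, 1], dirs[:, 0]), 2 * np.pi)  # azimuth
    hist, t_edges, _ = np.histogram2d(theta, phi, bins=[n_theta, n_phi],
                                      range=[[0, np.pi], [0, 2 * np.pi]])
    # Solid angle of each bin: (cos(theta_0) - cos(theta_1)) * delta_phi.
    d_phi = 2 * np.pi / n_phi
    area = (np.cos(t_edges[:-1]) - np.cos(t_edges[1:]))[:, None] * d_phi * np.ones((1, n_phi))
    f_p = hist / hist.sum()          # empirical PDF over bins
    u = area / area.sum()            # uniform-on-the-sphere PDF over bins
    return 0.5 * np.abs(f_p - u).sum()
```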
### Estimating Reconstruction Quality
We define the area where we want to estimate the reconstruction quality, and for simplicity we use an axis-aligned bounding box to delimit it. This is the area that the user wishes to observe; we refer to it as \(\mathcal{B}\).
Ideally, we would like to evaluate the quality of reconstruction of every point in \(\mathcal{B}\). In practice, however, we discretize the problem by constructing a regular grid in \(\mathcal{B}\) with a resolution of \(32^{3}\), which we refer to as \(\mathcal{B}_{32}\); we evaluate reconstructability on the _nodes_ of the grid. In this discrete setting it is easier to measure the total reconstructability \(E\) of \(\mathcal{B}\) given the set of cameras \(\mathcal{C}\), by summing over all the nodes \(p\) of the grid:
\[E(\mathcal{C},\mathcal{B}_{32})=\sum_{p\in\mathcal{B}_{32}}\left[\left(1-TV(f_{p}^{\mathcal{C}},u)\right)+O_{f}(p)^{\gamma}\right], \tag{3}\]
where \(f_{p}^{\mathcal{C}}\) is the empirical PDF in the angular domain for point \(p\) and set of cameras \(\mathcal{C}\), \(u\) is the uniform PDF in the same domain, and \(\gamma\) is a nonlinear scaling factor that modulates how much more important it is to observe points that have rarely been observed compared to points that have already been observed frequently.
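Combining the two per-node quantities gives the score of Eq. (3); this sketch simply reuses the functions above, and the value of \(\gamma\) shown is an arbitrary placeholder rather than the setting used in the paper.

```python
def total_reconstructability(cameras, grid_nodes, gamma=0.5):
    """E(C, B_32) of Eq. (3), summed over the nodes of the 32^3 grid."""
    return sum((1.0 - angular_uniformity_tv(p, cameras))
               + observation_frequency(p, cameras) ** gamma
               for p in grid_nodes)
```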
Our formulation has several advantages. First, it is easily interpretable by a user, making it easy to modify and to specify regions that have more importance than others. In the limiting case of a single point, with \(\mathcal{C}\) constrained to a hemisphere, our method reduces to the typical "object-centric" capture setup of NeRF and MipNeRF/360. Second, a key advantage is that our camera proposal does not require additional image acquisition to estimate reconstruction quality, unlike other methods, e.g., those based on uncertainty estimation [20]. Our combined observation frequency and angular uniformity metrics can be seen as a proxy for uncertainty, while being relatively cheap to compute.
### Optimization
Our goal is to find the set of cameras \(\mathcal{C}\) that maximize the quantity in Eq. (3):
\[\operatorname*{argmax}_{\mathcal{C}}E(\mathcal{C},\mathcal{B}_{32}) \tag{4}\]
To do this we optimize the NeRF model while we choose the new cameras. Cameras can only be placed in empty space, and some NeRF models, including Instant-NGP [17] which we use, provide an occupancy grid that is used to skip empty space while rendering. If the implementation does not provide one, it is straightforward to compute it on the fly by sampling 3D space, storing the density, and using a threshold to binarize the values. We use the occupancy grid to place candidate cameras in free space. For simplicity, we also constrain the cameras to lie inside \(\mathcal{B}\).
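If the chosen NeRF backend does not expose an occupancy grid, the on-the-fly construction described above could look roughly as follows; `density_fn`, the resolution, and the threshold `tau` are illustrative placeholders, not values from the paper.

```python
def build_occupancy_grid(density_fn, bbox_min, bbox_max, res=128, tau=0.01):
    """Sample the NeRF density on a regular grid and binarize it with a threshold."""
    axes = [np.linspace(bbox_min[i], bbox_max[i], res) for i in range(3)]
    xs, ys, zs = np.meshgrid(*axes, indexing="ij")
    pts = np.stack([xs, ys, zs], axis=-1).reshape(-1, 3)
    sigma = density_fn(pts)                       # queried density at each grid point
    return (sigma > tau).reshape(res, res, res)   # True = occupied, False = free
```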
At the beginning of a run we have neither images nor a trained occupancy grid, and one needs the other to initialize the process. To overcome this, we ask the user to define a bounding box in the scene that is empty, which we call \(\mathcal{B}^{f}\). This allows us to sample cameras safely so we can start the process. This is somewhat of a chicken-and-egg problem, since we need enough of an initial reconstruction
Figure 2: To compute the distance of the empirical distribution in the angular domain of cameras that observe a point \(p\), we assign the cameras to bins based on their direction from point \(p\) into a histogram in polar coordinates. We then convert this to a PDF, for which we account for the non-uniform surface area of the spherical coordinate system. Then we use the total variation distance in Eq.(2) to get the value for node point \(p\).
to have a coordinate system in which to define this initial box \(\mathcal{B}^{f}\). In a real-world scenario, the user would simply take 20 photos of the scene - evidently from free space - that initialize the reconstruction and the coordinate system, and implicitly define \(\mathcal{B}^{f}\). For the synthetic examples we used for evaluation, we have the coordinate system beforehand, and we defined \(\mathcal{B}^{f}\) manually.
Now that we know how to sample random cameras safely, we want to maximize the quantity in Eq. (3). We use a greedy maximization strategy: every 250 training iterations we acquire a new camera given the set of already chosen cameras \(\mathcal{C}_{k}\). We do this by sampling a set of \(N=1000\) candidate cameras that lie inside \(\mathcal{B}\) (or inside \(\mathcal{B}^{f}\) for the first 20 cameras). Each camera gets a random direction, and we filter out all cameras whose center lies in occupied space or that observe occupied space from too close. We then compute \(E(\mathcal{C}_{k}\cup c_{n},\mathcal{B}_{32})\) for each of the \(N\) candidates, choose the camera with the highest score, and add it to \(\mathcal{C}_{k+1}=\mathcal{C}_{k}\cup c_{n}\). We repeat until we have acquired as many cameras as our budget allows; the algorithm is summarized in Alg. 1 and sketched below. Our current unoptimized implementation requires a few seconds to propose a new camera; further optimization would allow truly interactive use.
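A condensed sketch of one greedy step; `sample_free_camera` and `observes_occupied_too_close` stand in for the occupancy-grid checks described above and are hypothetical helpers.

```python
def next_best_camera(chosen, sample_box, occupancy, grid_nodes, n_candidates=1000):
    """Sample candidate cameras in free space and return the one that
    maximizes E(C_k + c, B_32) from Eq. (3)."""
    best_cam, best_score = None, -np.inf
    for _ in range(n_candidates):
        cam = sample_free_camera(sample_box, occupancy)     # random position + direction
        if observes_occupied_too_close(cam, occupancy):     # reject degenerate views
            continue
        score = total_reconstructability(chosen + [cam], grid_nodes)
        if score > best_score:
            best_cam, best_score = cam, score
    return best_cam
```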
### Future Practical Usage Scenario
The method above provides all the elements for online camera selection that can be used in future work by a robotic or drone-based system. In this paragraph we describe how such a system could function to motivate the utility of our results.
As discussed above, a user would first take 10-20 photos of the environment for an initial camera calibration, providing a reference frame, and allowing the definition of an initial box \(\mathcal{B}^{f}\). In a fully operational system, we would then run a fast NeRF such as InstantNGP [10] that reconstructs a first approximation of the scene volumetric representation in a few seconds. We then would run our algorithm to choose the next camera, and move the robot or drone to the next position; the new capture is then incrementally added to the NeRF optimization until the budget of cameras is reached. Given the latency of moving the capture agent and the fact that we need a few training iterations between the two captures, a slightly optimized implementation of our algorithm is perfectly suited to such a scenario, since it can provide the next best camera faster than the agent can actually move to the next position. As a consequence, a full system using our algorithm would allow automatic and high quality capture with a small number of photos, both reducing capture time and optimizing resulting image quality.
## 4 Results & Comparisons
Evidently, the goal of our method is to provide guidance for a human or robotic acquisition system while capturing a NeRF. Designing the user interface for human guidance or interfacing with an automated acquisition system are complex tasks that we leave for future work. Instead, we provide a thorough evaluation of synthetic scenes, in which image acquisition is achieved simply by rendering a new image from the camera proposed by our system. We also provide a preliminary evaluation of our method on a real capture, by capturing a large number of views that we can then sample.
This system was implemented by interfacing InstantNGP [10] with Blender's python API [13] and the Cycles renderer2. We extended the python interface of InstantNGP to allow us to query the occupancy grid efficiently, and we linked Instant-NGP with Blender's python environment. We will provide all source code and data at [https://gitlab.inria.fr/fungraph/progressive-camera-placement](https://gitlab.inria.fr/fungraph/progressive-camera-placement). The full pipeline, which allows complex synthetic scenes to be used with NeRF systems such as InstantNGP, is a powerful tool in itself and was very useful for this project. We hope it will also be helpful to others experimenting with NeRFs in full, realistic scenes.
Footnote 2: [https://www.cycles-renderer.org/](https://www.cycles-renderer.org/)
### Evaluation on Synthetic Scenes
For the first set of synthetic scene comparisons, we evaluate our method against two baseline camera placement strategies: Hemisphere, where we place the cameras on a hemisphere around an object of interest4 in the scene (this is the standard NeRF [10] capture style), and Random, where we place the cameras at a random position with a random orientation while making sure that the camera is not placed in occupied space (i.e., not inside objects); see also Sec. 3.3.
Footnote 4: We manually select a point and a radius for the hemisphere such that it makes sense for that specific scene configuration, e.g., a table with vases, etc.
As discussed in Sec. 2, there are few methods that treat our specific problem; of all NeRF camera selection methods, only ActiveNeRF [12] provides code, and we thus include a comparison to this approach. We use the authors' implementation, which is based on an implementation of the original NeRF [10] method. To allow a best-effort fair comparison we used their code to extract the cameras, and then trained the same InstantNGP [10] model as we did with all other baselines and our method.
\begin{table}
[Tabular data omitted: the extracted table content is garbled. Rows correspond to the capture strategies Hemisphere, ActiveNeRF, Random, and Ours; columns report PSNR on the Hemisphere, Random, and Ours test sets plus the per-scene average, for each of the five scenes. Ours attains the best per-scene average PSNR on every scene.]
\end{table}
Table 1: Per-scene quantitative evaluation of our method. We provide the PSNR for each test set separately and the total average of each algorithm for each scene.
Figure 3: Images from our sampling test set. We present a visual comparison to the baselines (Random, Hemisphere) and ActiveNeRF. The first column shows the ground truth image. The scenes shown are Livingroom1, Livingroom2, kitchen5, office6 and office9.
To extract the cameras we pre-render 1000 random cameras, which act as the pool from which ActiveNeRF can choose. The ActiveNeRF implementation is very computationally and memory intensive, so we tuned it to start from 20 random cameras and to choose 8 cameras every 50k iterations. These 8 cameras are the best cameras chosen by their algorithm from a random subset of 100 out of the 1000 cameras.
We used 5 synthetic scenes modeled by professional artists to represent realistic indoor environments4. For each scene we construct a training set corresponding to each one of the algorithms we want to evaluate and multiple test sets that provide a good overview of the total quality throughout the scene.
Footnote 4: The scenes are available for purchase at www.evermotion.com and are compatible with the Blender Cycles renderer.
Our test sets contain a total of 150 views that are distinct from the training views. The test sets are split into 3 subsets: 1) 50 random views using the Hemisphere capture style, 2) 50 views using Random, and 3) 50 views using our sampling process. The purpose of the multiple test sets is to evaluate each algorithm fairly across different camera distributions, such that the quantitative metrics capture the total quality throughout the scene. This avoids bias towards any one of the aforementioned distributions and allows a more comprehensive overall evaluation of our algorithm. We provide all the renderings for all views and all algorithms in our supplemental material. We also rendered free-viewpoint paths, which we provide in the supplemental video.
As we observe in Fig. 3 and Fig. 6, the standard hemisphere captures of NeRF and MipNeRF fail to generalize to test sets coming from other distributions. Hemisphere views have a specific structure, and objects that lie outside of the hemisphere are observed only from constrained angular directions, which allows the NeRF model to overfit to the set of input cameras. While the random capture significantly outperforms the hemisphere capture in the generalized setting, we still see significant artifacts because of the unstructured nature of the dataset. In theory, an infinite number of random views should allow for perfect reconstruction, but this is impractical since it is labor and computationally intensive.
As we can see from the quantitative and qualitative evaluation (Figs. 3 and 6), ActiveNeRF [4] does not always successfully choose the cameras that would allow for a good reconstruction. This happens for several reasons. First, the original NeRF model it uses has a hard time converging in complicated scenarios that are not similar to the synthetic Blender dataset, and it becomes even harder with the Bayesian model of ActiveNeRF, which needs some notion of the scene to propose good camera placements; in complicated scenes this is challenging starting from just 20 initial cameras. Second, ActiveNeRF chooses cameras that maximize the uncertainty of the specific model it is training, and there is no guarantee that this uncertainty metric generalizes to other NeRF models. Finally, its memory and speed requirements do not allow for a huge number of candidate cameras, in contrast to our method.
We can see that our capture style outperforms all other algorithms across the scenes and views both quantitatively in Tab. 1 and qualitatively at Fig. 3 and Fig. 6. This supports our hypothesis that if we observe all parts of the scene while maintaining a uniform set of directions we will get an ideal reconstruction.
We also perform a visual analysis to provide insight into how
Figure 4: "Floorplan" visualization of the separate terms that make up our energy function in Eq. (3). The values have been averaged along the Y axis.
Figure 5: Test-set PSNR as a function of the number of cameras in the training set, for two sampling algorithms, Random and Ours. The PSNR is evaluated across all 150 images of the test set for the scene Office6.
different methods score against the energy function in Eq. (3). In Fig. 4 we provide a visualization of the scores of each of the two terms of Eq. (3). We see that our method clearly observes all the nodes more often than the other baselines, and we achieve better angular coverage for each node.
We also show in Fig. 5 how the quality obtained with our camera sampling improves on the test sets, compared to random cameras, as more and more cameras are introduced.
### Preliminary Real Scene Evaluation
As discussed earlier (Sec. 3.4), we leave the actual integration of our method into a full capture system as future work. Such a system would require either a user interface (e.g., on a phone) or interfacing with a robotic capture system (e.g., a drone). However, it is instructive to see how well our method works on real data, so we present a very preliminary test on a real scene. Since we lack a capture agent, we instead _simulate_ the ability to select new cameras as a proof of concept. Specifically, we take approximately 1300 photos (removing every 14th image to create a held-out test set), simulating "random coverage" of the scene. We then use our algorithm to select 200 cameras from this pool of randomly distributed images. This is evidently a very preliminary test, but the results shown in Fig. 7 show that our method performs significantly better than random selection.
## 5 Conclusions
We presented an efficient method for selecting cameras for NeRF capture in complicated environments, targeting free-viewpoint navigation. Our key contributions are the introduction of the angular and coverage metrics, and our fast optimization to propose the next best camera for NeRF reconstruction. Our method outperforms the baselines and one previous method in overall performance; it is also faster than other methods and adds no significant overhead over the baseline methods. An important attribute of our solution is that it is easily interpretable and can provide meaningful guidance and understanding to users without requiring additional images. Another benefit of the simplicity of our method is that it could be adapted to vary the importance of the scene spatially; we leave this as future work.
Our method is not without limitations. One issue is that we have not investigated whether our sampling is biased; if it is, then no matter how many cameras we sample, we might not reach a "perfect" reconstruction and visual quality. Also, even though the method is efficient, it would benefit from even faster performance to allow truly interactive capture.
There are numerous possibilities for future work. From a theoretical perspective, we are interested in studying other metrics of reconstruction quality in a more extensive and complete fashion. We are also very excited about the idea of integrating our approach in a mixed Augmented/Virtual Reality (AR/VR) context: for example we can guide an on-site (AR) user to take photos of a scene so that the remote VR user can very quickly be immersed in the same environment. Using our method in the context of drone capture would allow NeRF captures to be performed with high quality with little human intervention, rendering the approach much more useful and easy-to-use.
|
2310.02760
|
A Hybrid Quantum-Classical Approach to the Electric Mobility Problem
|
We suggest a hybrid quantum-classical routine for the NP-hard Electric
Vehicle Fleet Charging and Allocation Problem. The original formulation is a
Mixed Integer Linear Program with continuous variables and inequality
constraints. To separate inequality constraints that are difficult for quantum
routines we use a decomposition in master and pricing problems: the former
targets the assignment of vehicles to reservations and the latter suggests
vehicle exploitation plans that respect the battery state-of-charge
constraints. The master problem is equivalent to the search for an optimal set
partition. In our hybrid scheme, the master problem is reformulated in a
quadratic unconstrained binary optimization problem which can be solved with
quantum annealing on the DWave Advantage system. On large instances, we
benchmark the performance of the decomposition technique with classical and
quantum-inspired metaheuristics: simulated annealing, tabu search, and vector
annealing by NEC. The numerical results with purely classical solvers are
comparable to the solutions from the traditional mixed integer linear
programming approaches in terms of solution quality while being faster. In
addition, it scales better to larger instances. The major advantage of the
proposed approach is that it enables quantum-based methods for this realistic
problem with many inequality constraints. We show this by initial studies on
DWave hardware where optimal solutions can be found for small instances.
|
Margarita Veshchezerova, Mikhail Somov, David Bertsche, Steffen Limmer, Sebastian Schmitt, Michael Perelshtein, Ayush Joshi Tripathi
|
2023-10-04T12:14:56Z
|
http://arxiv.org/abs/2310.02760v1
|
# A Hybrid Quantum-Classical Approach to the Electric Mobility Problem
###### Abstract
We suggest a hybrid quantum-classical routine for the NP-hard _Electric Vehicle Fleet Charging and Allocation Problem_. The original formulation is a Mixed Integer Linear Program with continuous variables and inequality constraints. To separate inequality constraints that are difficult for quantum routines we use a decomposition in master and pricing problems: the former targets the assignment of vehicles to reservations and the latter suggests vehicle exploitation plans that respect the battery state-of-charge constraints. The master problem is equivalent to the search for an optimal set partition. In our hybrid scheme, the master problem is reformulated in a quadratic unconstrained binary optimization problem which can be solved with quantum annealing on the DWave Advantage system. On large instances, we benchmark the performance of the decomposition technique with classical and quantum-inspired metaheuristics: simulated annealing, tabu search, and vector annealing by NEC. The numerical results with purely classical solvers are comparable to the solutions from the traditional mixed integer linear programming approaches in terms of solution quality while being faster. In addition, it scales better to larger instances. The major advantage of the proposed approach is that it enables quantum-based methods for this realistic problem with many inequality constraints. We show this by initial studies on DWave hardware where optimal solutions can be found for small instances.
quantum annealing, hybrid quantum-classical algorithms, electric vehicles, combinatorial optimization, column generation
## I Introduction
Many industrial problems related to _logistics_, _planning_, _scheduling_, or _resource allocation_ can be formulated as NP-hard optimization problems over discrete and continuous variables [1]. Employing efficient algorithms may significantly reduce operational costs and increase profits - therefore, the search for computational advantage is crucial for competitiveness. In practice, operational problems are usually solved with off-the-shelf commercial solvers such as Gurobi; however, when the search for the exact solution is time-consuming we can use _heuristic_ algorithms to get good results in a reasonable time.
The emergence of _quantum hardware_ offers new opportunities for the design of efficient heuristics [2]. Quantum heuristics leverage the laws of quantum mechanics to improve _the approximation gap_ or reduce _the time-to-solution_ [3]. For instance, the _tunneling effect_ in the navigation of the energy landscape allows _quantum annealing_ [4] to find optimal solutions under a certain condition [5]. This condition - namely the polynomially-bounded spectral gap - is nevertheless difficult to guarantee; therefore, an experimental evaluation on difficult industrial problems is _necessary_ to decide on the practical potential of the heuristic.
Quantum algorithms adapted to near-future quantum hardware solve problems in so-called _QUBO_ (Quadratic Unconstrained Binary Optimization) formulations [2], i.e. all problem specifications are captured in a quadratic objective function over binary variables. At first glance, the NP-hard QUBO is a powerful model: traditional discrete problems [6] as well as some simplified industrial use-cases [7, 8] can be represented in QUBO without a huge resource overhead.
However, most real-world industrial problems exceed this simple framework: for instance, in the MILP (Mixed Integer Linear Program) model the solution space is typically restricted by many _inequality and equality constraints_ and variables are not necessarily binary. In theory, if all variable domains are discrete and bounded, the MILP can be reformulated as QUBO: discrete variables are encoded as binary string, _slack variables_ transform inequality constraints into equalities, and quadratic _penalty terms_\(M(a_{j}^{T}x-b_{j})^{2}\) (where \(M\) is a large number) in the objective function enforce linear equality constraints \(a_{j}^{T}x=b_{j}\).
However, these obvious transformations lead to a large overhead in the number of variables in QUBO [8]. Thus, the size of the problems tractable on the near-future quantum hardware becomes strictly limited. In addition, penalty terms negatively impact the performance of quantum routines due to the additional energy scale separating feasible and infeasible solutions [9, 10]. To address this obstacle, the works [11, 12] suggest to restrict the quantum evolution to the feasible
subspace - but the protocol is difficult to put into practice. Alternatively, a hybrid augmented Lagrangian method [13] may perform well when the formulation has only a few constraints.
We believe that a balanced interaction between quantum and classical routines is the most promising way to enable quantum enhancement for complex optimization problems [14]. In this work, we introduce a _hybrid approach_ that delegates some operational constraints to a classical routine while leaving a difficult selection problem to the quantum heuristic.
We consider the problem of managing a fleet of electric vehicles (EV) previously considered in [15, 16]. In this problem, we search for an exploitation plan for a set of EVs over a discrete time horizon \(T=\{0,\ldots,t_{\max}\}\). We aim to fulfill as many reservations as possible with cars from our fleet. When cars are not used we can recharge them. The work [15] proves the NP-hardness of the problem and suggests a MILP formulation that is further optimized with the Gurobi solver1. The solver fails to find optimal (or even good) solutions in one hour already for instances with 10 vehicles over a 48-hour time horizon - motivating the exploration of heuristic approaches.
Footnote 1: gurobi.com
In the original MILP from [15] a set of _inequality constraints_ ensure that the state-of-charge (SoC) of each EVs battery is always non-negative and doesn't exceed the battery capacity. These inequality constraints are particularly challenging for quantum routines. We decompose the problem into _master_ and _pricing_ problems: the master problem coordinates the collective solution while the pricing suggests new charging and utilization plans for individual EVs. The master problem is equivalent to the NP-hard Set Partition problem [17] that can be naturally formulated as QUBO [18] - following [10] in our hybrid procedure we solve it with quantum annealing. The pricing problem deals with the charging schedule limited by the SoC-validity constraints. It uses a graph representation of possible individual actions at each time step, a path in the graph corresponding to a complete exploitation plan for one vehicle. By associating different weights to the edges of the graph in an iterative way we encourage the pricing problem to include or not a particular reservation in the plan.
We numerically evaluate our approach on realistic data2. As the actual quantum hardware can tackle the problems of relatively modest size, in addition to experiments on the _DWave Advantage 6.1 system_ for small instances (over 8-hour time horizon) we benchmark classical and quantum-inspired meta-heuristics on master problems for large instances (over 48-hour time horizon).
Footnote 2: [https://www.ac.tuwien.ac.at/research/problem-instances/#evfcap](https://www.ac.tuwien.ac.at/research/problem-instances/#evfcap)
Structure of the paperIn section II we introduce the problem, briefly recall the structure of the MILP formulation from [15], and present the decomposition on the master and pricing problems. In section III we present our hybrid approach and its potential applications. We report the results of our numerical experiments in section III and discuss the insights in section V.
## II Problem statement
In the Electric Vehicle Fleet Charge and Allocation Problem (EVFCAP) we consider a set \(V\) of \(n\) electric vehicles on a time horizon \(T\) of \(t_{\max}\) time steps, each of duration \(\Delta t\) (in our data \(\Delta t=15\)min). Each vehicle \(v\in V\) has the same battery capacity \(E^{cap}\) and an individual initial level of charge \(E_{v,0}\in[0,E^{cap}]\). Each reservation \(r\in R\) (\(|R|=r_{\max}\)) has a starting time \(T_{r}^{start}<T\), an ending time \(T_{r}^{end}\leq T\) and an expected energy consumption \(E_{r}^{res}\in[0,E^{cap}]\). When vehicles are not used, they can be charged from the grid with power bounded by \(p_{\max}\in\mathbb{R}^{+}\). The price of the grid energy varies in time and we denote its value at timestep \(t\) by \(c_{t}\in\mathbb{R}^{+}\). If a reservation is uncovered, i.e. not served by an EV from the fleet, it is fulfilled by a fuel car at cost \(c^{uncov}E_{r}^{res}\). Future costs are anticipated with a term \(\alpha(E^{cap}-E_{v,t_{\max}})\) that penalizes low EV charging levels at the end of the time period. The target is to find a schedule of minimal cost satisfying all operational constraints.
### _Compact formulation_
In [15] the problem is formulated as a Mixed Integer Linear Program (MILP). In what follows we refer to this MILP as _compact formulation_ (compact MILP). It has \((n+1)r_{\max}\) binary variables: \(x_{r,v}=1\) represent the assignment of the reservation \(r\) to the vehicle \(v\) and \(y_{r}=1\) (\(y_{r}=0\)) indicates that a reservation is not assigned (assigned) to any vehicle. Continuous variables represent charging powers of vehicle \(v\) at time \(t\) as \(p_{v,t}\in[0,p_{\max}]\). A multitude of constraints prevent conflicts, such as the assignment of two overlapping reservations to the same vehicle. In addition, for each vehicle inequality constraints ensure that the battery charging levels are within allowed bounds \([0,E^{cap}]\).
### _Extended formulation_
In this work, we focus on a slightly modified version of the original problem [15]: First, we restrict the charging powers to two discrete levels, \(p_{v,t}\in\{0,p_{\max}\}\) (instead of the continuous values in the original formulation). Second, we discretize the energy levels \([0,E^{cap}]\rightarrow\mathcal{E}=\{0,\ldots,i\Delta E,\ldots,i_{\max}\Delta E\}\) with equal spacing \(\Delta E\) between these levels. The initial SoC \(E_{v,0}\) and the energies for reservations \(E_{r}^{res}\) are rounded to the next lower and upper levels in \(\mathcal{E}\), respectively. Finally, we ignore "free" photovoltaic energy during the optimization process and simply integrate it into the final solution.
This modified problem formulation still captures all the relevant aspects and the complexity of the original problem setting.
A feasible exploitation scenario for one EV, i.e. the assignment to reservation(s) and the charging schedule, can be represented as a path on a _weighted directed acyclic graph_\(G=(\mathcal{V},\mathcal{A})\) (see Fig. 1). Nodes \(\mathcal{V}\) correspond to possible charge levels \(\mathcal{E}\) at different time steps \(\{0,\ldots,T\}\); two auxiliary nodes _source_ and _sink_ help to encode the selection of a vehicle from the fleet and the value of the final SoC respectively. Arrows represent possible actions at different
timesteps: selection of a particular vehicle, charging, allocation to a reservation, or nothing.
The number of nodes depends on the discretization step \(\Delta E\) and the number of timesteps \(|T|\), while the number of arrows is linear on the size of the problem: we add one arrow from the source per vehicle and at most \(|\mathcal{E}|\) arrows for each reservation.
The EVFCAP problem can then be decomposed into two subproblems: i) the _pricing problem_ which suggest new promising exploitation scenarios, i.e. the paths in the graph ii) the _master problem_ which selects one feasible exploitation scenario for each EV in the fleet from the subset of feasible exploitation scenarios. Each exploitation scenario comes with a cost, and the global objective is to minimize the total costs of selected scenarios plus the cost of unsatisfied reservations.
### _Master problem_
We introduce one binary variable \(\lambda_{p}\in\{0,1\}\) for each path \(p\in\mathcal{P}\) from the source node to the sink node in the graph \(G\). The cost of the path \(c_{p}\) is a sum of the costs of all arrows in it: it corresponds to the sum of the grid energy costs (internal arrows) with the future costs (the arrow going to sink). In addition, as in the compact MILP, we take one variable \(y_{r}\in\{0,1\}\) per reservation \(r\in R\); \(y_{r}=1\) implies that the reservation \(r\) is unsatisfied.
The master problem may be formulated as Set Partition:
\[\min\sum_{p\in\mathcal{P}}c_{p}\lambda_{p}+c^{uncov}\sum_{r\in R }E_{r}^{res}y_{r} \tag{1}\] \[\sum_{\begin{subarray}{c}p\in\mathcal{P}:\\ r\in p\end{subarray}}\lambda_{p}+y_{r}=1, \forall r\in R\] (2) \[\sum_{\begin{subarray}{c}p\in\mathcal{P}:\\ v\in p\end{subarray}}\lambda_{p}=1, \forall v\in V\] (3) \[\lambda_{p}\in\{0,1\}, \forall p\in\mathcal{P} \tag{4}\]
where
* \(r\in p\) in (2) means that for some \(E_{j}\) the arrow \(a_{r}:(E_{j},T_{r}^{start})\rightarrow(E_{j}-E_{r}^{res},T_{r}^{end})\) corresponding to the reservation \(r\) is in the path \(p\). The constraint (2) implies that the reservation can be satisfied at most once.
* In (3) the notation \(v\in p\) means that the arrow taken from the source node corresponds to the vehicle \(v\). The constraint (3) means that we have to select **precisely** one path for each vehicle (it can be trivial).
The Set Partition problem (alternately called Exact Set Cover) is NP-hard [17], as is its approximation within the factor \(\ln n\) [19]. Typical instances arising from the EVFCAP data are difficult for off-the-shelf solvers: for example, the Gurobi solver does not find optimal solutions within one hour for \(12\) out of \(30\) instances with \(t_{\max}=192\), \(n=20\) and \(r_{\max}=320\) from our dataset. On the other hand, no inequality constraints are present in this formulation, which allows for an efficient mapping to a QUBO form. Therefore, if quantum annealing on the QUBO formulation rapidly returns good-quality solutions, it can improve both the _gap_ and the _runtime_ of the proposed decomposition scheme.
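A sketch of this mapping: one binary variable per generated path and one per reservation, with the equality constraints (2)-(3) enforced as quadratic penalties of weight \(M\). The path interface (`.reservations`, `.vehicle`) and the dense matrix representation are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def _add_one_hot_penalty(Q, group, M):
    """Add M * (sum_{i in group} x_i - 1)^2, dropping the constant M."""
    for a in group:
        Q[a, a] += -M                       # x_i^2 - 2 x_i = -x_i for binaries
    for k, a in enumerate(group):
        for b in group[k + 1:]:
            Q[a, b] += 2.0 * M

def master_qubo(paths, costs, reservations, vehicles, E_res, c_uncov, M):
    """QUBO matrix for the master problem (1)-(4)."""
    n_p, n_r = len(paths), len(reservations)
    res_idx = {r: n_p + k for k, r in enumerate(reservations)}
    Q = np.zeros((n_p + n_r, n_p + n_r))
    for i, p in enumerate(paths):                       # path costs c_p
        Q[i, i] += costs[i]
    for r in reservations:                              # uncovered-reservation costs
        Q[res_idx[r], res_idx[r]] += c_uncov * E_res[r]
    for r in reservations:                              # constraint (2)
        group = [i for i, p in enumerate(paths) if r in p.reservations] + [res_idx[r]]
        _add_one_hot_penalty(Q, group, M)
    for v in vehicles:                                  # constraint (3)
        group = [i for i, p in enumerate(paths) if p.vehicle == v]
        _add_one_hot_penalty(Q, group, M)
    return Q
```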
### _Pricing problem_
The number of paths \(|\mathcal{P}|\) in the graph \(G\) is exponential in the size of our instance. Therefore, we can't directly solve even _the relaxed version_ (where \(\lambda_{p}\in[0,1]\)). We use so-called **column generation**[20] originally introduced in [21] to circumvent this obstacle. In column generation, while solving the relaxation we do not consider all variables \(\mathcal{P}\) at once, but rather a restricted subset \(\mathcal{P}^{\prime}\subset\mathcal{P}\). We add new variables to \(\mathcal{P}^{\prime}\)**only if** they can improve the solution of the relaxation.
Promising variables are found in the _pricing routine_ that searches for violated cuts in the dual problem. Indeed, each variable in the primal problem (whether in \(\mathcal{P}^{\prime}\) or not) corresponds to a constraint in the dual problem, and by the duality theorem if the dual solution is feasible, then the primal solution is optimal [22].
In our case, the pricing is equivalent to the search of the _shortest path_ between _source_ and _sink_ nodes where edges corresponding to vehicle selection and reservations get costs determined by the dual solution of the restricted problem.
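Since the scenario graph is acyclic, the pricing step reduces to a single relaxation pass in topological order; the sketch below assumes the reduced costs (including the duals on reservation and vehicle-selection arcs) have already been placed on the edge weights.

```python
def price_shortest_path(topo_order, out_edges, source, sink):
    """Shortest source-sink path in the acyclic exploitation-scenario graph.
    out_edges[u] yields (v, w) pairs with reduced-cost weights w."""
    INF = float("inf")
    dist = {u: INF for u in topo_order}
    pred = {u: None for u in topo_order}
    dist[source] = 0.0
    for u in topo_order:                      # one pass suffices in a DAG
        if dist[u] == INF:
            continue
        for v, w in out_edges[u]:
            if dist[u] + w < dist[v]:
                dist[v], pred[v] = dist[u] + w, u
    path, u = [], sink                        # walk predecessors to recover the path
    while u is not None:
        path.append(u)
        u = pred[u]
    return dist[sink], path[::-1]
```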
## III Hybrid quantum-classical approach
We suggest a hybrid approach that uses a classical column generation technique to build an instance of the Set Partition
Fig. 1: **Graph of feasible exploitation scenarios.** For clarity we show arrows only for one internal node. The graph contains one node per discrete SoC level per time step _(black circles)_ and two additional nodes called _source_ and _sink_. Arrows correspond to different actions: charging _(blue arrow)_, allocating to a reservation _(green arrows)_ and nothing _(pink arrow)_. Only feasible actions are represented in the graph. We connect the source node to the initial SoC values \(E_{v,0},v\in V\) (one arrow for each vehicle). All nodes corresponding to the final timestep \(v_{i}=(i\Delta E_{i},t_{\max})\) are connected to the sink node. We assign a cost to each arrow _(black subscript)_. Arrows corresponding to the charging at a timestep \(T_{i}\) get the cost value of \(c_{i}p_{\max}\Delta t\). The arrows going to the sink node have the cost \(\alpha(E_{cap}-E_{i})\). All other arrows have zero cost.
problem, that is further transformed into a QUBO form and solved with quantum heuristics (see Figure 2).
In the first (classical) part we solve the _relaxed version_ of the master problem: we start from a restricted set of variables that guarantee the existence of a feasible solution (trivial plans) and iteratively add variables that may improve the relaxed solution. If the pricing problem fails to find a "promising" variable, the relaxation is solved to optimality. In such case we take all generated variables (\(\mathcal{P}^{\prime}\)) and consider an _integer_ master problem over them - this ILP is equivalent to a search of an optimal set partition.
In the traditional Branch & Price [20], a fractional solution for relaxation leads to the branching, and the variables may be regenerated in every node of the branching tree. The regeneration is necessary to find an exact optimum as an optimal _integer_ solution may involve variables that don't appear in the solution process for the _relaxed_ linear program. The quantum-assisted procedure presented in [10] integrates the quantum solver as a _primal heuristic_ in the traditional Branch & Price scheme.
In contrast, in our approach, the variables are generated only once in the so-called root master problem. Compared to Branch & Price we significantly reduce the running time at the cost of the optimality guarantees. In a nutshell, we obtain a heuristic method, where the column generation presents candidate exploitation plans for individual vehicles that are further combined in the master problem.
Our hybrid approach can be applied in the same contexts as the Branch & Price (or heuristic Branch & Price) when, in addition, the time-to-solution is an important performance metric. We recall that the Branch & Price is particularly suitable for complex planning and logistics problems where difficult (nonlinear) constraints restrict the set of possible solutions [23].
## IV Numerical results
### _Evaluation of the decomposed formulation_
In Figure 3 we compare the quality of solutions returned with the proposed decomposition scheme ("EF" for extended formulation) to the results found within one hour by the Benders-decomposition-based heuristic ("BDH") from [15]. As a baseline, we take the solution obtained by Gurobi within one hour on the compact MILP formulation (\(C^{MILP}\)). In this experiment, we aim to evaluate the relevance of the decomposition (disregarding the performance of quantum solvers), so we delegate the master problem to the Gurobi solver (version 9.5). We report the relative difference in the cost value, \((C^{h}-C^{MILP})/C^{MILP}\), where \(C^{h}\) is the value of the objective function in the solution returned by heuristic \(h\in\{\text{BDH},\text{EF}\}\).
We observe that on difficult instances the proposed EF approach leads to solutions of comparable quality while being significantly faster, except for the largest instances, where the solution time hits the one-hour (3600 seconds) limit. We recall that on instances with \(t_{\max}=192\) the Benders-decomposition-based heuristic _always_ runs for _one hour_, regardless of the values of the other parameters [15]. Moreover, for large instances with \(n=20\) vehicles the performance of the EF approach scales better, since the relative cost difference decreases with increasing \(r_{\max}\). However, this is most likely due to the degrading performance of the reference MILP approach rather than an improved performance of the proposed heuristic approaches.
### _Metaheuristics for the master problem_
On large instances (\(n=20\), \(r_{\max}=16n=320\) ) the Gurobi solver reaches the 1-hour time limit while solving the integer master problem (Figure 3, last panel). Therefore, we accelerate this step by moving from the exact Gurobi solver to the quantum annealing as well as to classical heuristic solvers.
Fig. 2: The Hybrid quantum-classical algorithm for the EVFCAP problem. The red node (solution of the integer master problem) is delegated to the quantum annealing. Dashed part shows the workflow of the traditional Branch & Price.
We test the performance of the classical metaheuristics _simulated annealing_[24] and _tabu search_[25] (Figure 4)3. We also benchmark the quantum-inspired vector annealer by NEC4 on our instances of the Set Partition Problem. The vector annealer performs the simulated annealing (on a vector supercomputer) but restricts local moves to the feasible subspace. We observe that this modification significantly changes the behavior of the metaheuristic: while the standard simulated annealing on the QUBO formulation finds better solutions with an increasing number of reservations (and, as a consequence, the number of constraints in the master problem), the opposite is true for the vector annealer.
Footnote 3: We use the implementation from the dwave-neal module
Footnote 4: [https://www.nec.com/en/global/quantum-computing/](https://www.nec.com/en/global/quantum-computing/)
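As an illustration of the classical baseline, the QUBO produced by the decomposition can be handed directly to the simulated-annealing sampler from the dwave-neal module mentioned above; the number of reads and the dictionary conversion are arbitrary choices for this sketch.

```python
import neal
import numpy as np

def solve_master_with_sa(Q, num_reads=100):
    """Run simulated annealing on the master-problem QUBO and return the best sample."""
    n = Q.shape[0]
    qubo = {(i, j): Q[i, j] for i in range(n) for j in range(i, n) if Q[i, j] != 0.0}
    sampler = neal.SimulatedAnnealingSampler()
    result = sampler.sample_qubo(qubo, num_reads=num_reads)
    best = result.first                                   # lowest-energy sample
    x = np.array([best.sample.get(i, 0) for i in range(n)])
    return x, best.energy
```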
Comparing the approximate solutions with the solutions found by Gurobi in one hour (Fig. 4), we observe that the quality of the solution decreases substantially. The cost difference is always positive (no improvement) and for large instances is at least 10% worse. However, given the substantially reduced runtime - each heuristic takes no more than 5 minutes - it might be reasonable to use heuristics and trade solution quality for runtime improvement. As quantum annealers may further reduce the time-to-target [3], in a close-to-online regime (where new reservations appear during the time horizon) the hybrid approach is a promising option for cost-efficient planning.
### _Quantum annealing for the master problem_
Finally, we evaluate the potential of our quantum-classical hybrid scheme on instances with \(t_{\max}=32\) where we use the DWave Advantage 6.1 for solving the Set Partition problem of the proposed decomposition scheme. If the quantum annealer returns an infeasible solution we restore feasibility in a greedy fashion: subsets from the infeasible solution are iteratively added to a partial solution if the addition doesn't violate any constraints.
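The greedy repair can be sketched as follows: paths selected by the annealer are kept only while they do not clash on a vehicle or a reservation with paths already accepted; the path interface is the same illustrative one used earlier.

```python
def repair_solution(selected, paths):
    """Greedily build a feasible partial solution from an infeasible sample."""
    used_vehicles, covered, kept = set(), set(), []
    for i in selected:                              # indices of paths with lambda_p = 1
        veh, res = paths[i].vehicle, set(paths[i].reservations)
        if veh in used_vehicles or res & covered:   # adding path i would violate (2) or (3)
            continue
        kept.append(i)
        used_vehicles.add(veh)
        covered |= res
    return kept              # reservations left uncovered fall back to y_r = 1
```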
We remark that even though we are able to run experiments only on the smallest instances from the benchmark dataset, we use the _real data_ and not simplified data, as in most papers [8, 10] that benchmark the quantum annealer on industrial use-cases. We compare the obtained results to the ones found by QuEnc - the variational quantum algorithm for gate-based quantum computers based on _amplitude encoding_ [26].
In the DWave Advantage hardware physical qubits interact only with their local neighbors in the _Pegasus_ layout. The QUBO has to be _embedded_ in the hardware architecture, which leads to an overhead since a _logical qubit_\(x_{i}\) has to be represented by a chain of _physical qubits_\(\{q_{i}^{1},\ldots,q_{i}^{k}\}\), see, e.g., [27].
We observe (see Tab. I) that quantum annealing is able to find optimal or near-optimal solutions for very small instances of \(n=1\) and \(n=2\) EVs. For \(n=5\) the relative performance
Fig. 4: Performance of classical metaheuristics on the generated instances of the Set Partition problem. (SA) stands for the simulated annealing, (Tabu) for the tabu search, (VA) for the vector annealing by NEC.
Fig. 3: Solution quality and runtime (in seconds) on instances for various numbers \(n\) of EVs, maximal number of reservations \(r_{\max}\) and for \(t_{\max}=192\). Relative cost changes are given with respect to the 1-hour compact MILP solution. We report the values for the Benders-decomposition-based heuristic from [15] (BDH) and for the approach proposed here (EF) on instances with \(t_{\max}=192\). The first three plots demonstrate the relative difference in the cost value obtained by both heuristics. We observe that on the most difficult instances (with \(n=20\), or with \(n=10\) and \(r_{\max}>8n\)) the solution quality of the EF heuristic is systematically higher compared to BDH. For the largest instances (\(n=20\)), the negative relative cost (with respect to the MILP solution) demonstrates the advantage of the heuristics over the time-limited Gurobi solution. The rightmost plot demonstrates the runtime of the EF heuristic. On the largest instances (\(n=20\), \(r_{\max}=16n\)) it hits the one-hour limit. However, this does not disqualify the EF approach: we recall that the BDH runtime always equals 1 hour (3600 seconds), and the compact MILP is solved to optimality before the one-hour time limit only for \(n=5\).
is worst for the smallest number of reservations \(r_{\max}=20\) and improves with larger \(r_{\max}\). Interestingly, this trend correlates with the embedding overhead. The QuEnc approach shows the same qualitative behavior as a function of \(r_{\max}\) for fixed \(n\), but already fails to find optimal solutions for \(n=2\) EVs. The possible cause might be the generic hardware-efficient structure of QuEnc ansatz - contrary to the problem-specific annealing evolution or cost-dependent QAOA ansatz [28].
## V Discussion
We suggested a new approach to address real-world problems with hybrid quantum-classical routines. Instead of formulating the problem as one MIP, we separate it into master and pricing problems; the NP-hard master problem is further delegated to a quantum (or hybrid) algorithm. Constraints that are difficult for quantum routines are managed inside the classical pricing routine.
We tested our approach on the EVFCAP problem. The proposed decomposition of the original problem into two sub-problems enables hybrid quantum-classical approaches despite the many inequality constraints in the compact formulation. Additionally, for larger instances, it allowed us to find better solutions in a shorter time while using only classical methods. Our numerical experiments also confirm that quantum annealing is in principle capable of solving the master problem. This spurs the hope that the integration of quantum routines can further accelerate the search for a good-quality approximate optimum in the future. However, experiments on hardware with more qubits and better connectivity are necessary to further evaluate the potential of a quantum advantage for this problem.
In this regard, the proposed approach provides a promising route to solving planning problems with difficult constraints via hybrid quantum-classical schemes.
|
2302.12762
|
Random sparse generators of Markovian evolution and their spectral
properties
|
The evolution of a complex multi-state system is often interpreted as a
continuous-time Markovian process. To model the relaxation dynamics of such
systems, we introduce an ensemble of random sparse matrices which can be used
as generators of Markovian evolution. The sparsity is controlled by a parameter
$\varphi$, which is the number of non-zero elements per row and column in the
generator matrix. Thus, a member of the ensemble is characterized by the
Laplacian of a directed regular graph with $D$ vertices (number of system
states) and $2 \varphi D$ edges with randomly distributed weights. We study the
effects of sparsity on the spectrum of the generator. Sparsity is shown to
close the large spectral gap that is characteristic of non-sparse random
generators. We show that the first moment of the eigenvalue distribution scales
as $\sim \varphi$, while its variance is $\sim \sqrt{\varphi}$. By using
extreme value theory, we demonstrate how the shape of the spectral edges is
determined by the tails of the corresponding weight distributions, and clarify
the behavior of the spectral gap as a function of $D$. Finally, we analyze
complex spacing ratio statistics of ultra-sparse generators, $\varphi =
\mathrm{const}$, and find that starting already at $\varphi \geqslant 2$,
spectra of the generators exhibit universal properties typical of Ginibre's
Orthogonal Ensemble.
|
Goran Nakerst, Sergey Denisov, Masudul Haque
|
2023-02-24T17:27:26Z
|
http://arxiv.org/abs/2302.12762v2
|
# Random sparse generators of Markovian evolution and their spectral properties
###### Abstract
The evolution of a complex multi-state system is often interpreted as a continuous-time Markovian process. To model the relaxation dynamics of such systems, we introduce an ensemble of random sparse matrices which can be used as generators of Markovian evolution. The sparsity is controlled by a parameter \(\varphi\), which is the number of non-zero elements per row and column in the generator matrix. Thus, a member of the ensemble is characterized by the Laplacian of a directed regular graph with \(D\) vertices (number of system states) and \(2\varphi D\) edges with randomly distributed weights. We study the effects of sparsity on the spectrum of the generator. Sparsity is shown to close the large spectral gap that is characteristic of non-sparse random generators. We show that the first moment of the eigenvalue distribution scales as \(\sim\varphi\), while its variance is \(\sim\sqrt{\varphi}\). By using extreme value theory, we demonstrate how the shape of the spectral edges is determined by the tails of the corresponding weight distributions, and clarify the behavior of the spectral gap as a function of \(D\). Finally, we analyze complex spacing ratio statistics of ultra-sparse generators, \(\varphi=\text{const}\), and find that starting already at \(\varphi\geqslant 2\), spectra of the generators exhibit universal properties typical of Ginibre's Orthogonal Ensemble.
## I Introduction
Continuous-time Markov chains (CTMCs) [1] provide a popular framework to model dynamics of multi-state systems in diverse fields ranging from physics, chemistry, and biology [2; 3; 4] to economics [5; 6] and game theory [7; 8]. CTMCs are used to model chemical reactions [9; 10; 11; 12; 13; 14; 15], gene regulation processes [16; 17; 18; 19; 20], quantum dynamics (approximated by rate equations) [21; 22; 23; 24; 25], evolutionary game dynamics [26; 8; 27], and many other processes. CTMCs are also the key element of such celebrated models of statistical physics as contact processes [28; 29; 30], zero-range processes [31; 32] and exclusion processes like ASEP [33; 34; 35; 36; 37; 38; 39; 2]. In some fields, CTMCs are known under the names 'classical Markovian master equations' or 'rate equations'.
A continuous-time Markovian evolution in finite discrete space consisting of \(D\) states can be specified with a transition rate matrix \(\mathcal{K}\)[1], which is a generator of Markovian evolution. (It is called 'Kolmogorov operator' in Ref. [40]). The equation governing the evolution of a probability vector \(P(t)\), defined on the state space,
\[\frac{d}{dt}P(t)=\mathcal{K}P(t), \tag{1}\]
has the formal solution, \(P(t)=\exp(t\mathcal{K})P_{0}\), where \(P_{0}=P(0)\) is the initial probability vector. The evolution of \(P(t)\) is thus fully determined by the generator \(\mathcal{K}\), especially by its spectral properties. The fact that the operator \(\mathcal{M}_{t}=\exp(t\mathcal{K})\) should map a non-negative vector onto another non-negative vector while preserving \(\ell_{1}\)-norm, means that \(\mathcal{K}\) satisfies a set of constraints and these constraints have an effect on its spectral properties [1].
In order to model the evolution of a complex system with CTMCs, we would have to first design a specific Kolmogorov operator. Taking into account the large variety of existing models, it would be beneficial to figure out universal properties of \(\mathcal{K}\)-generators, i.e., properties that are typical rather than specific to a particular model. The first step in this direction is to define _random_ ensembles of generators. A similar situation arises in the case of unitary time-continuous evolution, where the corresponding generators (quantum Hamiltonians) were explored and classified by using the powerful toolbox of random matrix theory (RMT) [41; 42; 43; 44; 45]. Implementation of this idea resulted in the creation of Quantum Chaos theory [46; 47; 48] which made - and is still making - a strong impact on many-body quantum physics, both theoretical [49] and experimental (see, e.g., Ref. [50]).
Recently, RMT-based approaches were developed to analyze spectral properties of random generators of open quantum (Lindblad operators) [51; 52; 53; 54] and classical (Kolmogorov operators) [55; 40; 56] Markovian evolution. The considered generators, both quantum and classical, were on purpose constructed in a completely random way - up to the constraints that make them legitimate generators. In the case of Kolmogorov operators, this means that they are represented by dense matrices [55; 40]. It was shown that the spectral density of such operators represents a free sum of a uniform disc and a Gaussian distribution which results in a distinctive spindle-like shape [40], as shown in Figure 2 (a). This density is universal, in the sense that the particular way the operators are sampled does not affect the shape of the spindle (but may affect its position on the real axis and its overall scaling) [40].
In contrast to the random Kolmogorov operators, for most applications and known models, the corresponding \(\mathcal{K}\)-generators are represented by _sparse_ matrices. This is a consequence of locality and other topological constraints imposed on the allowed transitions in the state spaces of the models. For many-component or many
particle systems, elements of the generator matrix typically represent changing multiple (or all) components of the system simultaneously, e.g., for an exclusion process, a generic matrix element could represent correlated hopping of many particles. Since such processes are usually absent in physically (biologically, economically,...)-motivated models, most elements of the corresponding \(\mathcal{K}\)-matrices are zeros.
Sparsity affects the spectra, \(\{\lambda_{i}\}\), \(i=1,2,...,D\), of the corresponding generators. Most noticeably, the spectral gap, i.e., the distance between \(\lambda_{1}=0\) and the eigenvalue closest to it, \(\gamma_{*}=\min\{|\operatorname{Re}\lambda_{i}|:\operatorname{Re}\lambda_{i}<0\}\), does not grow with the number of states \(D\). This is in sharp contrast to the case of dense random generators [51; 53; 54]. In Figure 1 we contrast the case of dense random stochastic matrices (a) with various model systems described by sparse Markov generators (b-f).
The large gap of dense random generators implies that even the slowest decaying mode of a generic initial probability vector converges rapidly to the equilibrium state, the relaxation time (inverse of the spectral gap) decreasing inversely in the state space size \(D\). In contrast, the spectral gaps and relaxation times of physical CTMC generators in general depend on \(D\) in a manner that is far from (anti-)linear; see, e.g., [57; 58; 59; 60] for the example of the exclusion process. This difference in behavior suggests that it is more suitable to model physical CTMC generators by sparse rather than dense random matrices.
Our motivation is to refine the RMT approach to random Kolmogorov operators by including sparsity which is characteristic to physically relevant \(\mathcal{K}\)-generators. We specify an ensemble of random matrices of fixed sparsity \(\varphi\) as an ensemble of negative combinatorial Laplacians of random regular directed graphs. The sparsity is controlled by the vertex degree \(\varphi\) which is equal to the number of non-zero elements per row and column of the generator matrix. In graph terms, this means that each vertex has exactly \(\varphi\) incoming and \(\varphi\) outgoing edges. The non-zero elements (edge weights) are taken to be random, positive, independent, and identically distributed (iid).
A similar setup was studied in Ref. [56], where an ensemble of oriented Erdos-Renyi graphs [62], parameterized with edge probability distribution \(p(D)\), was used. The vertex distribution in this case is binomial-distributed [62], and not constant as in our case. However, one might expect similar behavior in the \(D\to\infty\) limit with the correspondence \(p(D)=\varphi/D\). The authors of Ref. [56] considered the regime \(Dp(D)\gg(\log D)^{6}\), which they found to have the same universal properties as in the non-sparse case. In this work, we consider sparsity beyond this limit, including specifically \(\varphi\sim D^{0}\) (vertex degree not growing with \(D\)) and \(\varphi\sim\log D\).
In this paper, we investigate the dependence of spectral properties of the sparse Kolmogorov operators on sparsity parameter \(\varphi\), number of states (dimension of the state space) \(D\), and on the edge weight distribution, i.e., on the distribution of the nonzero elements of matrix \(\mathcal{K}\). Explicit results are mostly derived for \(\chi_{2}^{2}\) and uniform weight distributions; however, these results can be adapted to other weight distributions.
We consider both the bulk of the spectral distribution and its edges.
As for the bulk, we focus on its position \(\mu\) (the mean of the corresponding eigenvalue distribution) and its variances along the real and imaginary axes (standard deviations of the distribution of the real and imaginary eigenvalue parts, respectively). The first variance estimates the spread of the relaxation rates, while the second one gives the timescales of the oscillations during the relaxation.
As for the edges, we address the spectral gap and the extent of the spectrum along the real axis (the real part of the eigenvalue with largest absolute real part). These determine respectively the slowest and fastest time scales of relaxation to the steady state. The spectral gap is of physical interest for multi-state Markov processes, see, e.g., [57; 58; 59; 60; 63; 64; 65; 66; 67] for the ASEP and [68; 69] for the contact process. The horizontal extent, in addition to its interpretation as the fastest timescale for CTMCs, is also relevant in the graph theory interpretation, e.g., to quantify the computational complexity of the community detection problem [70; 71] and the max-cut [72; 73] problems.
We demonstrate that the position and variance of the spectral bulk of sparse Kolmogorov operators scale as \(\sim\varphi\) and \(\sim\sqrt{\varphi}\), respectively. These characteristics do not depend on \(D\) but on the first and second moments of the weight distribution. The dependence of the spectral edges on the weight distribution is less straightforward and highly non-universal. In particular, we show that, in the regime of high sparsity, \(\varphi\ll D\), the spectral gap (horizontal extent) depends only on the left (right) tail of the weight distribution. We evaluate the dependence of the spectral gap on \(\varphi\) and \(D\) for weight distributions with exponential and power-law tails.
We consider the cases of \(\chi_{2}^{2}\) and uniform weight distributions in detail. For these distributions, we find that the spectral gap and the horizontal extent of the spectrum can be approximated by the largest and smallest diagonal entry of the generator matrix, respectively. Using the conjecture that this correspondence holds in general, we use extreme value theory (EVT) [74; 75] to analytically derive dependencies of the spectral edges on \(\varphi\) and \(D\). In particular, we infer that the distributions of spectral edges only depend on the tails of the weight distributions.
Finally, we analyze correlations of the eigenvalues of sparse Kolmogorov operators. We show that, for \(\varphi\geq 2\), the complex spacing ratio distributions [61] of the spectral bulks follow the distribution typical to Ginibre's Orthogonal Ensemble.
The paper is organized as follows. In Section II we introduce an ensemble of sparse random Kolmogorov operators. We analyze the bulk of the spectral distributions of the ensembles in Section III. In Sections IV and V we address the spectral gap and the horizontal extent of the
spectrum, respectively. A discussion on correlations between eigenvalues in terms of the complex spacing ratio follows in Section VI. We conclude with a summary of our results in Section VII. Appendices contain information on the models whose spectra are presented in Figure 1, details of the sampling of sparse random Kolmogorov operators, and the details of analytical derivations.
## II Random sparse Kolmogorov operators
In this section, we first recall some basic properties of Kolmogorov operators and review the case of full random \(\mathcal{K}\)-matrices. We then define an ensemble of random sparse operators. In what follows, matrices will be referred to by calligraphic letters (e.g., \(\mathcal{K}\)) while their elements will be denoted by non-calligraphic letters (e.g., \(K_{ij}\)).
### Basic information
In order to be qualified as a Kolmogorov operator, a matrix \(\mathcal{K}\) has to fulfill two conditions, (i) all its off-diagonal elements have to be real and non-negative, \(K_{ij}\geq 0\), \(i\neq j\), and (ii) the sum over every column should be zero. The latter is fulfilled by setting all the diagonal elements as
\[K_{ii}=-\sum_{j\neq i}K_{ji}. \tag{2}\]
The first condition guarantees the preservation of the non-negativity of a vector during the evolution induced by Eq. (1), while the second one guarantees the preservation of the \(\ell_{1}\)-norm of the vector.
The spectrum of \(\mathcal{K}\) is in general complex. Since \(\mathcal{K}\) maps real vectors onto real vectors, the spectrum is invariant under complex conjugation, so all complex eigenvalues come in conjugated pairs. The spectrum contains at least one eigenvalue \(\lambda_{1}=0\) with right eigenvector corresponding to the steady state. By virtue of the Perron-Frobenius theorem [76; 77; 78], the components of the steady state vector can be chosen to be non-negative, which makes it, after normalization, a probability vector. Any Kolmogorov operator can be represented in terms of a real non-negative matrix, \(\mathcal{M}\), \(M_{ij}\geq 0\),
\[\mathcal{K}=\mathcal{M}-\mathcal{J}, \tag{3}\]
where elements of the diagonal matrix \(\mathcal{J}\) are \(J_{jj}=\sum_{i}M_{ij}\).
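As a minimal numerical illustration of Eqs. (1)-(3) (a sketch assuming NumPy/SciPy, not part of the original derivation), one can build a dense Kolmogorov operator from a non-negative matrix \(\mathcal{M}\) and verify that the induced evolution preserves positivity and the \(\ell_{1}\)-norm:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
D = 6

# Non-negative off-diagonal part M (no self-loops), as in Eq. (3).
M = rng.chisquare(2, size=(D, D))
np.fill_diagonal(M, 0.0)

# J is diagonal and holds the column sums of M, so K = M - J has zero column sums, Eq. (2).
J = np.diag(M.sum(axis=0))
K = M - J
assert np.allclose(K.sum(axis=0), 0.0)

# Propagating a probability vector with exp(tK), Eq. (1), preserves positivity and the l1-norm.
P0 = rng.random(D)
P0 /= P0.sum()
Pt = expm(0.7 * K) @ P0
print(Pt.min() >= 0.0, np.isclose(Pt.sum(), 1.0))   # True True
```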
We now briefly review the case of dense (non-sparse) random Kolmogorov operators [55; 40; 56]. Elements \(M_{ij}>0\) are sampled i.i.d. from a distribution with density \(p(x)\) and first two moments \(\mu_{0}=\int xp(x)dx\) and \(\sigma_{0}^{2}=\int(x-\mu_{0})^{2}p(x)dx\). The particular choice of distribution does not play an essential role (provided that it is not very pathological). For example, we could sample a matrix \(\mathcal{Z}\) from Ginibre's Unitary Ensemble (GinUE) and then square its elements, \(M_{ij}=|Z_{ij}|^{2}\) [40]. The matrix \(\mathcal{M}\) is then full in the sense that, with probability 1, all its elements are different from zero.
The elements of the matrix \(\mathcal{M}\) are i.i.d., thus, in the asymptotic limit, its spectral density is a uniform disk
Figure 1: Spectra of generators of Markovian evolution, Eq. (1). **(a)** Dense (non-sparse) random generator with \(\chi_{2}^{2}\) edge weight distribution, **(b)** totally asymmetric simple exclusion process (TASEP) on a ring with staggered hopping probabilities [61], **(c)** asymmetric simple exclusion process ASEP on a chain with open boundary conditions and next nearest neighbor terms, **(d)** a process of particle hopping on an open boundary grid with random hopping probabilities, **(e)** a contact process on a chain [28], **(f)** a gene transcription model from Ref. [20]. In each plot the real and imaginary axes have the same scale. The models are described in Appendix A.
of radius \(\sqrt{D}\sigma_{0}\), with the center at \(0\). In the dense limit, the elements of \(\mathcal{J}\) are sums of \(D\) independent random variables, so its elements can be approximated with Gaussian-distributed random variables with mean \(D\mu_{0}\) and variance \(D\sigma_{0}^{2}\).
Following the RMT approach [40], the Kolmogorov operator in Eq. (2) can be modelled as
\[\mathcal{K}_{\mathrm{RM}}=-\mu_{0}D\cdot\openone+\sigma_{0}\sqrt{D}( \mathcal{G}-\mathcal{D}), \tag{4}\]
where \(\mathcal{G}\) is a member of Ginibre's Orthogonal Ensemble (GinOE) and \(\mathcal{D}\) is a diagonal matrix. Elements of \(\mathcal{G}\) and \(\mathcal{D}\) are sampled from the normal distribution of zero mean and unit variance. Here \(\sigma_{0}\sqrt{D}\cdot\mathcal{G}\) models \(\mathcal{M}\) while \(\mathcal{J}\) is approximated as \(\mu_{0}D\cdot\openone+\sigma_{0}\sqrt{D}\cdot\mathcal{D}\).
The spectral density of the non-trivial part, \(\mathcal{K}^{\prime}=\mathcal{G}-\mathcal{D}\), is a free convolution of a disk and a Gaussian distribution along the real axis, which results in a spindle-like shape. Figure 2 (a) presents both the spectrum of a single random dense Kolmogorov operator and the histogram obtained with 100 samples.
Alternatively, we can state that the spectral density of the rescaled generator
\[\mathcal{K}^{\prime}=\frac{1}{\sigma_{0}\sqrt{D}}(\mathcal{K}+\mu_{0}D\cdot \openone) \tag{5}\]
is expected, in the asymptotic limit, to be the \(D\)-independent spindle ("an additive Gaussian deformation of the circular law", according to Ref. [56]).
The spectrum of the random non-sparse generator has a large gap which scales as \(D\), as seen in Figures 1 and 2. We will see that this feature is strongly affected when we introduce sparsity.
### Ensemble of sparse random Kolmogorov operators as a set of oriented graphs
The operator \(\mathcal{K}\) described in the introduction can be considered as the negative Laplacian of a random directed graph with positive, iid edge weights, without self-loops, and with fixed vertex degree equal to \(\varphi\).
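The sampling procedure used for the figures is detailed in the appendices; as a rough, self-contained substitute for illustration, the sketch below realizes a \(\varphi\)-regular digraph by superposing \(\varphi\) random permutations, resampling any draw that produces a self-loop or a repeated edge (practical only for small \(\varphi\); the helper name is ours). It is reused in the later numerical sketches.

```python
import numpy as np

def sparse_kolmogorov(D, phi, weight_sampler, rng):
    """Negative Laplacian of a random phi-regular digraph with iid edge weights.

    Assumption (a simple stand-in for the sampling described in the appendices):
    the digraph is a superposition of phi random permutations, resampled until
    there are no self-loops and no repeated edges, so that every vertex has
    exactly phi incoming and phi outgoing weighted edges.
    """
    idx = np.arange(D)
    perms, used = [], set()
    while len(perms) < phi:
        sigma = rng.permutation(D)
        edges = set(zip(idx.tolist(), sigma.tolist()))
        if np.any(sigma == idx) or (used & edges):
            continue                           # self-loop or duplicate edge: retry
        perms.append(sigma)
        used |= edges
    M = np.zeros((D, D))
    for sigma in perms:
        M[idx, sigma] = weight_sampler(D)      # one weighted edge i -> sigma(i) per row
    return M - np.diag(M.sum(axis=0))          # subtract column sums, Eq. (2)

rng = np.random.default_rng(1)
K = sparse_kolmogorov(200, 3, lambda n: rng.chisquare(2, n), rng)
print(np.unique((K > 0).sum(axis=0)))          # every column has phi = 3 positive entries
```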
For example, the graph corresponding to the \(\mathcal{K}\) generator of a process of a particle hopping on a \(d\)-dimensional hypercubic lattice with periodic boundary conditions and random hopping rates is a particular (to the nearest-neighbor connections) realization of the ensemble with \(\varphi=2d\). Figure 1 (d) shows an example spectrum for \(d=2\).
The regularity of the graphs ensures that, with probability \(1-O(D^{-\varphi-1})\), they are strongly connected as long as \(\varphi\geq 2\)[79]. Strong connectivity is a good feature because it means that the matrix \(\mathcal{K}\) is not of block-diagonal structure and the state space is not partitioned into disconnected subsets. As there is only one strongly connected component, there is only one absorbing component. This implies that the multiplicity of the zero eigenvalue is one and the steady state is unique. Finally, every state in the state space is reachable from every other state. The steady state, therefore, has all states populated.
The physical models motivating this study, presented in Figure 1, are all strongly connected, except for the contact process. The contact process is only effectively strongly connected. It has two strongly connected components, where one is a single vertex and the other includes the remaining \(D-1\) vertices. The giant component is the only absorbing component and consequently the steady state is unique.
We will focus on sparse generators with \(\varphi\geq 2\) and will discuss \(\varphi=1\) in Section VI.
The physical models presented in Figure 1 motivate us to focus on two types of dependencies of \(\varphi\) on the matrix size \(D\), namely \(\varphi=\mathrm{const}\) and \(\varphi\sim\log D\). For generators of single particle hopping models - an example is shown in Figure 1 (d) - the average number of non-zero elements per column and row is constant and independent of \(D\). It increases logarithmically with \(D\) in many-body hopping models such as the ASEP or the contact process, Figure 1 (b), (c), and (e). There is no simple dependence of \(\varphi\) on \(D\) in the gene transcription model, Figure 1 (f), as the matrix size \(D\) is controlled by multiple parameters, see Appendix A.
What can we say about spectral densities of the ultra-sparse \(\mathcal{K}\)-generators, with \(\varphi=\mathrm{const}\)? A 'naive' adjustment of the RMT approach, which consists in describing the elements of a sparse matrix \(\mathcal{M}\) with probability density function \(\widetilde{p}(x)=(1-\frac{\varphi}{D})\delta(x)+\frac{\varphi}{D}p(x)\), re-scaling the mean and variance accordingly, and then using the RMT model, Eq. (4), would not work here for two reasons. First, the spectral densities of such sparse matrices cannot be approximated with members of 'dense' RMT ensembles. Second, the Central Limit Theorem no longer applies and the entries of matrix \(\mathcal{J}\) cannot be approximated with normal random variables (the entries become distribution-specific).
## III Position and Width of the Bulk of the Spectrum
In this section, we analyze the dependence of the position and horizontal width of the bulk of the spectrum on the sparsity parameter \(\varphi\) and the matrix dimension \(D\). We first provide (Subsections III.1 and III.2) expressions and bounds for the position and the width, characterized respectively by the mean \(\mu(\lambda)\) of all eigenvalues and the standard deviation \(\sigma(\mathrm{Re}\,\lambda)\) of the real parts of the eigenvalues. These results are expressed in terms of the mean and standard deviation of the weight distribution (distribution of non-zero elements of the Kolmogorov operator \(\mathcal{K}\)), denoted by \(\mu_{0}\) and \(\sigma_{0}\) respectively.
Since the most prominent effect of sparsity is to reduce the parametrically large gap seen in the full random case, it is instructive to analyze the ratio \(\alpha=|\mu(\lambda)|/\sigma(\mathrm{Re}\,\lambda)\). This quantity provides insight into the distance of the
bulk of the spectrum from the origin, relative to the size of the bulk. Subsection III.3 is devoted to an analysis of the ratio \(\alpha\).
Numerical results presented in this section are obtained by sampling edge weights from the \(\chi_{2}^{2}\) and the standard uniform distribution.
The spectrum of dense generators (\(\varphi=D-1\)) consists of two distinct parts - an eigenvalue \(\lambda_{1}=0\) and the rest of the eigenvalues forming the spectral bulk away from the imaginary axis, as shown in Figure 1 (a) and Figure 2 (a). In contrast, the bulk of the spectrum is much closer to the imaginary axis for \(\varphi\ll D\), as seen in Figure 2 for (b) \(\varphi=\sqrt{D}\), (c) \(\varphi=\log D\) and (d) \(\varphi=3\). For \(\varphi=\sqrt{D}\), the bulk of the spectrum is visibly separated from zero, as in the dense case. In fact, the spectral boundary is given by the same spindle (properly rescaled). Whether the spectral distribution is separated from zero for \(\varphi=\log D\) and \(\varphi=3\) is difficult to say with certainty from the available numerical data (\(D\approx 8000\)).
### Position
The position of the spectral bulk of \(\mathcal{K}\) can be identified with the mean \(\mu(\lambda)\) of eigenvalues \(\lambda_{i}\),
\[\mu(\lambda)=\left\langle\frac{1}{D}\sum_{i=1}^{D}\lambda_{i}\right\rangle, \tag{6}\]
where the average \(\left\langle\dots\right\rangle\) is taken over the ensemble of random Kolmogorov operators described in Section II. Because the eigenvalues are either real or come in complex conjugate pairs, the mean of the spectral bulk is real, \(\mu(\lambda)=\mu(\mathrm{Re}\,\lambda)\).
A simple calculation, presented in Appendix C, shows that \(\mu(\lambda)\) can be expressed as
\[\mu(\lambda)=\left\langle\frac{1}{D}\operatorname{tr}(\mathcal{K})\right\rangle =-\varphi\mu_{0}, \tag{7}\]
The averaging \(\left\langle\dots\right\rangle\) over the matrix ensemble in Eq. (6) and Eq. (7) is, in principle, not needed since typicality is expected, i.e., for large enough \(D\), a single sample will display all the spectral features of the ensemble. This is because the quantity \(\frac{1}{D}\operatorname{tr}(\mathcal{K})\) is concentrated around its average \(\left\langle\frac{1}{D}\operatorname{tr}(\mathcal{K})\right\rangle\) for increasing \(D\), as shown in Appendix C.
For the four different dependencies of \(\varphi\) on \(D\) shown in Figure 2, Eq. (7) implies the following: For \(\varphi=\mathrm{const}\), the mean is independent of the matrix size \(D\). For \(\varphi=\log D\) (\(\varphi=\sqrt{D}\)) the mean decreases logarithmically with \(D\) (as \(\sim\sqrt{D}\)) and for \(\varphi=D\) the mean decreases linearly with \(D\) as is expected for the dense generators [55].
In Figure 2, the location \(\mu(\lambda)\) of generator matrices \(\mathcal{K}\) is indicated with a red dot in each panel. The real part of the dot resides in the bulk of the spectrum for every dependence of \(\varphi\) on \(D\) shown in Figure 2.
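A quick single-sample check of Eq. (7), reusing the `sparse_kolmogorov` sketch from Sec. II (for \(\chi_{2}^{2}\) weights, \(\mu_{0}=2\); the numbers are purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
D, phi, mu0 = 500, 4, 2.0                  # chi^2_2 weights have mean mu_0 = 2

# Reuses sparse_kolmogorov() from the sketch in Sec. II.B.
K = sparse_kolmogorov(D, phi, lambda n: rng.chisquare(2, n), rng)

# Eq. (7): the spectral mean equals tr(K)/D and concentrates around -phi * mu_0.
print(np.trace(K) / D, -phi * mu0)         # close to -8.0
```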
### Horizontal width
In Section III.1 we investigated where the bulk of the spectrum is located in the complex plane. We now analyze the width of the distribution. We are especially interested in the horizontal width.
We characterize the width of the bulk spectrum, both in the real and imaginary directions, \(\mathrm{Re}\,\lambda\) and \(\mathrm{Im}\,\lambda\), using the estimated variances
\[\sigma^{2}(\mathrm{Re}\,\lambda) =\left\langle\frac{1}{D}\sum_{i=1}^{D}\left(\mathrm{Re}\,\lambda_ {i}-\frac{1}{D}\sum_{j=1}^{D}\lambda_{j}\right)^{2}\right\rangle \tag{8}\] \[\sigma^{2}(\mathrm{Im}\,\lambda) =\left\langle\frac{1}{D}\sum_{i=1}^{D}(\mathrm{Im}\,\lambda_{i})^ {2}\right\rangle, \tag{9}\]
where we used the fact that \(\sum_{j=1}^{D}\lambda_{j}\) is real.
Because the eigenvalues appear in complex conjugate pairs, \(\sigma^{2}(\mathrm{Re}\,\lambda)\) and \(\sigma^{2}(\mathrm{Im}\,\lambda)\) are related to the estimated
Figure 2: Spectral densities of random Kolmogorov operators with \(\chi_{2}^{2}\) weight distribution. The matrix size is \(D\approx 8000\) and the densities are estimated with 100 samples. White areas contain no eigenvalues. **(a)** Dense matrix without the zero eigenvalue, **(b)** sparse matrix with \(\varphi=\sqrt{D}\) non-zero elements per row and column, **(c)**\(\varphi=\log D\) and **(d)**\(\varphi=3\). The insets show spectra of single realizations. In each plot, the real and imaginary axes have the same scale. The red dots mark the location of \(\mu(\lambda)\), given by Eq. (6), and the intervals shown in black are \([\mu(\lambda)-\sigma(\lambda),\mu(\lambda)+\sigma(\lambda)]\), where \(\sigma(\lambda)\) is given by Eq. (8).
complex pseudo-variance via
\[\sigma^{2}(\lambda) =\left\langle\frac{1}{D}\sum_{i=1}^{D}\left(\lambda_{i}-\frac{1}{D} \sum_{j=1}^{D}\lambda_{j}\right)^{2}\right\rangle\] \[=\sigma^{2}(\mathrm{Re}\,\lambda)-\sigma^{2}(\mathrm{Im}\,\lambda). \tag{10}\]
The estimated pseudo variance lower bounds the estimated variance of the real parts of the eigenvalues, \(\sigma^{2}(\lambda)\leq\sigma^{2}(\mathrm{Re}\,\lambda)\).
The complex pseudo variance can be analytically calculated for the ensemble of random generator matrices as
\[\sigma^{2}(\lambda) =\left\langle\frac{1}{D}\operatorname{tr}(\mathcal{K}^{2}) \right\rangle-\left\langle\frac{1}{D^{2}}\operatorname{tr}(\mathcal{K})^{2}\right\rangle\] \[=\varphi\left(\sigma_{0}^{2}+\frac{\varphi}{D}\mu_{0}^{2}-\frac{ 1}{D}\sigma_{0}^{2}\right). \tag{11}\]
Details of the calculation are provided in Appendix C. The bound of the estimated real variance by the pseudo variance together with Eq. (11) leads to the asymptotic lower bound of \(\sigma(\mathrm{Re}\,\lambda)\) in terms of the sparsity parameter \(\varphi\). As \(1\leq\varphi\leq D\), the estimated horizontal width of the bulk spectrum cannot grow asymptotically slower than \(\sqrt{\varphi}\),
\[\sigma(\mathrm{Re}\,\lambda)\gtrsim\sqrt{\varphi}. \tag{12}\]
Numerically, we find that the bound in Eq. (12) is asymptotically sharp for \(\varphi\ll D\), as shown in Figure 3 through the ratio \(\alpha\) of mean \(\mu(\mathrm{Re}\,\lambda)\) and width \(\sigma(\mathrm{Re}\,\lambda)\). The collapse of the data points in Figure 3 (c) implies that \(\sigma(\mathrm{Re}\,\lambda)\sim\sqrt{\varphi}\).
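The moments entering Eqs. (8)-(12) can be probed directly from a sampled spectrum (again reusing the sampler sketch from Sec. II; a single sample rather than an ensemble average, with \(\mu_{0}=\sigma_{0}=2\) for \(\chi_{2}^{2}\) weights):

```python
import numpy as np

rng = np.random.default_rng(3)
D, phi = 500, 6
mu0, sigma0 = 2.0, 2.0                     # mean and standard deviation of chi^2_2 weights

K = sparse_kolmogorov(D, phi, lambda n: rng.chisquare(2, n), rng)
lam = np.linalg.eigvals(K)

# Empirical width of the real parts versus the pseudo-variance of Eq. (11),
# which lower-bounds sigma(Re lambda), cf. Eq. (12).
sigma_re = np.std(lam.real)
pseudo = phi * (sigma0**2 + phi / D * mu0**2 - sigma0**2 / D)
print(sigma_re, np.sqrt(pseudo))
```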
### Ratio of mean and horizontal width
In this section, we combine the information of the location of the spectrum given by Eq. (6) and the horizontal width of the bulk given by Eq. (8) into the ratio
\[\alpha=\frac{|\mu(\mathrm{Re}\,\lambda)|}{\sigma(\mathrm{Re}\,\lambda)}. \tag{13}\]
This quantifies how close the bulk spectrum is, relative to its size, to the stationary value \(\lambda_{1}=0\), i.e., to the imaginary axis. For \(\alpha=O(1)\) the estimated width of the bulk is of the same order as the estimated mean, thus the spectrum is located close to \(0\). For \(\alpha\gg 1\) the estimated mean is much bigger than the horizontal width of the bulk and the bulk of the spectrum is far away from \(0\).
The analytical result for the estimated mean of the spectrum, Eq. (7), together with the asymptotic bound on the standard deviation of the real parts of the spectrum, Eq. (11), imply the following asymptotic bound on \(\alpha\)
\[\alpha\lesssim\sqrt{\varphi}. \tag{14}\]
Numerically, we observe that the bound in Eq. (14) is asymptotically tight for \(\varphi\ll D\), i.e.
\[\alpha\approx c_{1}+c_{2}\sqrt{\varphi}, \tag{15}\]
for constants \(c_{1}\) and \(c_{2}\). Since \(\mu(\lambda)\) scales linearly with \(\varphi\), this behavior is consistent with \(\sigma(\mathrm{Re}\,\lambda)\sim\sqrt{\varphi}\), stated previously. The constants are found to be \(c_{1}\approx 0.15\) (\(\approx 0.1\)) and \(c_{2}\approx 0.84\) (\(\approx 1.3\)) for the \(\chi_{2}^{2}\) (uniform) distribution.
Numerical results for \(\alpha\) are summarized in Figure 3. For each combination of \(\varphi\) and \(D\), \(\alpha\) is averaged over \(n\) samples of random generators such that \(nD=50^{\prime}000\). The weight distribution is the \(\chi_{2}^{2}\) distribution in (a) and in the lower part of (c), and is the uniform distribution in \([0,1]\) in (b) and in the upper part of Figure 3 (c). We have found that these results are qualitatively the same for exponentially distributed edge weights.
In Figure 3 (a,b) we show the value of \(\alpha\) as a function of \(D\) and \(\varphi\). On the \(x\)-axis \(D\) varies in steps of \(10^{3}\) between \(10^{3}\) and \(10^{4}\). We observe that \(\alpha\) increases with \(\varphi\) and is independent of \(D\), as predicted by Eq. (15). In Figure 3 (c) we show \(\alpha\) as a function of \(\varphi\) for different dependencies of \(\varphi\) on \(D\). In all the cases, values of \(\alpha\) collapse onto the black solid line given by Eq. (15).
For \(\varphi\sim D\), the ratio \(\alpha\) scales as \(\sim\sqrt{D}\), thus recovering the parametrically large gap in the non-sparse case. For constant \(\varphi\), the location of the bulk relative to its size is constant and independent of \(D\), i.e., if measured relative to the size of the bulk, the bulk does not move away from the imaginary axis with increasing \(D\). We have thus quantified how sparsity cures one of the less physical aspects of the non-sparse random model of Markov generators.
## IV Spectral gap
In this and the following section, we will consider the spectral edges, namely, the locations of the eigenvalues nearest and farthest from the imaginary axis. In this section we will investigate the spectral gap \(\gamma_{*}\) of \(\mathcal{K}\),
\[\gamma_{*}=\min\{|\operatorname{Re}\lambda_{i}|:\operatorname{Re}\lambda_{i}<0\}. \tag{16}\]
The spectral gap \(\gamma_{*}\) is asymptotically, approximately bounded by the right extent of the bulk \(|\mu(\lambda)|-\sigma(\lambda)\), which depends on \(\varphi\) as \(\sim\varphi-\sqrt{\varphi}\sim\varphi\). So for constant \(\varphi\), the spectral gap is bounded from above, while for \(\varphi\) increasing with \(D\) the spectral gap can increase with \(D\).
Here the edge weights are distributed according to the \(\chi_{2}^{2}\) and the standard uniform distributions. We first demonstrate that, for \(\varphi=\operatorname{const}\), the average spectral gap \(\langle\gamma_{*}\rangle\) decreases as \(D^{-1/\varphi}\), while \(\langle\gamma_{*}\rangle\) is constant if \(\varphi\) increases logarithmically with \(D\). We then show that the spectral gap is well approximated by the smallest (in magnitude) diagonal term of \(\mathcal{J}\) (\(\mathcal{K}\)) and use the theory of extreme values to underpin the numerical observations. The results are then generalized to weight distributions with power-law left tails in that for constant \(\varphi\) the average spectral gap decreases as a power-law in \(D\) and the crossover from decreasing to increasing \(\langle\gamma_{*}\rangle\) happens when \(\varphi\sim\log D\).
### Numerical results
In Figure 4 we show the average spectral gap \(\langle\gamma_{*}\rangle\) for edge weights distributed as \(\chi_{2}^{2}\) (a-c) and according to the standard uniform distribution (d-f). For every combination of \(\varphi\) and \(D\), the average of the spectral gap is estimated with 100 samples. In Figure 4 (a) and (d) we show \(\langle\gamma_{*}\rangle\) as a function of \(D\) for different dependencies of \(\varphi\) on \(D\). The average spectral gaps for constant \(\varphi=3,5,8,13\) (presented with colored circles) clearly follow a power-law scaling with \(D\).
In Figure 4 (b) and (e) we show the average spectral gap \(\langle\gamma_{*}\rangle\) as a function of \(\varphi\) and \(D\). The black dashed lines are contour lines of constant \(\langle\gamma_{*}\rangle\). They are near straight lines, showing that for a logarithmic increase of \(\varphi\) in \(D\) the spectral gap is constant.
We show the average spectral gap \(\langle\gamma_{*}\rangle\) as a function of \(D\) for \(\varphi=\frac{4}{5}\log D+8\) in Figure 4 (a) and \(\varphi=\frac{7}{10}\log D+8\) in (d) as black diamonds. These dependencies of \(\varphi\) on \(D\) agree well with the top dashed contour lines in (b) and (e), respectively. The average spectral gap of \(\varphi\) depending logarithmically on \(D\) is constant in Figure 4 (a) and (d).
### Gap \(\approx\) minimum of \(\mathcal{J}\)
Let us assume for a moment that the generator matrix \(\mathcal{K}\) is hermitian with eigenvalues \(\lambda_{D}\leq\cdots\leq\lambda_{2}<\lambda_{1}=0\). Then \(\mathbb{1}=(1,\ldots,1)^{t}\) is the eigenvector with eigenvalue 0 and all other eigenvectors are orthogonal to it. By the Courant-Fischer theorem [80]
\[\gamma_{*}=-\lambda_{2}=\min_{|v|=1,\,v\perp\mathbb{1}}v^{t}(-\mathcal{K})v, \tag{17}\]
where the minimum runs over all vectors \(v\in\mathbb{R}^{D}\), which have Euclidean norm \(|v|=1\) and are perpendicular to \(\mathbb{1}\). Choosing \(1\leq l\leq D\) arbitrary and \(v\) as (see Appendix D for more details)
\[v_{i}=\begin{cases}\sqrt{1-\frac{1}{D}}&i=l\\ -\frac{1}{\sqrt{D(D-1)}}&i\neq l,\end{cases} \tag{18}\]
a simple calculation shows that (at least for \(\varphi\ll D\))
\[\gamma_{*}\leq\min_{1\leq l\leq D}J_{ll}+O(D^{-1}). \tag{19}\]
Similarly, by using the Courant-Fischer theorem, for the eigenvalue with largest magnitude \(\lambda_{D}\) we find
\[-\lambda_{D}=\max_{|v|=1}v^{t}(-\mathcal{K})v, \tag{20}\]
and with \(v\) as the \(l\)-th vector of the standard basis of \(\mathbb{R}^{D}\)
\[-\lambda_{D}\geq\max_{1\leq l\leq D}J_{ll}. \tag{21}\]
Under some mild conditions on random weights \(K_{ij}\), a result from Ref. [71] shows that the inequality Eq. (21) becomes an equality in the large \(D\) limit with probability approaching 1. Motivated by this observation and the bound from Eq. (19), we expect a similar asymptotic tightness for Eq. (19). However, it is an open question whether the result from Ref. [71] applies to the bound of the spectral gap, Eq. (19). Further, the proof presented in Ref. [71] makes use of the Central Limit Theorem for the diagonal elements \(J_{ll}\) of \(\mathcal{J}\), and so the corresponding result does not apply to the case of constant or logarithmically increasing (with \(D\)) sparsity parameter \(\varphi\).
Nevertheless, the above arguments allow us to conjecture that in the limit of large \(D\) and \(\varphi\ll D\) the following
\[\gamma_{*}\approx\min_{1\leq l\leq D}J_{ll}, \tag{22}\]
holds for general, non-hermitian random generator matrices \(\mathcal{K}\), with iid and non-exotic weight distributions. We
support our conjecture with numerical data presented in Figures 5 (a) and (b). We quantify the approximation in Eq. (22) by the relative error between the spectral gap \(\gamma_{*}\) and the minimum \(\min_{1\leq l\leq D}J_{ll}\) of the diagonal of \(\mathcal{J}\),
\[\delta\gamma_{*}=\frac{|\gamma_{*}-\min_{1\leq l\leq D}J_{ll}|}{\gamma_{*}}. \tag{23}\]
Figure 5 shows \(\langle\delta\gamma_{*}\rangle\) as a function of \(\varphi\) and \(D\) for the \(\chi_{2}^{2}\) distribution and the standard uniform distribution. The average relative error is at least two orders of magnitude smaller than the average spectral gap shown in Figure 4 (b) and (e). For increasing \(D\), the approximation in Eq. (22) improves. Thus, the approximation in Eq. (22) works well in the case \(\varphi\ll D\).
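A single-sample sketch of the comparison behind Eqs. (22) and (23) (reusing the sampler from Sec. II; Figure 5 reports the corresponding ensemble averages):

```python
import numpy as np

rng = np.random.default_rng(4)
D, phi = 800, 3
K = sparse_kolmogorov(D, phi, lambda n: rng.chisquare(2, n), rng)

lam = np.linalg.eigvals(K)
re = lam.real
gap = np.min(np.abs(re[re < -1e-8]))       # Eq. (16): exclude the zero mode lambda_1 = 0
min_J = np.min(-np.diag(K))                # the diagonal of J is minus the diagonal of K

print(gap, min_J, abs(gap - min_J) / gap)  # Eq. (22) and the relative error of Eq. (23)
```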
### Extreme value theory
The distribution of the right-hand side of Eq. (22) can be tackled with the theory of extreme values. As all non-zero entries of \(\mathcal{M}\) (edge weights) are identically and independently distributed random variables, so are the diagonal entries of \(\mathcal{J}\). Let the cumulative distribution function (cdf) of the diagonal entries \(J_{ll}\) of \(\mathcal{J}\) be denoted by \(F\) and its probability density function by \(f(x)=\frac{d}{dx}F(x)\). If the edge weights are distributed according to a \(\chi^{2}\) distribution (or any gamma distribution) the cdf \(F\) of \(J_{ll}\) is a gamma distribution function. If the edge weights are uniformly distributed, \(F\) is an Irwin-Hall distribution function, see Table 1. The expected value of
Figure 4: The average spectral gap \(\langle\gamma_{*}\rangle\) with \(\chi_{2}^{2}\) (top) and standard uniform (bottom) weight distributions. Solid lines in the log-log plots are analytical predictions from Eq. (25) in **(a)** and Eq. (28) in **(d)**. Black dashed lines in the heatmaps denote contours of constant gap. White circles in the heatmap in **(e)** are given by Eq. (30).
Figure 5: The average relative error between the spectral gap and the minimal value of \(\mathcal{J}\) in the top row (a) and (b) and between the horizontal extent and the maximum of \(\mathcal{J}\) in the bottom row (c) and (d). The weight distribution is \(\chi_{2}^{2}\) on the left and the standard uniform distribution on the right. Averages are over 100 samples. See Eq. (23) and Eq. (38) for the definition of the relative errors \(\delta\gamma_{*}\) and \(\delta\tilde{\gamma}\), respectively.
the minimum, \(\min_{1\leq l\leq D}J_{ll}\), is given in terms of \(F\) (and \(f\)) by
\[\left\langle\min_{1\leq l\leq D}J_{ll}\right\rangle=D\int dx\,x\,f(x)\,(1-F(x))^{D-1}. \tag{24}\]
Eq. (22) and Eq. (24) imply that
\[\langle\gamma_{*}\rangle\approx D\int dx\,x\,f(x)\,(1-F(x))^{D-1}. \tag{25}\]
We demonstrate the validity of Eq. (25) with Figure 4 (a), where the solid lines, given by Eq. (25), perfectly match numerically sampled average spectral gap \(\langle\gamma_{*}\rangle\). In the next section, we will use the theory of extreme values to handle the integral in Eq. (25).
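For \(\chi_{2}^{2}\) weights the diagonal entries obey \(J_{ll}\sim\) Gamma\((\varphi,2)\) (Table 1), so the right-hand side of Eq. (25) can also be evaluated directly; a sketch assuming SciPy (the integration cutoff is a numerical convenience, not part of the formula):

```python
import numpy as np
from scipy import stats
from scipy.integrate import quad

def predicted_gap(D, phi):
    """Evaluate Eq. (25) for J_ll ~ Gamma(phi, scale=2), i.e. chi^2_2 edge weights."""
    dist = stats.gamma(a=phi, scale=2.0)
    upper = dist.ppf(min(1.0, 50.0 / D))   # the integrand is negligible beyond F(x) ~ 50/D
    integrand = lambda x: x * dist.pdf(x) * dist.sf(x) ** (D - 1)
    val, _ = quad(integrand, 0.0, upper)
    return D * val

for D in (10**3, 10**4, 10**5):
    print(D, predicted_gap(D, phi=3))      # decreases roughly as D**(-1/phi), cf. Eq. (29)
```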
#### iv.2.1 Power-law tail distributions
Let us consider first the case \(\varphi=\mathrm{const}\) and increasing \(D\). By the Fisher-Tippett-Gnedenko (or 'extreme value') theorem [75], \(\min_{1\leq l\leq D}J_{ll}\) converges in law, under some mild assumptions on the distribution of \(J_{ll}\) and proper renormalization, to the Weibull distribution. The Weibull cumulative distribution function is given by \(\Psi_{\beta}(x)=e^{-x^{\beta}}\), where \(\beta>0\) and the support is on the positive real line.
For distributions of \(J_{ll}\) with power-law left tail, the renormalization of \(\min_{1\leq l\leq D}J_{ll}\) for convergence to the Weibull distribution is well known, see e.g. Theorem 3.3.2, page 137 in Ref. [75]. We use a version modified to our case. Let a positive random variable \(X\) have cdf \(F\) with \(\beta\)-power left tail, i.e.
\[F(x)=Cx^{\beta}\quad\mathrm{for}\ 0\leq x\leq C^{1/\beta}, \tag{26}\]
where \(C>0\) is a constant. Further, let \(m_{D}=\min_{1\leq l\leq D}X_{l}\), where the \(X_{l}\) are iid copies of \(X\). Then
\[(DC)^{1/\beta}m_{D}\to\Psi_{\beta}\quad\mathrm{in\ law}. \tag{27}\]
The Irwin-Hall distribution has a left power-law tail given by \(F(x)=\frac{x^{\varphi}}{\varphi!}\) for \(0\leq x\leq 1\). The constants for the Irwin-Hall distribution are listed in table 1.
We assume that the convergence in Eq. (27) is not only in distribution but that the renormalized moments of \(m_{D}\) converge as well. If the convergence of the moments is sufficiently fast, then Eq. (27) together with Eq. (22) imply
\[\langle\gamma_{*}\rangle\approx\langle m_{D}\rangle\approx\Gamma\left(1+\frac{ 1}{\varphi}\right)(\varphi!)^{1/\varphi}D^{-1/\varphi} \tag{28}\]
when the weight distribution (distribution of non-zero off-diagonal elements of \(\mathcal{K}\)) is such that the diagonal of \(\mathcal{J}\) has a power-law left tail and the coefficients \(C\) and \(\beta\) are given by \(C=1/\varphi!\) and \(\beta=\varphi\).
Finally, we consider the case that the weight distribution is uniform. We observe that the approximation in Eq. (28) works very well in this case. The solid lines in Figure 4 (d) are given by the right-hand side of Eq. (28) and they match the numerically calculated average spectral gap.
Eq. (28) implies that, for constant \(\varphi=\mathrm{const}\) and increasing \(D\), the average spectral gap decreases as
\[\langle\gamma_{*}\rangle\sim D^{-1/\varphi}. \tag{29}\]
In Figure 4 (f) we show that the numerically retrieved power-law exponents of the average spectral gap, Figure 4 (d), match the scaling in Eq. (29).
We find that the large deviation result is not only valid for constant \(\varphi\) and increasing \(D\) but also for \(\varphi\) increasing logarithmically with \(D\); see Figure 4 (d). This allows us to estimate the crossover from decreasing to increasing spectral gap. Let \(c\) denote a constant and let \(\langle\gamma_{*}\rangle=c\). Then by Eq. (28)
\[D\approx\left[\frac{\Gamma\left(1+\frac{1}{\varphi}\right)}{c}\right]^{ \varphi}\varphi!. \tag{30}\]
In Figure 4 (e) the contour lines of constant average spectral gap \(c\) perfectly line up with the functional dependence of \(D\) on \(\varphi\) through Eq. (30) shown as white dots.
To find \(\varphi\) as a function of \(D\) such that the average spectral gap is constant, we assume that \(\varphi\) is reasonably large and approximate \(\Gamma\left(1+\frac{1}{\varphi}\right)\approx 1\) and by Stirling's formula \((\varphi!)^{1/\varphi}\approx\frac{\varphi}{e}\). Denoting \(y=\log\frac{\varphi}{ce}\) and rearranging Eq. (30) gives us
\[\frac{\log D}{ce}\approx ye^{y}, \tag{31}\]
which can be inverted by the Lambert \(W\) function. Resubstituting \(\varphi=cee^{y}\) we arrive at
\[\varphi\approx ce\cdot e^{W\left(\frac{\log D}{ce}\right)}, \tag{32}\]
which for \(\log D\geq ce^{2}\) behaves as [81]
\[\varphi\approx\frac{\log D}{\left(\log\log D-\log c-1\right)^{1-\eta(D)}}, \tag{33}\]
where \(\eta(D)\to 0\) slowly, as \(\eta(D)\sim(\log\log D)^{-1}\). So in the limit \(1\ll\varphi\ll D\) the crossover from decreasing to
\begin{table}
\begin{tabular}{|c|c|c|}
\hline
off-diag. \(\mathcal{K}=M_{ij}\) & \(\chi_{k}^{2}\) & uniform \\
diag. \(\mathcal{K}=J_{ll}\) & gamma\(\left(\frac{k\varphi}{2},2\right)\) & Irwin-Hall \\
\hline
C & \(\frac{2^{\varphi}}{2^{\varphi}}*\) & \(\frac{1}{\varphi!}\) \\
\(\beta\) & \(\frac{1}{2}\varphi*\) & \(\varphi\) \\
\hline
\end{tabular}
\end{table}
Table 1: The distributions of the off-diagonal elements \(M_{ij}\) of \(\mathcal{K}\) (edge weights), the corresponding distributions of the diagonal elements \(J_{ll}\) of \(\mathcal{J}\), and the constants \(C\) and \(\beta\) for the convergence of \(J_{ll}\) to the Weibull distribution \(\Psi_{\beta}\) in Eq. (27). (*) Constants obtained by a power-law approximation of the left tail of the gamma distribution.
increasing spectral gap happens at \(\varphi\sim\log D\) with corrections of the order \(\log\log D\). This confirms our numerical observations that the average spectral gap \(\langle\gamma_{*}\rangle\) appears to be constant for \(\varphi\sim\log D\) in the range of matrix sizes \(D\) we considered.
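Eqs. (30)-(33) are straightforward to evaluate; a small sketch (assuming SciPy's Lambert \(W\)) that, for a target gap \(c\) and uniform weights, computes the matrix size of Eq. (30) and then recovers \(\varphi(D)\) from Eq. (32) up to the Stirling-level approximations made above:

```python
import numpy as np
from scipy.special import gamma as Gamma, factorial, lambertw

c = 0.5                                    # target (constant) average spectral gap

def D_of_phi(phi):
    """Eq. (30): matrix size D at which <gamma_*> = c for given phi (uniform weights)."""
    return (Gamma(1.0 + 1.0 / phi) / c) ** phi * factorial(phi)

def phi_of_D(D):
    """Eq. (32): sparsity phi(D) that keeps the gap at c, via the Lambert W function."""
    return c * np.e * np.exp(lambertw(np.log(D) / (c * np.e)).real)

for phi in (4, 6, 8):
    D = D_of_phi(phi)
    print(phi, D, phi_of_D(D))             # phi_of_D(D) approximately recovers phi
```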
#### iv.3.2 Approximate power-law distributions
If the weight distribution is a \(\chi^{2}\) or an exponential distribution, the diagonal elements of \(\mathcal{J}\) are distributed according to a gamma distribution, see Table 1. The left tail of the gamma distribution only approximately follows a power law. Approximating the left tail by a Taylor expansion, we obtain the constants \(C\) and \(\beta\) presented in Table 1. In particular, for the \(\chi_{2}^{2}\) distribution considered so far in the main text, the power-law approximation of the gamma distribution together with the large deviation result of the previous subsection suggests that the average spectral gap \(\langle\gamma_{*}\rangle\) decreases, for constant \(\varphi\) and increasing \(D\), as a power law in \(D\) with exponent \(-1/\varphi\), see Eq. (29).
In Figure 4 (c) we present the numerically calculated exponents of the power-law decrease of \(\langle\gamma_{*}\rangle\), for \(\chi_{2}^{2}\) weight distribution, with \(D\) and compare it to the prediction \(-1/\varphi\). We find excellent agreement for small \(\varphi\leq 5\). For larger \(\varphi\) the deviation between the numerical exponent and \(-1/\varphi\) is visible but the agreement is still good.
A quantitative comparison between the numerically calculated spectral gap \(\langle\gamma_{*}\rangle\) and the EVT prediction based on a power-law approximation of the left tail of the gamma distribution resulted in poor agreement. As the expected minimum value of the diagonal of \(\mathcal{J}\) agrees perfectly with \(\langle\gamma_{*}\rangle\), we attribute the disagreement to the power-law approximation of the left tail and the slow convergence of Eq. (27) for diagonal elements of \(\mathcal{J}\) distributed according to the gamma distribution.
### Summary
We presented numerical and analytical arguments that, for the weight distributions considered, the average spectral gap decreases as a power-law for constant \(\varphi\) and increasing \(D\) with exponent given (approximately) by \(-1/\varphi\). The crossover between decreasing and increasing spectral gap happens at \(\varphi\sim\log D\), with \(\log\log D\) corrections, for uniform weight distribution. For \(\chi_{2}^{2}\) distributed edge weights the crossover was observed at \(\varphi\sim\log D\). If \(\varphi\) increases with \(D\) faster than \(\log D\) then the average spectral gap increases.
The presented results generalize. Let us assume that the spectral gap is well approximated by the smallest (in magnitude) diagonal of \(\mathcal{J}\), at least in the regime of large \(D\) and \(\varphi\ll D\). Then, after appropriate renormalization, the distribution of the spectral gap is given by the limiting extreme value distribution of the diagonal elements of \(\mathcal{J}\). Thus the classification of functional dependencies of the spectral gap on \(\varphi\) and \(D\) with respect to weight distributions reduces to the classification of limiting extreme value distributions and renormalizations. Extensive research has been conducted on the latter and the renormalizations of a lot of common distributions are well known [74; 75]. Thus the presented approach allows the calculation of the distribution of the spectral gap for broad classes of weight distributions.
## V Horizontal extent (largest absolute real part)
In this section we investigate the horizontal extent \(\tilde{\gamma}\) of the spectrum given by the eigenvalue with largest absolute real part
\[\tilde{\gamma}=\max_{1\leq i\leq D}|\operatorname{Re}\lambda_{i}|. \tag{34}\]
We focus on the averaged horizontal extent \(\langle\tilde{\gamma}\rangle\). We show that for \(\varphi\sim\log D\) the average horizontal extent increases logarithmically with \(D\) for \(\chi_{2}^{2}\) or uniformly distributed edge weights. For constant \(\varphi\) and increasing \(D\) the dependence of \(\langle\tilde{\gamma}\rangle\) is qualitatively very different for the two distributions. For the \(\chi_{2}^{2}\) distribution \(\langle\tilde{\gamma}\rangle\) increases logarithmically, while for the uniform distribution, the average horizontal extent converges to \(\varphi\) as a power-law in \(D\). Ultimately, this is because the support of the uniform distribution is bounded, while the right tail of the \(\chi_{2}^{2}\) distribution extends to infinity.
Figure 6: Average horizontal extent \(\langle\tilde{\gamma}\rangle\) with \(\chi_{2}^{2}\) (left) and standard uniform (right) weight distributions. \(\varphi\) is constant (top) and \(\varphi\sim\log D\) (bottom). Solid lines are given by Eq. (39) (left) and Eq. (44) (right).
The structure of this section follows closely the one from Section IV. We first present numerical results demonstrating the above statements. We then argue that the horizontal extent is given by the largest, in magnitude, diagonal element of \(\mathcal{K}\) and invoke again EVT to analytically underpin the functional dependencies of \(\langle\tilde{\gamma}\rangle\) on \(\varphi\) and \(D\).
### Numerical results
In Figure 6 we show the average horizontal extent as a function of \(D\) for constant \(\varphi\) and \(\varphi\sim\log D\) for edge weights distributed according to a \(\chi_{2}^{2}\) (a,c) and the standard uniform distribution (b,d). In (a) the dependence of \(\langle\tilde{\gamma}\rangle\) on \(D\) for constant \(\varphi\) shows a clear logarithmic increase with \(D\) for \(\chi_{2}^{2}\) distributed edge weights. In contrast, for the uniform distribution, the average horizontal extent converges to \(\varphi\) as a power law in \(D\), see (b). The power-law behavior sets in for small \(\varphi\) only for larger \(D\). For \(\varphi=4\) and \(\varphi=5\) deviations from the straight lines in Figure 6 (b) are visible for \(D<10^{5}\) and \(D<10^{4}\), respectively. The average horizontal extent for constant \(\varphi=2\) and \(\varphi=3\) is not shown. We found that it does not converge to \(\varphi\) in the range of matrix sizes \(D\) we investigated.
For \(\varphi\sim\log D\) the dependence of \(\langle\tilde{\gamma}\rangle\) on \(D\) is logarithmic for both the \(\chi_{2}^{2}\) and the uniform distribution, as shown in Figure 6 (c) and (d).
In the remainder of this section, we will present analytic arguments similar to Section IV. We will explain the difference of the dependence of \(\langle\tilde{\gamma}\rangle\) on \(D\) for constant \(\varphi\) between \(\chi_{2}^{2}\) and uniform-like distributions. We show that \(\langle\tilde{\gamma}\rangle\sim\log D\) for both distributions and \(\varphi\sim\log D\).
### Extent \(\approx\) maximum of \(\mathcal{J}\)
By the Perron-Frobenius theorem the spectrum of \(\mathcal{K}\) is confined to the ball centered around \(\min_{i}K_{ii}<0\) with radius \(r=|\min_{i}K_{ii}|\). Thus \(2\max_{1\leq l\leq D}J_{ll}\geq|\operatorname{Re}\lambda|\) for all eigenvalues \(\lambda\), so
\[\tilde{\gamma}\leq 2\max_{1\leq l\leq D}J_{ll}. \tag{35}\]
For symmetric generator matrices \(\mathcal{K}\) we showed in Section IV.2 that
\[\max_{1\leq l\leq D}J_{ll}\leq\tilde{\gamma} \tag{36}\]
and stated a result from [71] that for symmetric random generator matrices under mild conditions on the weights \(K_{ij}\), \(\max_{1\leq l\leq D}J_{ll}\) concentrates around the largest eigenvalue in magnitude, \(\tilde{\gamma}\). This together with the upper bound by the Perron-Frobenius theorem Eq. (35) leads to our conjecture that the concentration of \(\max_{1\leq l\leq D}J_{ll}\) around \(\tilde{\gamma}\) in the symmetric case extends to the non-hermitian case as well
\[\tilde{\gamma}\approx\max_{1\leq l\leq D}J_{ll}. \tag{37}\]
A concentration result similar to the one in [71] for non-hermitian random generator matrices \(M\) has to the best of our knowledge not appeared in the literature.
To quantify the deviation in Eq. (37) we introduce the relative error of \(\tilde{\gamma}\) and \(\max_{1\leq l\leq D}J_{ll}\)
\[\delta\tilde{\gamma}=\frac{|\tilde{\gamma}-\max_{1\leq l\leq D}J_{ll}|}{\tilde {\gamma}}. \tag{38}\]
In Figure 5 (c) and (d) we show the average relative error \(\langle\delta\tilde{\gamma}\rangle\) as a function of \(D\) and \(\varphi\). If the edge weights are \(\chi_{2}^{2}\) distributed then for \(2\leq\varphi\leq 20\) and \(10^{3}\leq D\leq 10^{5}\) the average relative error is smaller than \(\approx 10^{-3}\) and decreases with increasing \(D\). Thus Eq. (37) is a good approximation for large \(D\) and \(\varphi\ll D\) and the error appears negligible in the limit of large \(D\). For uniformly distributed edge weights the average relative error \(\langle\delta\tilde{\gamma}\rangle\) is smaller than \(10^{-1}\) for \(2\leq\varphi\leq 20\) and \(10^{3}\leq D\leq 10^{5}\) and for \(\varphi\geq 4\) decreases with \(D\). For \(2\leq\varphi\leq 3\), the error does not seem to decrease for increasing \(D\). We conclude that Eq. (37) is an excellent approximation for large \(D\) and \(4\leq\varphi\ll D\).
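The analogue of the earlier gap check, now for Eqs. (34), (37) and (38) (a single-sample sketch reusing the sampler from Sec. II):

```python
import numpy as np

rng = np.random.default_rng(5)
D, phi = 800, 5
K = sparse_kolmogorov(D, phi, lambda n: rng.chisquare(2, n), rng)

lam = np.linalg.eigvals(K)
extent = np.max(np.abs(lam.real))          # Eq. (34)
max_J = np.max(-np.diag(K))                # largest diagonal entry of J

print(extent, max_J, abs(extent - max_J) / extent)   # Eq. (37) and the error of Eq. (38)
```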
### Extreme value theory
Recall that the diagonal elements of \(\mathcal{J}\) are iid random variables. Similar to the minimum extreme value statistics, the expected value of \(\max_{1\leq l\leq D}J_{ll}\) is
\[\left\langle\max_{1\leq l\leq D}J_{ll}\right\rangle=D\int dx\,x\,f(x)\,F(x)^{D-1}, \tag{39}\]
where we denoted the cdf of the diagonal elements \(J_{ll}\) of \(\mathcal{J}\) by \(F\) and the pdf by \(f=\frac{d}{dx}F\). A numerical calculation of the integral in Eq. (39) for \(\chi_{2}^{2}\) distributed edge weights is shown in Figure 6 (a) and (c) and compared to the average horizontal extent \(\langle\tilde{\gamma}\rangle\). The quantities agree excellently.
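Eq. (39) can be evaluated in the same way as Eq. (25); a sketch for the \(\chi_{2}^{2}\) case, where \(J_{ll}\sim\) Gamma\((\varphi,2)\) (assuming SciPy; the lower cutoff is only a numerical convenience):

```python
import numpy as np
from scipy import stats
from scipy.integrate import quad

def predicted_extent(D, phi):
    """Evaluate Eq. (39) for J_ll ~ Gamma(phi, scale=2), i.e. chi^2_2 edge weights."""
    dist = stats.gamma(a=phi, scale=2.0)
    lower = dist.ppf(1.0 - min(1.0, 50.0 / D))   # F(x)^(D-1) is negligible below this point
    integrand = lambda x: x * dist.pdf(x) * dist.cdf(x) ** (D - 1)
    val, _ = quad(integrand, lower, np.inf)
    return D * val

for D in (10**3, 10**4, 10**5):
    print(D, predicted_extent(D, phi=3))   # grows roughly logarithmically with D, cf. Eq. (41)
```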
The remainder of this section is devoted to applying the Fisher-Tippett-Gnedenko (or extreme value) theorem to \(\max_{1\leq l\leq D}J_{ll}\) and thus analytically calculating the integral in Eq. (39).
#### iv.3.1 Gamma distribution
Recall that if the edge weights are distributed according to a \(\chi^{2}\) distribution then the diagonal elements of \(\mathcal{J}\) are gamma distributed. The maximum of \(D\) gamma distributed iid random variables \(X_{l}\) converges in law to a standard Gumbel distribution Gum [71],
\[c\left[\max_{1\leq l\leq D}X_{l}-d(D)\right]\rightarrow\text{Gum}\quad\text{ in law}, \tag{40}\]
with parameters \(c\) and \(d(D)\) given in table 2 for the gamma and \(\chi^{2}\) distributions. The cdf of the Gumbel distribution is \(x\to e^{-e^{-x}}\) with mean \(\gamma\), where \(\gamma\) denotes
the Euler-Mascheroni constant, not to be confused with the horizontal extent \(\tilde{\gamma}\). Assuming that the first moment converges as well and that the convergence is fast enough, Eq. (37) yields
\[\langle\tilde{\gamma}\rangle\approx\left\langle\max_{1\leq l\leq D}J_{ll} \right\rangle\approx\frac{\gamma}{c}+d(D). \tag{41}\]
For constant \(\varphi\) and increasing \(D\) the dominant contribution of \(d(D)\) is \(2\log D\) for the \(\chi^{2}\) distribution. Thus the increase is expected to be logarithmic. This is qualitatively consistent with numerical calculations of the average horizontal extent of random generator matrices \(\mathcal{K}\) with \(\chi_{2}^{2}\) distributed edge weights and constant \(\varphi\) shown in Figure 6 (a). There \(\tilde{\gamma}\) increases logarithmically with \(D\). Quantitatively, the deviation between the average horizontal extent and the right-hand side of Eq. (41) is not small. The deviation decreases for increasing \(D\) (not shown). We attribute the slow convergence to a sub-optimal choice of parameters \(c\) and \(d(D)\), as the right-hand side of Eq. (39) agrees perfectly with the numerically calculated \(\langle\tilde{\gamma}\rangle\).
Let us assume that Eq. (41) remains valid for \(\varphi\) increasing logarithmically. Note that for the \(\chi^{2}\) distribution, the shape parameter of the corresponding gamma distribution is linear in \(\varphi\). Thus, for large enough \(\varphi\), by Stirling's formula the dominant term in Eq. (41) is logarithmic in \(D\). Hence the average horizontal extent \(\langle\tilde{\gamma}\rangle\) should increase logarithmically for \(\varphi\sim\log D\). This is again qualitatively confirmed by the numerical results shown in Figure 6 (c), where \(\langle\tilde{\gamma}\rangle\) as a function of \(D\) for \(\varphi\sim\log D\) increases logarithmically with \(D\).
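The closed-form estimate of Eq. (41) with the Table 2 constants can be evaluated directly; the short sketch below does so for \(\chi_{2}^{2}\) edge weights (so \(k=\varphi\), \(\theta=2\)), keeping in mind the slow convergence towards the exact integral of Eq. (39) noted above. The function name is illustrative.

```python
import numpy as np
from scipy.special import gammaln

def gumbel_extent_estimate(D, phi, n=2, theta=2.0):
    """Right-hand side of Eq. (41) with the Table 2 constants for chi^2_n
    edge weights: k = n*phi/2, c = 1/theta and
    d(D) = theta*(log D + (k-1) log log D - log Gamma(k))."""
    k = 0.5 * n * phi
    d = theta * (np.log(D) + (k - 1.0) * np.log(np.log(D)) - gammaln(k))
    return np.euler_gamma * theta + d      # gamma/c + d(D)

for D in (10**3, 10**4, 10**5):
    print(D, gumbel_extent_estimate(D, phi=5))
```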
#### iv.3.2 Power-law tail distributions with bounded support
If the distribution of the diagonal of \(\mathcal{J}\) has bounded support and its right tail decays as a power law, then we can reuse the extreme value result from Section IV.3. For a random variable \(X\) with right support endpoint \(x_{0}\) and cdf \(F\) whose right tail behaves as a power law, i.e.
\[1-F(x)=C(x_{0}-x)^{\beta}\quad\text{for }x_{0}-C^{-1/\beta}\leq x\leq x_{0}, \tag{42}\]
the maximum \(m_{D}=\max_{1\leq l\leq D}X_{l}\), where the \(X_{l}\) are iid copies of \(X\), converges, properly renormalized, in law to a Weibull distribution
\[(DC)^{1/\beta}(x_{0}-m_{D})\to\Psi_{\beta}\quad\text{in law}. \tag{43}\]
Again, assuming that the first moment converges as well and the convergence is fast enough we get for edge weights distributed according to the standard uniform distribution,
\[\langle\tilde{\gamma}\rangle\approx\langle\max_{1\leq l\leq D}J_{ll}\rangle \approx\varphi-\Gamma\left(1+\frac{1}{\varphi}\right)\left(\varphi!\right)^{ 1/\varphi}D^{-1/\varphi}. \tag{44}\]
We find excellent numerical agreement of the right-hand side of Eq. (44) with the average horizontal extent \(\langle\tilde{\gamma}\rangle\) for \(\varphi\geq 4\). In Figure 6 (b) we show \(\langle\tilde{\gamma}\rangle\) as a function of \(D\) for fixed \(\varphi\). The solid lines denote the right-hand side of Eq. (44). They agree perfectly for large enough \(D\) and \(\varphi\geq 4\). For \(4\leq\varphi\lessapprox 6\) and small \(D\) the agreement is still reasonable but deviations are clearly visible. Thus for fixed \(\varphi\geq 4\) and increasing \(D\), \(\langle\tilde{\gamma}\rangle\) converges to \(\varphi\) as a power in \(D\) with exponent \(-1/\varphi\),
\[\varphi-\langle\tilde{\gamma}\rangle\sim D^{-1/\varphi}. \tag{45}\]
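The prediction of Eq. (44) is easy to check against a direct Monte Carlo estimate of \(\langle\max_{l}J_{ll}\rangle\) for an idealized iid diagonal (sums of \(\varphi\) standard uniform weights); this ignores the weak correlations of the actual generator matrix, so it probes the extreme value approximation rather than the full spectrum. Sample sizes and function names below are illustrative.

```python
import numpy as np
from scipy.special import gamma as Gamma, factorial

rng = np.random.default_rng(1)

def weibull_extent_estimate(D, phi):
    """Right-hand side of Eq. (44) for standard-uniform edge weights."""
    return phi - Gamma(1.0 + 1.0 / phi) * factorial(phi) ** (1.0 / phi) * D ** (-1.0 / phi)

def mc_max_diag(D, phi, samples=200):
    """Monte Carlo estimate of <max_l J_ll> for J_ll = sum of phi iid U(0,1)."""
    J = rng.random((samples, D, phi)).sum(axis=-1)
    return J.max(axis=1).mean()

D, phi = 10**4, 5
print("Eq. (44):", weibull_extent_estimate(D, phi), " MC:", mc_max_diag(D, phi))
```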
Numerically we find that Eq. (44) is valid for \(\varphi\) increasing with \(D\) logarithmically, see Figure 6 (d). There we show the average horizontal extent as a function of \(D\) for \(\varphi=\log D\). It increases logarithmically with \(D\). The logarithmic increase can be justified analytically by extending Eq. (44) beyond constant \(\varphi\). In the limit of large enough \(\varphi\) we approximate \(\Gamma(1+1/\varphi)\approx 1\) and by Stirling's formula \((\varphi!)^{1/\varphi}\approx\varphi/e\) and get
\[\langle\tilde{\gamma}\rangle\approx\varphi(1-D^{-1/\varphi})\sim\varphi. \tag{46}\]
Thus in the limit of large \(\varphi\) the average horizontal extent increases as \(\sim\varphi\sim\log D\).
### Summary
We showed numerically and analytically that the horizontal extent increases logarithmically for \(\chi_{2}^{2}\) and uniformly distributed edge weights if \(\varphi\sim\log D\). For constant \(\varphi\geq 4\) and uniformly distributed edge weights the horizontal extent increases to \(\varphi\) as \(\sim\varphi-D^{-1/\varphi}\), while \(\langle\tilde{\gamma}\rangle\) increases logarithmically for constant \(\varphi\) and \(\chi_{2}^{2}\) distributed edge weights.
The difference of the dependence of the average horizontal extent on \(\varphi\) between the \(\chi_{2}^{2}\) and uniform distribution goes back to the difference of the right tails. When edge weights are uniformly distributed the diagonal has bounded support and a power-law right tail, while it has unbounded support and an exponentially decaying right tail for \(\chi_{2}^{2}\) distributed edge weights.
Similar to the spectral gap, the limiting distribution of the horizontal extent is given by the limiting extreme value distribution of the diagonal elements of \(\mathcal{K}\), under the assumption that the largest (in magnitude) diagonal element
\begin{table}
\begin{tabular}{|c|c|} \hline & gamma\((k,\theta)\) \\ \(c\) & \(\frac{1}{\theta}\) \\ \(d(D)\) & \(\theta(\log D+(k-1)\log\log D-\log\Gamma(k))\) \\ \hline \(\chi_{n}^{2}\) & gamma\((k=\frac{n}{2}\varphi,\theta=2)\) \\ \hline \end{tabular}
\end{table}
Table 2: (top) The normalizing parameters \(c\) and \(d(D)\) for \(\max_{1\leq l\leq D}J_{ll}\) to converge to the Gumbel distribution, where \(J_{ll}\) is gamma distributed with shape parameter \(k\) and scale parameter \(\theta\), see Eq. (40). (bottom) The relation between the \(\chi^{2}\) and the gamma distribution.
of \(\mathcal{K}\) approximates \(\tilde{\gamma}\) well enough. Thus the classification of the horizontal extent with respect to weight distributions reduces to the classification of convergence in extreme value theory.
## VI Complex spacing ratios
So far we considered the marginal distribution of eigenvalues of sparse random generator matrices. But correlations between the eigenvalues are also of interest. Correlations between eigenvalues of real spectra are often quantified with the distribution of consecutive level spacings or their ratios. The latter avoids the need to unfold the corresponding spectrum [82, 83] and has been generalized to complex eigenvalues in the recent work [61]. The complex spacing ratio (CSR) of eigenvalue \(\lambda\) of matrix \(\mathcal{K}\) is defined as
\[z=\frac{\lambda^{NN}-\lambda}{\lambda^{NNN}-\lambda}, \tag{47}\]
where \(\lambda^{NN}\) and \(\lambda^{NNN}\) denote the closest, by the Euclidean distance, and second closest eigenvalue (of \(\mathcal{K}\)) to \(\lambda\), respectively. By definition, the density of CSRs is supported on the unit disk on the complex plane.
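The definition in Eq. (47) translates directly into a short routine; the sketch below computes the CSRs of a given spectrum by brute-force nearest-neighbour search and, for convenience, also evaluates the two summary statistics \(\langle r\rangle\) and \(-\langle\cos\theta\rangle\) used later in this section. The example spectrum is a plain real Gaussian matrix, shown only for illustration and not meant to reproduce any ensemble studied here exactly.

```python
import numpy as np

def complex_spacing_ratios(eigs):
    """Compute z = (lam_NN - lam) / (lam_NNN - lam) of Eq. (47) for each
    eigenvalue, using Euclidean distances in the complex plane."""
    eigs = np.asarray(eigs)
    z = np.empty_like(eigs)
    for i, lam in enumerate(eigs):
        d = np.abs(eigs - lam)
        d[i] = np.inf                        # exclude the eigenvalue itself
        nn, nnn = np.argsort(d)[:2]          # nearest and next-nearest neighbours
        z[i] = (eigs[nn] - lam) / (eigs[nnn] - lam)
    return z

rng = np.random.default_rng(2)
A = rng.normal(size=(1000, 1000)) / np.sqrt(1000)   # illustrative real Gaussian matrix
lam = np.linalg.eigvals(A)
lam = lam[np.abs(lam.imag) > 1e-14]                 # drop (near-)real eigenvalues, as in the text
z = complex_spacing_ratios(lam)
print("<r> =", np.abs(z).mean(), " -<cos theta> =", -np.cos(np.angle(z)).mean())
```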
If the eigenvalues \(\lambda\) are uncorrelated, the CSR density is uniform. Eigenvalues of generic random matrix ensembles are typically correlated and feature mutual repulsion. This leads to a vanishing CSR density at \(z=0\) and \(z=1\). According to [84], complex level spacings categorize random matrix ensembles into three universality classes. Generic random matrices fall into one of these classes according to their symmetries. The random generators considered in this paper have real entries, so they obey the same symmetry as real Ginibre matrices (GinOE).
In Figure 7 we show the CSR densities of (a) GinOE members with Gaussian entries and (b)-(d) sparse random generators with \(\chi_{2}^{2}\) distributed edge weights and \(\varphi=1,2,3\). The densities are estimated from 100 samples for \(D=10^{4}\). We also checked that the obtained densities are independent of the weight distribution. As suggested in Ref. [61], we avoid eigenvalues close to the real line (by excluding all eigenvalues from the strip \(|\operatorname{Im}\lambda|<10^{-14}\)) when sampling CSR densities.
The CSR density of GinOE matrices shown in Figure 7 (a) exhibits the typical depletion at \(z=0\) and \(z=1\). In Ref. [40], it was shown that the CSR density obtained for dense random Kolmogorov operators agrees well with the distribution shown in Figure 7 (a). The CSR density of sparse generators with sparsity \(\varphi\geq 2\) (c,d) agrees remarkably well with the GinOE case.
The CSR density for \(\varphi=1\) is anomalous, see Figure 7 (b). It has an extremely high density around \(z=-1\) while being nearly flat on the rest of the unit disk. In this ultimate case, the operator can be presented as
\[\mathcal{K}=\mathcal{V}\cdot(\mathcal{P}-\mathbb{I}), \tag{48}\]
where \(\mathcal{V}\) is a diagonal matrix (with elements distributed according to, e.g., \(\chi_{2}^{2}\)) and \(\mathcal{P}\) is a circulant permutation matrix corresponding to a cyclic unit shift. The spectrum of \(\mathcal{P}-\mathbb{I}\) lies on a circle of unit radius centered at \(\lambda=-1\) and consists of equidistant points, the roots of unity shifted by \(-1\). This spectrum is slightly deformed and split into several loops by the multiplication of \(\mathcal{P}-\mathbb{I}\) with \(\mathcal{V}\). Away from \(\lambda=-1\), \(\mathcal{V}\) dominates, which results in the appearance of a real 'tail'; see Fig. 8.
In graph theory terms, such a sparse random graph fragments into a set of disjoint elementary cycle graphs. The independence of the spectra of these cycles leads to the flatness of the density away from \(z=-1\), while the elementary cycle structure of the connected components is responsible for the CSR peak at \(z=-1\).
To quantify the distance between CSR densities, we use the average length \(\langle r\rangle\) and the average cosine of the angle, \(-\langle\cos\theta\rangle\), of the spacing ratios, where \(\langle\dots\rangle\) again denotes the average over the random matrix ensemble [61]. We numerically estimate \(\langle r\rangle_{\text{GinOE}}\approx 0.7379\) and \(-\langle\cos\theta\rangle_{\text{GinOE}}\approx 0.2347\) for \(100\) matrices of size \(10^{4}\times 10^{4}\). These agree well with \(\langle r\rangle\) and \(-\langle\cos\theta\rangle\) for \(\varphi=2\) and \(\varphi=3\), as shown in Table 3. We found similar results for \(\varphi>3\) (not shown). In contrast, the corresponding quantities for \(\varphi=1\) deviate substantially from \(\langle r\rangle_{\text{GinOE}}\) and \(-\langle\cos\theta\rangle_{\text{GinOE}}\), as also shown in Table 3. We conclude that, for \(\varphi\geq 2\), correlations between eigenvalues of sparse random Kolmogorov operators agree with correlations of eigenvalues of GinOE matrices.
Figure 7: Density of complex spacing ratios for **(a)** real Ginibre ensemble and **(b)-(d)** sparse Kolmogorov operators with \(\varphi=1,2,3\). The number of states \(D=10^{4}\) and densities are obtained from \(10^{2}\) samples. Edge weights are distributed according to the \(\chi_{2}^{2}\) distribution. The color range is from 0 to 0.8 in (a), (c), and (d) and from 0 to 260 in (b).
## VII Discussion
### Summary of results
Motivated by the inability of dense random Kolmogorov operators to capture spectral features of model Markov processes, we introduced and analyzed an ensemble of sparse random Kolmogorov operators. We showed that, if the number of non-zero elements per column (and row) \(\varphi\) increases with the matrix size \(D\), the bulk of the spectrum is shifted away from the stationary eigenvalue \(0\) in the limit of large matrix size \(D\). This is independent of the weight distribution, i.e. of the distribution of the nonzero matrix elements.
In contrast, the spectral edges depend on the tails of the weight distribution. The tails of the weight distribution determine, together with \(\varphi\), the tails of diagonal elements of generator matrices. We numerically showed that the spectral edges are well approximated by the extremes of the diagonal elements. From extreme value theory it follows that for diagonal distributions with power-law left tails (this includes among others edge weights being uniform, exponential, \(\chi^{2}\), gamma or beta distributed), the average spectral gap decreases as a power-law in \(D\) for fixed \(\varphi\), is constant for \(\varphi\sim\log D\) and increases, whenever \(\varphi\) increases with \(D\) substantially faster than \(\log D\).
A similar approach was used to calculate the horizontal extent, given by the eigenvalue with the smallest real part. We linked the horizontal extent to the largest diagonal element (in magnitude) of the generator matrix and used extreme value theory to calculate the latter.
Finally, we showed that complex spacing ratio distributions of generator matrices with \(\varphi\geq 2\) follow the distribution typical of Ginibre's Orthogonal Ensemble, while there is a strong anomaly for \(\varphi=1\).
### Open questions
(1) We have introduced sparsity to model \(\mathcal{K}\)-generators of physical Markov processes, and have used the sparsity to tune spectral features of the generators. There are other ways of providing random matrices with a structure that models physical constraints (e.g., locality). E.g., one could consider banded matrices [85, 86, 87, 88, 89, 90, 91, 92, 93, 94] or matrices with decaying off-diagonal terms [95, 96, 93] or temperature based models [97]. These are alternate routes to tuning spectral features. To the best of our knowledge, generators of CTMCs with such structures have not yet been considered.
(2) The application of extreme value theory to find the limiting distribution of the spectral edges relied on the observation that the spectral edges are well approximated by the minimum and the maximum of the diagonal of the generator matrix. By the Courant-Fischer theorem, the extremes of the diagonal are upper and lower bounds, respectively, for symmetric generators. In this case, a concentration of the largest eigenvalue in magnitude around the maximum of the diagonal was shown in [71]. An analytical treatment of general non-symmetric generators and the spectral gap is to the best of our knowledge not known. We hope that our results motivate a rigorous investigation of the connection between the spectral edges and the diagonal of the generator matrix.
(3) Generators of CTMCs have real entries and thus their eigenvalues are real or come in complex conjugate pairs. In the investigation of correlations between eigenvalues, we left out real eigenvalues. The appearance of a large number of real eigenvalues in the spectrum of non-Hermitian matrices is a phenomenon of wide interest [98, 99, 100, 55, 101, 102, 103, 104, 105, 106, 107]. For real Ginibre matrices, the average fraction of real eigenvalues is \(\sim D^{-1/2}\)[99, 100, 101], while for dense generators of CTMCs, it is substantially larger [55]. We observed that the fraction of real eigenvalues is larger for small \(\varphi\) and smaller for larger \(\varphi\) (not presented). Understanding the functional dependence of the number of real eigenvalues for sparse CTMC generators is an interesting problem.
(4) We focused on the location and extent of the bulk spectrum as well as the spectral edges. One could inquire about the evolution of other features of the spectral distribution as a function of sparsity, e.g., about the envelope of the spectral distribution. In [40] the spectral density of dense random CTMCs was described
Figure 8: Spectrum of a random Kolmogorov operator with \(\varphi=1\) and \(\chi^{2}_{2}\) weight distribution. The matrix size is \(D=10^{3}\). Inset: same data plotted with both axes having the same scale.
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline & GinOE & \(\varphi=1\) & \(\varphi=2\) & \(\varphi=3\) \\ \hline \(\langle r\rangle\) & 0.7379 & 0.7871 & 0.7359 & 0.7372 \\ \(-\langle\cos\theta\rangle\) & 0.2347 & 0.3516 & 0.2225 & 0.2284 \\ \hline \end{tabular}
\end{table}
Table 3: Mean and angle of spacing ratio distributions obtained with \(10^{2}\) samples of random \(10^{4}\times 10^{4}\) matrices rounded to the 4th digit. The matrix ensembles correspond to the ones shown in Figure 7.
by the convolution of two asymptotically free matrices, leading to the prominent spindle shape. Free probability arguments break down for sparse random CTMCs. Analytical tools which have been employed to calculate the spectral density of sparse, random matrices include replica tricks [108; 109; 110; 111; 112], single defect and effective medium approximations [113; 114; 115], supersymmetry-based techniques [116; 117] and the cavity approach [112; 118; 119; 120]. Spectral properties of symmetric, sparse, random CTMCs have been investigated with the cavity method [121; 122; 123] and with supersymmetric approaches [117]. Investigations of the spectral density of non-symmetric sparse, random Kolmogorov operators with the above methods might be an interesting objective.
(5) We have considered sparse generators of CTMCs based on strongly connected, sparse random graphs. It is an open question whether our results can be generalized to other sparse graph ensembles. One potential avenue to explore is directed Erdős-Rényi (dER) graphs.
In dER graphs, the probability of an edge connecting any two vertices is \(0<p\leq 1\). For a dER graph to be strongly connected with a high probability, the value of \(p\) must exceed \(\sim\log D/D\)[124; 125]. As a result, the average degree of the vertices must increase logarithmically with \(D\) to ensure strong connectivity. Consequently, the range of constant average vertex degree and increasing vertex number \(D\) is excluded.
Nonetheless, modifying the dER graph by enforcing a minimum (in- and out-) degree \(\geq 2\) guarantees strong connectivity with high probability [79]. Exploring the spectral properties of CTMC generators based on dER graphs may represent a promising next step towards generalizing our results.
(6) Finally, there is an interesting question: What could 'sparsity' mean in the quantum limit? Namely, what is 'sparsity' for Lindblad operators?
Here we start from the genetic link which allows one to obtain a generator of a classical (quantum) Markovian evolution as a properly normalized, non-probability- (trace-) preserving stochastic map (channel). Eq. (3), where \(\mathcal{M}\) is a non-probability-preserving map and \(\mathcal{J}\) takes care of normalization, illustrates this link in the case of Kolmogorov operators. It seems intuitive that the sparsity \(\varphi\) of a stochastic matrix can be associated with the rank \(r\) of a quantum channel [126].
Thus, in the ultimate limit \(\varphi=r=1\), quantum versions of stochastic maps (which are now permutations) are rank-one channels (which are unitaries). It is tempting to extend this analogy beyond the limit \(\varphi=r=1\) and state that mixed-unitary channels (convex combinations of unitaries) are quantum versions of bistochastic matrices (which are, according to Birkhoff, convex combinations of permutations [127]).
However, there is also the notion of doubly stochastic (or "unital") channels [126]. The two classes, mixed-unitary and doubly stochastic channels, are not identical: there are doubly stochastic channels that lie outside of the convex hull of unitaries [128]. Which class to associate with classical bistochastic maps is then 'a matter of taste' [129]. To resolve the dichotomy, one could rely on the concept of super-decoherence [40] and state that all channels which have classical bistochastic matrices as their fully decohered versions are quantum analogues of these matrices. In this case the broader class of doubly stochastic channels is chosen [129].
The superdecoherence-based reasoning can also be applied to generators. In this case the unitary (Hamiltonian) part of a Lindblad operator does not play any role, since it vanishes in the limit of complete decoherence [40], and the quantum 'sparsity' is defined by the rank of the dissipative part (the minimal number of jump operators).
Interestingly, Lindblad operators of ultra-low rank \(r=1\) were considered in Refs. [52] and [130]. Features similar to the ones we detected for ultra-sparse Kolmogorov operators were reported (e.g., the spectral gap is defined by a real-valued outlier whose position is independent of \(D\)).
###### Acknowledgements.
The authors thank Gernot Akemann, Alexander van Werden, Ioana Dumitriu, Tomaz Prosen, Karol Zyczkowski, and Wojciech Tarnowski for helpful discussions and comments. This research is supported by the Deutsche Forschungsgemeinschaft through SFB 1143 (project-id 247310070) [GN and MH] and Research Council of Norway, project IKTPLUSS-IKT og digital innovajon - 333979 (as a part of the ERA-NET project "DQUANT: A Dissipative Quantum Chaos perspective on Near-Term Quantum Computing") [SD].
## Appendix A Models in Figure 1
In the following, we denote spin creation and annihilation operators as \(\sigma^{+}\) and \(\sigma^{-}\), respectively. We split the generator matrix \(\mathcal{K}\) into an off-diagonal matrix \(\mathcal{M}\) and a diagonal matrix \(\mathcal{J}\) such that \(\mathcal{K}=\mathcal{M}-\mathcal{J}\) and the diagonal entries of \(\mathcal{J}\) are the sums of the columns of \(\mathcal{M}\).
In Figure 1 (b) we show the TASEP on a ring with \(L=12\) sites and staggered hopping amplitudes. The \(\mathcal{M}\) matrix is given by [61]
\[\mathcal{M}=\frac{1}{2}\sigma_{1}^{+}+\frac{1}{2}\sigma_{1}^{-}+\sum_{j=1}^{L}p _{j}\sigma_{j}^{-}\sigma_{j+1}^{+}, \tag{10}\]
where \(p_{j}=1\) if \(j\) is even and \(p_{j}=0.2\) if \(j\) is odd.
In Figure 1 (c) we show the ASEP on a chain of length \(L=12\) with open boundary conditions and next nearest neighbor hopping. The \(\mathcal{M}\) matrix is given by
\[\mathcal{M}=\sigma_{1}^{+}+\sigma_{L}^{-}+\sum_{j=1}^{L}\sigma_{j}^{-}\sigma_{ j+1}^{+}+\sum_{j=1}^{L/2}\sigma_{2j}^{-}\sigma_{2j+2}^{+}. \tag{11}\]
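For small chains, the \(\mathcal{M}\) matrices above can be assembled explicitly from Kronecker products of single-site operators. The sketch below does this for the staggered TASEP of Figure 1 (b); the \((|0\rangle,|1\rangle)\) basis ordering, the one-based staggering convention and all function and parameter names are illustrative assumptions rather than the authors' implementation, and the same helpers apply to the ASEP of Figure 1 (c).

```python
import numpy as np
from functools import reduce

# single-site operators in the (empty, occupied) basis -- convention is an assumption
sp = np.array([[0., 0.], [1., 0.]])   # sigma^+ : creates a particle
sm = np.array([[0., 1.], [0., 0.]])   # sigma^- : annihilates a particle
id2 = np.eye(2)

def site_op(op, j, L):
    """Embed a single-site operator at site j (0-based) of an L-site chain."""
    ops = [id2] * L
    ops[j] = op
    return reduce(np.kron, ops)

def tasep_M(L, p_even=1.0, p_odd=0.2):
    """Off-diagonal matrix M of the staggered TASEP on a ring with
    injection/ejection at site 1, as written above for Figure 1 (b)."""
    M = 0.5 * site_op(sp, 0, L) + 0.5 * site_op(sm, 0, L)
    for j in range(L):
        p = p_even if (j + 1) % 2 == 0 else p_odd        # 1-based staggering
        M += p * site_op(sm, j, L) @ site_op(sp, (j + 1) % L, L)
    return M

L = 8                          # 2^8 = 256 states; L = 12 as in the text is also feasible
M = tasep_M(L)
K = M - np.diag(M.sum(axis=0))
print(K.shape, np.allclose(K.sum(axis=0), 0.0))          # column sums of K vanish
```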
In Figure 1 (d) we show the spectrum of a single particle hopping on a \(65\times 65\) grid with periodic boundary
conditions and random hopping amplitudes. The \(\mathcal{M}\) matrix is given by
\[\mathcal{M}=\sum_{\langle(i,j),(i^{\prime},j^{\prime})\rangle}p_{(i,j)\to(i^{\prime},j^{\prime})}\,\sigma_{i,j}^{-}\sigma_{i^{\prime},j^{\prime}}^{+} \tag{10}\]
where \(\langle\dots\rangle\) denotes summation over nearest neighbors and \(p_{(i,j)\to(i^{\prime},j^{\prime})}\) are randomly uniformly chosen between \(0\) and \(1\) under the constraint that \(p_{(i,j)\to(i^{\prime},j^{\prime})}=1-p_{(i^{\prime},j^{\prime})\to(i,j)}\). This diffusion model can of course be extended to many particles, but we choose to show the single-particle sector here.
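The single-particle model of Figure 1 (d) is straightforward to build as a sparse matrix; the sketch below fills in each undirected bond once with forward rate \(p\) and backward rate \(1-p\), using the column convention \(M_{\text{target},\text{source}}\). Function and variable names are illustrative.

```python
import numpy as np
from scipy import sparse

rng = np.random.default_rng(3)

def single_particle_hopping(L):
    """Single particle on an L x L periodic grid with random asymmetric
    nearest-neighbour rates p and 1 - p (sketch of the Fig. 1 (d) model)."""
    D = L * L
    idx = lambda i, j: (i % L) * L + (j % L)
    M = sparse.lil_matrix((D, D))
    for i in range(L):
        for j in range(L):
            for (ii, jj) in ((i + 1, j), (i, j + 1)):     # each undirected bond once
                p = rng.random()
                M[idx(ii, jj), idx(i, j)] += p            # (i,j) -> (ii,jj) with rate p
                M[idx(i, j), idx(ii, jj)] += 1.0 - p      # reverse hop with rate 1 - p
    M = M.tocsr()
    J = sparse.diags(np.asarray(M.sum(axis=0)).ravel())
    return M - J

K = single_particle_hopping(L=65)
print(K.shape)   # (4225, 4225); column sums are zero
```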
In Figure 1 (e) we show the spectrum of a contact process [28] on a chain with \(L=12\) sites and open boundary conditions. The master equation is generated by \(-H\), where \(H\) is given by
\[H=\sum_{i=1}^{L}M_{i}+\sum_{i=1}^{L-1}\left[n_{i}Q_{i+1}+Q_{i}n_{i+1}\right], \tag{11}\]
and
\[M=\begin{pmatrix}0&-1\\ 0&1\end{pmatrix},\quad n=\begin{pmatrix}0&0\\ 0&1\end{pmatrix},\quad Q=\begin{pmatrix}1&0\\ -1&0\end{pmatrix}. \tag{12}\]
Finally, in Figure 1 (f) we show the spectrum of the generator matrix \(\mathcal{K}\) of a gene transcription model taken from [20]. The following master equations model the accumulation and release of mechanical strain of DNA during transcription. The parameters chosen for the spectral data in Figure 1 (f) are the mRNA transcription rate \(r=2\) and decay rate \(\lambda=0.05\), the maximum number of transcripts until no further strain can be put on the DNA \(m_{c}=10\), the relaxation rate of the DNA string \(g=0.05\) and a maximum number of transcription events \(m_{\text{max}}=400\) to make the generator matrix \(M\) finite. By \(m\) we denote the number of current transcripts and by \(\alpha\) the number of transcripts made since the last relaxation event. Then for \(0\leq m\leq m_{\text{max}}\) and \(1\leq\alpha\leq m_{c}-1\) the master equation reads
\[\frac{d}{dt}P_{\alpha} =-(r+g+\lambda m)P_{\alpha}(m,t)+\lambda(m+1)P_{\alpha}(m+1,t)\] \[+rP_{\alpha-1}(m-1,t) \tag{13}\]
while for \(\alpha=0\) we have
\[\frac{d}{dt}P_{0} =-(r+g+\lambda m)P_{0}(m,t)+\lambda(m+1)P_{0}(m+1,t)\] \[+g\sum_{\alpha=0}^{m_{c}}P_{\alpha}(m,t) \tag{14}\]
and for \(\alpha=m_{c}\)
\[\frac{d}{dt}P_{m_{c}} =-(g+\lambda m)P_{m_{c}}(m,t)+\lambda(m+1)P_{m_{c}}(m+1,t)\] \[+rP_{m_{c}-1}(m-1,t). \tag{15}\]
## Appendix B Sampling of sparse random generators
In order to obtain a sparse random generator matrix, our approach involves first sampling a random directed graph with \(D\) vertices and both in- and out-vertex degrees of \(\varphi\). Subsequently, the non-zero elements of the corresponding adjacency matrix are sampled from a common positive distribution. This procedure results in the off-diagonal matrix \(\mathcal{M}\). The random Markov generator matrix is then constructed as \(\mathcal{K}=\mathcal{M}-\mathcal{J}\), where \(\mathcal{J}\) is a diagonal matrix with diagonal elements equal to the sums of the columns of \(\mathcal{M}\).
The random directed graph is generated by iteratively connecting each vertex to \(\varphi\) other vertices, while rejecting edges if the corresponding vertex already has \(\varphi\) incoming edges. For the final vertices, it may not be feasible to connect to other vertices without violating the constraint of \(\varphi\) incoming edges for each vertex. In such cases, the entire process is restarted. To mitigate the risk of restarting the procedure, we reduce the probability of connecting to a vertex that already has a high degree. Following this approach, we find that we rarely need to restart the algorithm for the matrix sizes and vertex degrees \(\varphi\) examined in this study.
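A minimal implementation of this sampling procedure might look as follows; the specific biasing rule (choosing targets with probability proportional to their remaining in-degree capacity), the function names and the default weight distribution are illustrative choices, and strong connectivity is not checked explicitly here.

```python
import numpy as np
from scipy import sparse

def sample_generator(D, phi, rng, weight=lambda rng, n: rng.chisquare(2, n)):
    """Sketch of the Appendix B sampler: connect each vertex to phi targets,
    biasing against targets that are already (nearly) full; restart on a dead end."""
    while True:
        in_deg = np.zeros(D, dtype=int)
        rows, cols = [], []
        ok = True
        for v in range(D):
            chosen = set()
            for _ in range(phi):
                free = phi - in_deg            # remaining in-degree capacity
                free[v] = 0                    # no self-loops
                for u in chosen:
                    free[u] = 0                # no parallel edges from v
                if free.sum() == 0:
                    ok = False                 # dead end: restart the whole procedure
                    break
                u = rng.choice(D, p=free / free.sum())
                chosen.add(u)
                in_deg[u] += 1
                rows.append(u); cols.append(v)
            if not ok:
                break
        if ok:
            break
    w = weight(rng, len(rows))
    M = sparse.csr_matrix((w, (rows, cols)), shape=(D, D))
    return M - sparse.diags(np.asarray(M.sum(axis=0)).ravel())

rng = np.random.default_rng(4)
K = sample_generator(D=500, phi=4, rng=rng)
```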
To compute the eigenvalues of the Markov matrices, we utilize an exact diagonalization method, while the Arnoldi method is employed to calculate the spectral gap. We deem an eigenvalue to have converged once the norm of the residuals of the Schur vectors is less than \(10^{-12}\).
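For large \(D\), the eigenvalues closest to the stationary eigenvalue can be obtained with the Arnoldi routine of SciPy; the sketch below uses a quick permutation-based stand-in generator rather than the Appendix B sampler, and the shift value, the number of requested eigenvalues and other ARPACK settings are illustrative and may need tuning for difficult instances.

```python
import numpy as np
from scipy import sparse
from scipy.sparse import linalg as sla

rng = np.random.default_rng(5)

# quick stand-in for a sparse generator (phi superposed random permutations with
# chi^2_2 weights, self-loops dropped); not the exact sampler of Appendix B
D, phi = 5000, 5
rows = np.concatenate([rng.permutation(D) for _ in range(phi)])
cols = np.tile(np.arange(D), phi)
w = rng.chisquare(2, D * phi)
keep = rows != cols
M = sparse.csr_matrix((w[keep], (rows[keep], cols[keep])), shape=(D, D))
K = (M - sparse.diags(np.asarray(M.sum(axis=0)).ravel())).tocsc()

# shift-invert Arnoldi around a point just to the right of the spectrum: the
# returned eigenvalues are those closest to 0, i.e. the stationary eigenvalue
# and the slowest decaying modes
vals = sla.eigs(K, k=6, sigma=0.5, which='LM', return_eigenvectors=False)
gap = -vals.real[vals.real < -1e-8].max()
print("spectral gap estimate:", gap)
```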
## Appendix C Analytical results for the bulk spectrum
In this section, we derive the analytical results for the estimated mean \(\mu(\lambda)\) in Eq. (7) and the estimated pseudo-variance in Eq. (11) of the main text, and show that \(\frac{1}{D}\sum_{j=1}^{D}\lambda_{j}\) concentrates around its ensemble average \(\langle\dots\rangle\).
Denote by \(\iota\) the function \(\iota:\{1,\dots,\varphi\}\times\{1,\dots,D\}\to\{1,\dots,D\}^{2}\) with \(\iota(l,j)=(i,j)\), where \(i\) is the \(l\)th non-zero index in column \(j\) of \(\mathcal{M}\). Note that \(\iota(l,j)=(i,j)\) implies \(i\neq j\), and \(l\to\iota(l,j)\) is injective for fixed \(j\). Further, in this appendix we denote the location of the bulk by
\[\mu(\lambda)=\frac{1}{D}\sum_{j=1}^{D}\lambda_{j}=\frac{1}{D}\operatorname{tr}( \mathcal{K}).\]
and the pseudo-variance as
\[\sigma^{2}(\lambda)=\frac{1}{D}\sum_{j=1}^{D}\lambda_{j}^{2}- \left(\frac{1}{D}\sum_{j=1}^{D}\lambda_{j}\right)^{2}\] \[=\frac{\operatorname{tr}(\mathcal{K}^{2})}{D}-\frac{\operatorname{ tr}(\mathcal{K})^{2}}{D^{2}}. \tag{16}\]
Here we explicitly do not include the averaging over the matrix ensemble \(\langle\dots\rangle\) in contrast to the main text.
### Location
The average value with respect to \(\left\langle\ldots\right\rangle\) of the location \(\mu(\lambda)\) can then be computed as
\[\left\langle\mu(\lambda)\right\rangle =\left\langle\frac{1}{D}\operatorname{tr}(\mathcal{K})\right\rangle =\frac{1}{D}\sum_{j=1}^{D}\left\langle K_{jj}\right\rangle\] \[=-\frac{1}{D}\sum_{j=1}^{D}\sum_{l=1}^{\varphi}\left\langle K_{\iota(l,j)}\right\rangle=-\varphi\mu_{0},\]
where we used that \(K_{jj}=-\sum_{l=1}^{\varphi}K_{\iota(l,j)}\) and \(\left\langle K_{\iota(l,j)}\right\rangle=\mu_{0}\). This is Eq. (7) in the main text. Similarly,
\[\left\langle\operatorname{tr}(\mathcal{K})^{2}\right\rangle =\sum_{j_{1},j_{2}=1}^{D}\sum_{l_{1},l_{2}=1}^{\varphi}\left\langle K_{\iota(l_{1},j_{1})}K_{\iota(l_{2},j_{2})}\right\rangle\] \[=\sum_{j=1}^{D}\left[\sum_{l=1}^{\varphi}\left\langle K_{\iota(l,j)}^{2}\right\rangle+\sum_{l_{1}\neq l_{2}}\left\langle K_{\iota(l_{1},j)}K_{\iota(l_{2},j)}\right\rangle\right]\] \[+\sum_{j_{1}\neq j_{2}}\sum_{l_{1},l_{2}=1}^{\varphi}\left\langle K_{\iota(l_{1},j_{1})}K_{\iota(l_{2},j_{2})}\right\rangle.\]
Although the off-diagonal elements of \(\mathcal{K}\) are weakly dependent because of the constraint that the number of non-zero elements per row and column has to equal \(\varphi\), the non-zero elements \(K_{\iota(l,j)}\) are independent. Hence, \(\left\langle K_{\iota(l_{1},j_{1})}K_{\iota(l_{2},j_{2})}\right\rangle=\left\langle K_{\iota(l_{1},j_{1})}\right\rangle\left\langle K_{\iota(l_{2},j_{2})}\right\rangle\) for \((l_{1},j_{1})\neq(l_{2},j_{2})\), so
\[\left\langle\operatorname{tr}(\mathcal{K})^{2}\right\rangle =D\varphi(\sigma_{0}^{2}+\mu_{0}^{2})+D\varphi(\varphi-1)\mu_{0}^ {2}+D(D-1)\varphi^{2}\mu_{0}^{2}\] \[=D\varphi\sigma_{0}^{2}+(D\varphi\mu_{0})^{2},\]
where we used that the second moment \(\left\langle K_{\iota(l,j)}^{2}\right\rangle\) equals \(\sigma_{0}^{2}+\mu_{0}^{2}\). This implies that
\[\left\langle\mu(\lambda)^{2}\right\rangle-\left\langle\mu\right\rangle^{2}= \left\langle\frac{\operatorname{tr}(\mathcal{K})^{2}}{D^{2}}\right\rangle- \left\langle\frac{\operatorname{tr}(\mathcal{K})}{D}\right\rangle^{2}=\frac{ \varphi\sigma_{0}^{2}}{D}.\]
The right-hand side vanishes for increasing \(D\) as long as \(\varphi\) grows slower than linearly with \(D\). Relative to \(\left\langle\mu(\lambda)\right\rangle\), the typical deviation of \(\mu(\lambda)\) from its average value always vanishes for either increasing \(D\) or \(\varphi\), as
\[\frac{\sqrt{\left\langle\mu(\lambda)^{2}\right\rangle-\left\langle\mu\right\rangle ^{2}}}{\left|\left\langle\mu(\lambda)\right\rangle\right|}=\frac{\sigma_{0}}{ \mu_{0}}\left(\varphi D\right)^{-1/2}.\]
### Complex pseudo-variance
The first term in the averaged pseudo-variance given by Eq. (11) can be calculated as
\[\left\langle\operatorname{tr}(\mathcal{K}^{2})\right\rangle =\sum_{i,j=1}^{D}\langle K_{ij}K_{ji}\rangle\] \[=\sum_{i=1}^{D}\langle K_{ii}^{2}\rangle+\sum_{i\neq j}\langle K _{ij}K_{ji}\rangle. \tag{12}\]
We proceed with \(\sum_{i=1}^{D}\langle K_{ii}^{2}\rangle\) in Eq. (12) and get
\[\sum_{i=1}^{D}\left\langle K_{ii}^{2}\right\rangle =\sum_{i=1}^{D}\left\langle\left(-\sum_{j\neq i}K_{ji}\right)^{2}\right\rangle\] \[=\sum_{i=1}^{D}\sum_{j,l\neq i}\langle K_{ji}K_{li}\rangle\] \[=\sum_{i=1}^{D}\sum_{j\neq i}\langle K_{ji}^{2}\rangle+\sum_{i=1} ^{D}\sum_{j,l\neq i;j\neq l}\left\langle K_{ji}\right\rangle\langle K_{li} \rangle\,. \tag{13}\]
The former sum in Eq. (13) is given by
\[\sum_{i=1}^{D}\sum_{j\neq i}\langle K_{ji}^{2}\rangle=\sum_{i=1}^{D}\sum_{l=1 }^{\varphi}\langle K_{\iota(l,i)}^{2}\rangle=D\varphi(\sigma_{0}^{2}+\mu_{0}^{ 2}), \tag{14}\]
where again we used that \(\langle K_{\iota(l,i)}^{2}\rangle=\sigma_{0}^{2}+\mu_{0}^{2}\), while the latter sum in Eq. (13) is
\[\sum_{i=1}^{D}\sum_{j,l\neq i;j\neq l}\langle K_{ji}\rangle\langle K_{li}\rangle\] \[\quad=\sum_{i=1}^{D}\sum_{k=1}^{\varphi}\sum_{n=1;\,n\neq k}^{\varphi}\langle K_{\iota(k,i)}\rangle\langle K_{\iota(n,i)}\rangle\] \[\quad=D\varphi(\varphi-1)\mu_{0}^{2}. \tag{15}\]
Combining Eq. (14) and Eq. (15) we get
\[\sum_{i=1}^{D}\left\langle K_{ii}^{2}\right\rangle =D\varphi(\sigma_{0}^{2}+\mu_{0}^{2})+D\varphi(\varphi-1)\mu_{0}^{2}\] \[=D\varphi\sigma_{0}^{2}+D\varphi^{2}\mu_{0}^{2}.\]
Now, we are left with calculating \(\sum_{i\neq j}\langle K_{ij}K_{ji}\rangle\), the second term in Eq. (12),
\[\sum_{i\neq j}\langle K_{ij}K_{ji}\rangle=\sum_{i=1}^{D}\sum_{l=1}^{\varphi}\left\langle K_{\overline{\iota(l,i)}}K_{\iota(l,i)}\right\rangle,\]
where the bar \(\overline{\iota}\) denotes swapping the first and second component. Note that \(K_{\overline{\iota(l,i)}}\) is not necessarily a non-zero entry of \(\mathcal{K}\), hence \(K_{\overline{\iota(l,i)}}\) and \(K_{\iota(l,i)}\) depend weakly on each other. In the large \(D\) limit we can assume that the dependence is sufficiently weak and we treat \(K_{\overline{\iota(l,i)}}\) and \(K_{\iota(l,i)}\) as independent, thus \(\left\langle K_{\overline{\iota(l,i)}}K_{\iota(l,i)}\right\rangle=\mu_{0}\left\langle K_{\overline{\iota(l,i)}}\right\rangle\). By the assumed independence the mean of _every_ entry in the \(i\)th row, except the diagonal, is \(\left\langle K_{\overline{\iota(l,i)}}\right\rangle=\frac{\varphi}{D}\mu_{0}\). Hence,
\[\sum_{i\neq j}\langle K_{ij}K_{ji}\rangle=\sum_{i=1}^{D}\frac{1}{D}\varphi^{2} \mu_{0}^{2}=\varphi^{2}\mu_{0}^{2}.\]
Collecting the above results we arrive at
\[\left\langle\operatorname{tr}(\mathcal{K}^{2})\right\rangle =D\varphi\sigma_{0}^{2}+D\varphi^{2}\mu_{0}^{2}+\varphi^{2}\mu_{0}^{2}\] \[=D\varphi\sigma_{0}^{2}+(D+1)\varphi^{2}\mu_{0}^{2}.\]
The second term of the averaged pseudo-variance in Eq. (11) has been calculated in the previous subsection,
\[\left\langle\operatorname{tr}(\mathcal{K})^{2}\right\rangle=D\varphi\sigma_{0} ^{2}+(D\varphi\mu_{0})^{2}\]
Finally, we can evaluate
\[\left\langle\sigma^{2}(\lambda)\right\rangle =\left\langle\frac{\operatorname{tr}(\mathcal{K}^{2})}{D}\right\rangle -\left\langle\frac{\operatorname{tr}(\mathcal{K})^{2}}{D^{2}}\right\rangle\] \[=\varphi\sigma_{0}^{2}+\varphi^{2}\mu_{0}^{2}+\frac{1}{D}\varphi ^{2}\mu_{0}^{2}-\frac{1}{D}\varphi\sigma_{0}^{2}-\varphi^{2}\mu_{0}^{2}\] \[=\varphi\left(\sigma_{0}^{2}+\frac{\varphi}{D}\mu_{0}^{2}-\frac{ 1}{D}\sigma_{0}^{2}\right),\]
which is Eq. (11) in the main text.
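These trace formulas are easy to verify numerically; the sketch below compares sample averages of \(\mu(\lambda)\) and \(\sigma^{2}(\lambda)\) with the predictions \(-\varphi\mu_{0}\) and Eq. (11), using a permutation-based stand-in generator whose degrees are only approximately \(\varphi\), so small deviations from the predictions are expected. Function names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(6)

def sample_K(D, phi):
    """Stand-in sparse generator: phi superposed permutations, chi^2_2 weights."""
    M = np.zeros((D, D))
    for _ in range(phi):
        M[rng.permutation(D), np.arange(D)] += rng.chisquare(2, D)
    np.fill_diagonal(M, 0.0)
    return M - np.diag(M.sum(axis=0))

D, phi = 1000, 5
mu0, sig0_sq = 2.0, 4.0                   # mean and variance of a chi^2_2 edge weight
mus, sigs = [], []
for _ in range(20):
    K = sample_K(D, phi)
    mu = np.trace(K) / D
    sig2 = (K * K.T).sum() / D - mu**2    # tr(K^2) = sum_ij K_ij K_ji
    mus.append(mu); sigs.append(sig2)

print("<mu>     :", np.mean(mus),  " predicted:", -phi * mu0)
print("<sigma^2>:", np.mean(sigs), " predicted:",
      phi * (sig0_sq + phi * mu0**2 / D - sig0_sq / D))
```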
## Appendix D Bound of spectral gap for symmetric \(M\)
In this section, we give the proof of Eq. (19). Let \(\mathcal{K}=\mathcal{M}-\mathcal{J}\) be a symmetric generator matrix. By Eq. (17) we have to show that \(v^{t}(\mathcal{J}-\mathcal{M})v\leq\min_{1\leq l\leq D}J_{ll}+O\left(D^{-1}\right)\) for the vector \(v\) given by
\[v_{i}=\begin{cases}\sqrt{1-\frac{1}{D}}&i=l\\ -\frac{1}{\sqrt{D(D-1)}}&i\neq l,\end{cases}\]
where \(1\leq l\leq D\) is arbitrary. It is easy to see that \(|v|=1\) and \(v\perp v_{1}\). So we proceed with
\[\gamma_{*} \leq v^{t}(\mathcal{J}-\mathcal{M})v=\sum_{i,j=1}^{D}v_{i}v_{j}( \mathcal{J}-\mathcal{M})_{ij}\] \[=\sum_{j=1}^{D}v_{j}^{2}J_{jj}-\sum_{i,j=1}^{D}v_{i}v_{j}M_{ij}\] \[=\sum_{i,j=1}^{D}v_{j}^{2}M_{ij}-\sum_{i,j=1}^{D}v_{i}v_{j}M_{ij}\] \[=\sum_{i,j=1}^{D}v_{j}M_{ij}(v_{j}-v_{i}). \tag{158}\]
Note that any summand in Eq. (158) where either \(i=j=l\) or \(i\neq l\) and \(j\neq l\) is zero. Inserting the definition of \(v\) we get
\[\gamma_{*} \leq\sum_{i\neq l}v_{l}M_{il}(v_{l}-v_{i})+\sum_{j\neq l}v_{j}M_ {lj}(v_{j}-v_{l})\] \[=\sum_{i\neq l}\sqrt{1-\frac{1}{D}}M_{il}\left(\sqrt{1-\frac{1}{ D}}+\frac{1}{\sqrt{D(D-1)}}\right)\] \[-\sum_{j\neq l}\frac{1}{\sqrt{D(D-1)}}M_{lj}\left(-\frac{1}{\sqrt {D(D-1)}}-\sqrt{1-\frac{1}{D}}\right)\] \[=\left(\sqrt{1-\frac{1}{D}}+\frac{1}{\sqrt{D(D-1)}}\right)\] \[\times\sum_{i\neq l}\left[\sqrt{1-\frac{1}{D}}M_{il}+\frac{1}{ \sqrt{D(D-1)}}M_{li}\right]. \tag{159}\]
After collecting all the prefactors in Eq. (159) the spectral gap is upper-bounded by
\[\gamma_{*}\leq\sum_{i\neq l}\left[M_{il}+\frac{1}{D-1}M_{li}\right]=J_{ll}+ \frac{1}{D-1}\tilde{J}_{ll},\]
where we denote \(\tilde{J}_{ll}=\sum_{i\neq l}M_{li}\). As the number of non-zero elements of \(\mathcal{M}\) in every row and column is the same, the distributions of \(J_{ll}\) and \(\tilde{J}_{ll}\) coincide. In the limit of large \(D\), \(J_{ll}\) and \(\tilde{J}_{ll}\) are independent. Thus we can approximate \(\gamma_{*}\leq J_{ll}+O\left(D^{-1}\right)\), at least for \(\varphi\ll D\). As the index \(l\) was chosen arbitrarily we get
\[\gamma_{*}\leq\min_{1\leq l\leq D}J_{ll}+O\left(D^{-1}\right),\]
which is Eq. (19) in the main text.
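The bound can also be checked numerically for a symmetrized sparse generator; the construction below (symmetrizing a permutation-based stand-in with \(\chi_{2}^{2}\) weights) is an illustrative choice and not the sampler of Appendix B.

```python
import numpy as np

rng = np.random.default_rng(7)

# symmetric sparse generator: symmetrize a phi-permutation stand-in with chi^2_2 weights
D, phi = 1000, 5
M = np.zeros((D, D))
for _ in range(phi):
    M[rng.permutation(D), np.arange(D)] += rng.chisquare(2, D)
np.fill_diagonal(M, 0.0)
M = 0.5 * (M + M.T)                      # enforce symmetry of the off-diagonal part
K = M - np.diag(M.sum(axis=0))

lam = np.linalg.eigvalsh(K)              # real spectrum for symmetric K
gap = -np.sort(lam)[-2]                  # minus the largest nonzero eigenvalue
print("gap:", gap, " min_l J_ll:", M.sum(axis=0).min(), " (cf. the bound of Eq. (19))")
```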
|
2310.04311
|
Distributed Deep Joint Source-Channel Coding with Decoder-Only Side
Information
|
We consider low-latency image transmission over a noisy wireless channel when
correlated side information is present only at the receiver side (the Wyner-Ziv
scenario). In particular, we are interested in developing practical schemes
using a data-driven joint source-channel coding (JSCC) approach, which has been
previously shown to outperform conventional separation-based approaches in the
practical finite blocklength regimes, and to provide graceful degradation with
channel quality. We propose a novel neural network architecture that
incorporates the decoder-only side information at multiple stages at the
receiver side. Our results demonstrate that the proposed method succeeds in
integrating the side information, yielding improved performance at all channel
conditions in terms of the various quality measures considered here, especially
at low channel signal-to-noise ratios (SNRs) and small bandwidth ratios (BRs).
We have made the source code of the proposed method public to enable further
research, and the reproducibility of the results.
|
Selim F. Yilmaz, Ezgi Ozyilkan, Deniz Gunduz, Elza Erkip
|
2023-10-06T15:17:45Z
|
http://arxiv.org/abs/2310.04311v2
|
# Distributed Deep Joint Source-Channel Coding with Decoder-Only Side Information
###### Abstract
We consider low-latency image transmission over a noisy wireless channel when correlated side information is present only at the receiver side (the Wyner-Ziv scenario). In particular, we are interested in developing practical schemes using a data-driven joint source-channel coding (JSCC) approach, which has been previously shown to outperform conventional separation-based approaches in the practical finite blocklength regimes, and to provide graceful degradation with channel quality. We propose a novel neural network architecture that incorporates the decoder-only side information at multiple stages at the receiver side. Our results demonstrate that the proposed method succeeds in integrating the side information, yielding improved performance at all channel noise levels in terms of the various distortion criteria considered here, especially at low channel signal-to-noise ratios (SNRs) and small bandwidth ratios (BRs). We also provide the source code of the proposed method to enable further research and reproducibility of the results.
Joint source-channel coding, side information, Wyner-Ziv coding, wireless image transmission, deep learning, multi-view learning.
## I Introduction
Conventional communication systems follow a two-step approach for the transmission of image/video data: (i) the source coding stage eliminates inherent redundancy within the image, and (ii) the channel coding stage introduces structured redundancy with error correcting codes to enable resiliency against channel's corrupting effects, such as noise and fading. Although Shannon's _separation theorem_[1] proves that such a modular two-step source and channel coding approach is theoretically optimal in the asymptotic limit of infinitely long source and channel blocklengths, it is known that the optimality of separation no longer holds in the finite blocklength, non-ergodic or multi-user scenarios. Such scenarios arise in many time-critical emerging applications, such as Internet of everything (IoE), vehicle-to-everything (V2X), as well as augmented/virtual reality (AR/VR) applications.
Despite having been investigated from a theoretical standpoint for many decades [2, 3], joint source-channel coding (JSCC) schemes found limited use in practical applications, mainly due to their high complexity and the difficulty in designing such codes. Recently, a deep neural network (DNN)-based JSCC scheme [4], namely deep joint source-channel coding (DeepJSCC), has achieved remarkable performance and rekindled research interest in this direction. Specifically, the authors propose to use a data-driven approach to learn nonlinear mappings directly from the input image space to the channel input symbols in an end-to-end fashion, by adopting an autoencoder architecture. DeepJSCC approach not only enjoys improved performance at a specific known channel state, but also exhibits _graceful degradation_ with channel signal-to-noise ratio (SNR), unlike the separation-based schemes that suffer from the _cliff effect_ phenomenon; that is, their performance collapses when the channel SNR falls below a certain threshold.
In this paper, we are interested in the scenario, in which the receiver has access to a correlated _side information_ sequence (see Fig. 1). For example, consider a distributed network of cameras, which aim to transmit their images to a joint central processing unit. In such a scenario, the images are highly correlated; hence, transmitting each image independently results in a significant communication overhead that scales with the number of cameras. In this work, as a first step towards exploiting source correlations in a large network, we consider the simple scenario with two cameras, where the first camera wants to send its image to the second camera.
It is known that separate source and channel coding remains optimal in the case of communication with decoder-only side information [5]. This is achieved by
Figure 1: Separation-based (**top**) vs. JSCC-based (**bottom**) communication schemes, having decoder-only side information.
Wyner-Ziv lossy compression of the source sequence, exploiting the decoder's side information through binning [6], followed by capacity-achieving channel coding. However, the price to pay for near-optimal performance is, again, high complexity and delay, since achieving the channel capacity and the Wyner-Ziv rate-distortion bound necessitates large blocklengths. We instead leverage the universal function approximation capability of neural networks [7] to find constructive solutions for JSCC with decoder-only side information in the non-asymptotic regime.
Our goal is to design a practical JSCC scheme that can benefit from the side information at the receiver. Our main contributions can be summarized as follows:
* To the best of our knowledge, we introduce the first DeepJSCC-based image transmission method that explicitly exploits decoder-only side information, termed _DeepJSCC-WZ_. The proposed transmission scheme is an important building block towards fully distributed practical DeepJSCC schemes for correlated image/video signals.
* We demonstrate that our method significantly outperforms both the point-to-point DeepJSCC scheme (with no side information) and the separation-based scheme with side information, in terms of both traditional and perception-oriented fidelity metrics for all the considered channel SNR and bandwidth ratio (BR) values.
* As an upper bound, we also provide a solution for the scenario in which the side information is available at both the encoder and decoder. This allows us to quantify the performance gain by providing the side information also to the encoder.
* To facilitate further research and reproducibility, we also provide the source code of our framework and simulations on github.com/ipc-lab/deepjSCC-wz.
## II Related Works
In this section, we briefly go over previous works upon which we construct our methods.
### _Deep Joint Source-Channel Coding (DeepJSCC)_
The first work employing DNN-based JSCC approach in wireless image transmission, termed DeepJSCC, was originally proposed in [4]. This data-driven communication scheme has later been extended to different channel models [8, 9], different source signals [10, 11, 12], inference problems [13], multi-user scenarios [9, 14] and as well as to perceptual quality-oriented image transmission [15, 16]. The authors in [17] proposed data-driven point-to-point and distributed JSCC schemes, employing sinusoidal representation networks (SIRENs) that are inspired by the Shannon-Kotel'nikov mappings [18], for the transmission of independent and identically distributed (i.i.d.) and multivariate Gaussian sources over orthogonal Gaussian channels. However, for the decoder-only side information case, their approach only considers synthetic datasets and specific correlation patterns, and it is unclear how this approach would scale up to realistic and high-dimensional correlated information sources, such as stereo images, which we consider in this paper.
### _Distributed Source Coding (DSC)_
Notable prior work on the source coding side is related to the _distributed source coding_ (DSC) problems. Slepian and Wolf [19] proved their seminal result that an encoder that does not observe a correlated side information can asymptotically achieve the same compression rate as the one that does (in both cases, the decoder has access to the side information), if the joint distribution statistics are known and compression is lossless. Later, Wyner and Ziv [6] characterized the rate-distortion function with side information available at both the encoder and decoder, or only at the decoder. Surprisingly, they showed that there is no rate loss in the latter scenario compared to the former, if the sources are Gaussian and mean-squared error is set as the distortion criterion. Practical research effort for the Wyner-Ziv setup has been spearheaded by distributed source coding using syndromes (DISCUS) [20], which formulated the compression problem as a dual source-channel coding problem.
Recently, distributed deep neural compression schemes have been proposed in [21, 22, 23, 24]. These works, inspired by the Slepian-Wolf and Wyner-Ziv results [6, 19], are concerned with exploiting the side information in order to further compress the primary input source, compared to the point-to-point (having no side information) compression scenario. However, these works do not consider the impact of the wireless channel.
**Notation:** Unless stated otherwise; boldface lowercase letters denote tensors (e.g., \(\mathbf{p}\)), non-boldface letters denote scalars (e.g., \(p\) or \(P\)), and uppercase calligraphic letters denote sets (e.g., \(\mathcal{P}\)). \(\mathbb{R}\), \(\mathbb{N}\), \(\mathbb{C}\) denote the set of real, natural and complex numbers, respectively. \(|\mathcal{P}|\) denotes the cardinality of set \(\mathcal{P}\). We define \([n]\triangleq\{1,2,\cdots,n\}\), where \(n\in\mathbb{N}^{+}\), and \(\mathbb{I}\triangleq[255]\).
## III Problem Formulation
We consider the wireless image transmission problem over an additive white Gaussian noise (AWGN) channel with noise variance \(\sigma^{2}\). The transmitter maps an input image \(\mathbf{x}\in\mathbb{I}^{C_{\mathrm{in}}\times W\times H}\), where \(W\) and \(H\) denote the width and height of the image, while \(C_{\mathrm{in}}\) represents the R, G and B channels for colored images, with a nonlinear encoding function \(E_{\mathbf{\Theta}}:\mathbb{I}^{C_{\mathrm{in}}\times W\times H}\to\mathbb{C} ^{k}\) parameterized by \(\mathbf{\Theta}\), into a complex-valued latent vector \(\mathbf{z}=E_{\mathbf{\Theta}}(\mathbf{x},\sigma)\), where \(k\) is the available channel bandwidth.
We impose an average transmission power constraint \(P_{\mathrm{avg}}\) on the transmitted signal \(\mathbf{z}\in\mathbb{C}^{k}\):
\[\frac{1}{k}\left\|\mathbf{z}\right\|_{2}^{2}\leq P_{\mathrm{avg}}. \tag{1}\]
The receiver subsequently receives the noisy latent vector \(\mathbf{y}\in\mathbb{C}^{k}\) as \(\mathbf{y}=\mathbf{z}+\mathbf{n}\), where \(\mathbf{n}\in\mathbb{C}^{k}\) represents the i.i.d. complex Gaussian noise term i.e., \(\mathbf{n}\sim\mathcal{CN}(\mathbf{0},\sigma^{2}\mathbf{I}_{k})\). We assume that \(\sigma\) is known at both the transmitter and the receiver.
The decoder, a nonlinear decoding function \(D_{\mathbf{\Phi}}:\mathbb{C}^{k}\to\mathbb{I}^{C_{\mathrm{in}}\times W\times H}\) parametrized by \(\mathbf{\Phi}\), is employed at the receiver, which has access to the side information image \(\mathbf{x}_{\mathrm{side}}\in\mathbb{I}^{C_{\mathrm{in}}\times W\times H}\), and reconstructs the input image as:
\[\hat{\mathbf{x}}=D_{\Phi}(\mathbf{y},\mathbf{x}_{\mathrm{side}},\sigma).\]
The bandwidth ratio characterizes the available channel resources, and for the image transmission scenario, is defined as:
\[\rho\triangleq\frac{k}{C_{\mathrm{in}}WH}\ \mathrm{channel\,symbols/pixel}.\]
We also define the channel \(\mathrm{SNR}\) as:
\[\mathrm{SNR}\triangleq 10\log_{10}\left(\frac{P_{\mathrm{avg}}}{\sigma^{2}} \right)\ \mathrm{dB}. \tag{2}\]
Our learning objective is to minimize the average distortion between the original input image \(\mathbf{x}\) at the transmitter and the reconstructed image \(\hat{\mathbf{x}}\) at the receiver, i.e.,
\[\underset{\mathbf{\Theta},\mathbf{\Phi}}{\arg\ \min}\ \mathbb{E}\left[d\left( \mathbf{x},\hat{\mathbf{x}}\right)\right],\]
where the expectation is over source and side information, \((\mathbf{x},\mathbf{x}_{\mathrm{side}})\sim p(\mathbf{x},\mathbf{x}_{\mathrm{ side}})\), as well as channel noise. Here, \(d(\cdot,\cdot)\) can be any differentiable distortion measure.
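The power constraint of Eq. (1) and the AWGN channel with noise variance set by Eq. (2) can be expressed compactly in PyTorch; the helper functions below are an illustrative sketch (the names and the choice of a per-vector equality normalization are assumptions), not the implementation released with the paper.

```python
import torch

def power_normalize(z: torch.Tensor, p_avg: float = 1.0) -> torch.Tensor:
    """Scale a complex latent vector so that (1/k) * ||z||^2 = p_avg, cf. Eq. (1)."""
    k = z.numel()
    return z * (p_avg * k) ** 0.5 / z.norm()

def awgn(z: torch.Tensor, snr_db: float, p_avg: float = 1.0) -> torch.Tensor:
    """Complex AWGN channel y = z + n with noise variance set from Eq. (2)."""
    sigma2 = p_avg / (10.0 ** (snr_db / 10.0))
    noise = ((sigma2 / 2.0) ** 0.5) * torch.complex(
        torch.randn_like(z.real), torch.randn_like(z.real)
    )
    return z + noise

# illustrative usage with a random latent of bandwidth k = 512
z = power_normalize(torch.randn(512, dtype=torch.cfloat))
y = awgn(z, snr_db=0.0)
```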
## IV Methodology
In this section, we describe our novel architecture that we build upon the current state-of-the-art DeepJSCC architecture. Algorithm 1 summarizes the training methodology of the proposed DeepJSCC-WZ method.
```
repeat                                              ▷ Iterate through epochs
  Shuffle D_train and set l = t = 0
  for (x, x_side) in D_train do
    ▷ At the transmitter
    Sample SNR ~ Uniform[-5, 5] and compute σ via (2)
    z = E_Θ(x, σ)                                   ▷ Encoding with power normalization
    ▷ AWGN channel
    Sample n ~ CN(0, σ² I_k)
    y = z + n                                       ▷ Received signal
    ▷ At the receiver
    x̂ = D_Φ(y, x_side, σ)                           ▷ Decoding
    l = l + d(x, x̂)                                 ▷ Accumulate the loss
until convergence
```
where \(\mathrm{MSE}(\mathbf{x},\hat{\mathbf{x}})\triangleq\frac{1}{m}\left\|\mathbf{x}-\hat{\mathbf{x}}\right\|_{2}^{2}\) is the mean squared error (MSE) loss, \(m=C_{\mathrm{in}}WH\) is the total number of elements in \(\mathbf{x}\), and learned perceptual image patch similarity (LPIPS) [31] is a commonly used perceptual quality metric. Here, \(\lambda\) is the trade-off parameter determining the relative importance of the LPIPS loss w.r.t. the MSE loss. Adapted for predicting the similarity of distorted patches, LPIPS computes the distance in the feature space of a DNN model originally trained for an image classification task. Unlike the MSE distortion metric (which is applied point-wise), perception-oriented distortion criteria, such as LPIPS and multi-scale SSIM (MS-SSIM) [32], measure the similarity between the original image and the reconstructed one using a proxy for human perception.
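Since the composite loss is described but its equation is not reproduced in this excerpt, the following is only a hedged sketch of an MSE-plus-\(\lambda\)-weighted-LPIPS objective using the publicly available lpips package; the backbone choice, the default value of \(\lambda\) and the exact weighting are assumptions rather than the paper's Eq. (4).

```python
import torch
import lpips  # pip install lpips; the LPIPS metric of Zhang et al. [31]

# backbone choice is an assumption; the paper does not specify it in this excerpt
lpips_fn = lpips.LPIPS(net='alex')

def composite_loss(x: torch.Tensor, x_hat: torch.Tensor, lam: float = 1.0) -> torch.Tensor:
    """Sketch of an MSE + lambda * LPIPS objective; x, x_hat are image batches
    of shape (N, 3, H, W) scaled to [-1, 1], as the lpips package expects."""
    mse = torch.mean((x - x_hat) ** 2)
    perceptual = lpips_fn(x, x_hat).mean()
    return mse + lam * perceptual
```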
## V Numerical Results and Discussion
In this section, we present our experimental setup to show the performance gains of our method in different scenarios.
### _The Datasets_
For the first part of the experiments, we use the _Cityscape_ dataset [33], consisting of \(5000\) stereo image pairs, where \(2975\) pairs are used for training, and \(500\) and \(1525\) pairs are used for validation and test, respectively. For the second part of the experiments, following [21, 34], we compose our dataset from the KITTI 2012 [35] and KITTI 2015 [36, 37] datasets, termed _KITTIStereo_, where \(1576\) pairs are used for training, and we validate and test the models on \(790\) image pairs each.
### _Baselines_
To quantify how much our scheme can benefit from the decoder-only side information, we first compare our method with conventional _DeepJSCC_, which ignores the side information. In addition, we also experiment with a further reduction in the number of parameters by using the same set of parameters for encoding both the input image \(\mathbf{x}\) at the transmitter and \(\mathbf{x}_{\mathrm{side}}\) at the receiver side, called _DeepJSCC-WZ-sm_. In other words, unlike _DeepJSCC-WZ_, the encoding functions at the transmitter and at the receiver share the same parameters \(\mathbf{\Theta}\). DeepJSCC-WZ-_sm_ also feeds a binary indicator of whether \(\mathbf{x}\) or \(\mathbf{x}_{\mathrm{side}}\) is being encoded to the encoder's AF modules. For a complete comparison, we further introduce a model where the transmitter _also_ has access to the \(\mathbf{x}_{\mathrm{side}}\) image, which serves as an upper bound on the performance of the model of interest. This model variant is named _DeepJSCC-Cond_. We also consider separate source and channel coding designs using the neural image compression approach that incorporates side information in [21], followed by ideal capacity-achieving channel codes. We denote this separation-based design as _DeepNIC+Capacity_ for brevity. For the performance of DeepNIC+Capacity, we use the results from the original paper [21] and adopt an upper bound for this scheme by equating the reported average rate values in [21] to the capacity of a complex AWGN channel multiplied by the BR value.
### _Implementation and Training Details_
We utilize standard hyperparameters for our method that has been commonly used in the literature [4, 26]. We employ a learning rate of \(1\times 10^{-4}\), a batch size of \(32\), and an average power constraint of \(P_{\mathrm{avg}}=1.0\) (see Equation (1)) for all the analyzed methods. We use Adam optimizer [38] to minimize the training loss in Equation (4). We train the network with channel noise \(\mathbf{n}\) determined by the uniformly sampled SNR between \(-5\) and \(5\) dB. We conduct several experiments using the PyTorch framework [39]. Following [21, 22], we centre-crop each \(375\times 1242\) image of the KITTIStereo dataset to obtain images of size \(370\times 740\), and then downsample them to \(128\times 256\). For the Cityscape dataset, we directly downsample each image to the size of \(128\times 256\).
### _Comparison with the Baselines_
Fig. 4 demonstrates the performance gains of our proposed method under different SNR conditions and BR values of \(\rho=\{1/16;1/32\}\), on KITTIStereo and Cityscape datasets. We highlight that all the JSCC-based models are trained using the composite loss function in Equation (4), but are evaluated across various traditional (e.g., PSNR) and perceptual (e.g., MS-SSIM and LPIPS) distortion quality metrics. For both BR values, DeepJSCC-WZ outperforms its point-to-point counterpart, that
Figure 2: Encoder architecture of our method **(top)**, which is used to encode both the input image \(\mathbf{x}\) at the transmitter and the side information image \(\mathbf{x}_{\mathrm{side}}\) at the receiver side. Red arrows indicate the flow of the encoded side information. \(\mathbf{s}_{1}\), \(\mathbf{s}_{2}\), \(\mathbf{s}_{3}\) and \(\mathbf{s}_{4}\) denote the encoded side information at different scales, which are to be used at the receiver side. Decoder architecture of our method **(bottom)**, which is used to reconstruct the input image from the noisy channel output \(\mathbf{y}\) and the side information \(\mathbf{x}_{\mathrm{side}}\).
is DeepJSCC, as well as its separation-based analogue, that is DeepNIC+Capacity, at all the evaluated SNRs in terms of the distortion criteria considered. Unlike the DeepNIC model, we observe that DeepJSCC-WZ does not suffer from the cliff effect, and provides a graceful performance degradation as the channel SNR varies.
Notably, we observe a stark performance improvement in the LPIPS distortion metric, which is widely accepted to be more aligned with human perception of image quality (see Fig. 3 for some visual examples).
Looking at the performance of DeepJSCC-WZ-sm, we note that imposing the same set of parameters for the encoding of both images, \(\mathbf{x}\) and \(\mathbf{x}_{\mathrm{side}}\), has little to no effect on the performance. We also remark that DeepJSCC-WZ achieves a comparable performance with the model DeepJSCC-Joint, whose performance is expected to serve as an upper bound on the DeepJSCC-WZ model, considering perceptual distortion criteria such as MS-SSIM and LPIPS. This empirically proves our proposed model's capability of successfully exploiting the decoder-only side information.
We refer to Table I for a comparison of the number of parameters of all the methods we have considered. For the two BR values, \(\rho=1/16\) and \(\rho=1/32\), the increase in the number of parameters is \(\approx 25\%\), \(\approx 53\%\) and \(\approx 85\%\) for DeepJSCC-WZ-sm, DeepJSCC-WZ and DeepJSCC-Cond, respectively, compared to the standard point-to-point DeepJSCC. The DeepJSCC-WZ-sm, DeepJSCC-WZ and DeepJSCC-Cond models have additional filter parameters at the decoder, in comparison to DeepJSCC, in order to fuse the encoded \(\mathbf{x}_{\mathrm{side}}\) features. DeepJSCC-WZ-sm has fewer parameters than DeepJSCC-WZ thanks to parameter sharing between the encoders of \(\mathbf{x}\) and \(\mathbf{x}_{\mathrm{side}}\). Therefore, one can opt for the DeepJSCC-WZ-sm variant, instead of the DeepJSCC-WZ model, to obtain comparable performance while keeping the number of parameters within a reasonable budget. DeepJSCC-Cond has three different encoder modules, two of which encode \(\mathbf{x}_{\mathrm{side}}\) at the transmitter and the receiver, while the third encodes \(\mathbf{x}\) at the transmitter.
## VI Conclusion
We have introduced learning-based image transmission schemes that are capable of exploiting decoder-only side information in the form of a correlated image. We have demonstrated that the receiver is able to successfully exploit this side information, yielding superior performance over all the channel conditions and distortion criteria considered, compared to ignoring the side information. Possible avenues for future work include analyzing fully distributed communication scenarios using DeepJSCC, which has the potential to be an important ingredient towards practical image/video transmission over wireless sensor networks.
|
2302.04100
|
Newtonian pulsations of relativistic ONe-core ultra-massive DA white
dwarfs
|
Ultra-massive H-rich (DA spectral type) white dwarf stars ($M_{\star} > 1.05
M_{\odot}$) are expected to be substantially crystallized by the time they
reach the ZZ Ceti instability strip ($T_{\rm eff} \sim 12\,000$ K).
Crystallization leads to a separation of $^{16}$O and $^{20}$Ne (or $^{12}$C
and $^{16}$O) in the core of ultra-massive WDs, which strongly impacts their
pulsational properties. An additional factor to take into account when modeling
the evolution and pulsations of WDs in this range of masses are the
relativistic effects, which induce changes in the cooling times and the stellar
masses derived from the effective temperature and surface gravity. Given the
arrival of large amounts of photometric data from space missions such as {\it
Kepler}/{\it K2} and {\it TESS}, it is important to assess the impact of
General Relativity in the context of pulsations of ultra-massive ZZ Ceti stars.
In this work, we present results of Newtonian gravity($g$)-mode pulsation
calculations in evolutionary ultra-massive WD models computed in the frame of
the General Relativity theory.
|
Alejandro H. Córsico, Leandro G. Althaus, María E. Camisassa
|
2023-02-08T14:47:43Z
|
http://arxiv.org/abs/2302.04100v1
|
# Newtonian pulsations of relativistic ONe-core ultra-massive DA white dwarfs
###### Abstract
Ultra-massive H-rich (DA spectral type) white dwarf stars (\(M_{\star}>1.05M_{\odot}\)) are expected to be substantially crystallized by the time they reach the ZZ Ceti instability strip (\(T_{\rm eff}\sim 12\,000\) K). Crystallization leads to a separation of \({}^{16}\)O and \({}^{20}\)Ne (or \({}^{12}\)C and \({}^{16}\)O) in the core of ultra-massive WDs, which strongly impacts their pulsational properties. An additional factor to take into account when modeling the evolution and pulsations of WDs in this range of masses are the relativistic effects, which induce changes in the cooling times and the stellar masses derived from the effective temperature and surface gravity. Given the arrival of large amounts of photometric data from space missions such as _Kepler/K2_ and _TESS_, it is important to assess the impact of General Relativity in the context of pulsations of ultra-massive ZZ Ceti stars. In this work, we present results of Newtonian gravity(\(g\))-mode pulsation calculations in evolutionary ultra-massive WD models computed in the frame of the General Relativity theory.
## 1 Introduction
ZZ Ceti stars are pulsating DA (H-rich atmospheres) white dwarfs (WDs) with effective temperatures in the range \(10\,500<T_{\rm eff}<12\,600\) K. These variable stars exhibit pulsation periods from \(\sim 100\) s to \(\sim 1400\) s due to nonradial gravity(\(g\)) modes with harmonic degree \(\ell=1\) and \(\ell=2\) (Winget & Kepler, 2008; Fontaine & Brassard, 2008; Althaus et al., 2010; Corsico et al., 2019). Although the vast majority of ZZ Ceti stars are DA WDs with masses between \(\sim 0.5\) and \(\sim 0.8M_{\odot}\), \(g\)-mode pulsations have also been detected in at least four ultra-massive (\(M_{\star}>1.05M_{\odot}\)) ZZ Ceti stars so far; they are BPM 37093 (\(M_{\star}=1.1M_{\odot}\); Kanaan et al., 1992), GD 518 (\(M_{\star}=1.24M_{\odot}\); Hermes et al., 2013), SDSS J084021 (\(M_{\star}=1.16M_{\odot}\); Curd et al., 2017), and WD J212402 (\(M_{\star}=1.16M_{\odot}\); Rowan et al., 2019). It is likely that even more massive pulsating WDs (\(M_{\star}>1.30M_{\odot}\)) will be identified in the coming years with the advent of huge volumes of high-quality photometric data collected by space missions such as the ongoing _TESS_ mission (Ricker et al., 2014) and the future _PLATO_ space telescope (Rauer et al., 2014). This large amount of data is expected to make asteroseismology a promising tool to study the structure and chemical composition of ultra-massive WDs (Corsico et al., 2019). The increasing number of ultra-massive WDs with masses beyond \(1.30M_{\odot}\), as well as the immediate prospect of detecting pulsating WDs with such masses, demands new, appropriate theoretical evolutionary models to analyze them. In particular, it is necessary to calculate models that take into account relativistic effects and to evaluate the impact of General Relativity on the pulsational properties of ultra-massive WDs. In this exploratory investigation, we take the first step in this direction, calculating Newtonian pulsations on fully relativistic equilibrium models.
## 2 Relativistic WD models
We have generated ultra-massive WD model sequences with ONe cores taking into account the full effects of General Relativity, employing the LPCODE stellar evolution code (Althaus et al., 2022). We considered realistic initial chemical profiles as predicted by the progenitor evolutionary history (Siess, 2007, 2010; Camisassa et al., 2019), and computed model sequences of \(1.29,1.31,1.33,1.35\), and \(1.369M_{\odot}\) WDs. The standard equations of stellar structure and evolution have been generalized to include the effects of General Relativity, following Thorne (1977). For comparison purposes, the same sequences have also been computed for the Newtonian gravity case. We have included the energy released during the crystallization
process, both due to latent heat and due to the induced chemical redistribution as in Camisassa et al. (2019).
We show in Fig. 1 the stellar radius and gravity of ultra-massive DA WD models with \(1.29M_{\odot}\) (left panels) and \(1.369M_{\odot}\) (right panels), for the Newtonian case (black curves) and the fully relativistic case (red curves). Clearly, general relativity induces smaller radii and larger gravities, and this effect is more pronounced for larger stellar masses.
## 3 Chemical profiles and the Brunt-Vaisala and Lamb frequencies
The cores of our models are composed mostly of \({}^{16}\)O and \({}^{20}\)Ne and smaller amounts of \({}^{12}\)C, \({}^{23}\)Na, and \({}^{24}\)Mg. Since element diffusion and gravitational settling operate throughout the WD evolution, our models develop pure \({}^{1}\)H envelopes. The \({}^{4}\)He content of our WD sequences is given by the evolutionary history of the progenitor star, whereas the \({}^{1}\)H content of our canonical envelopes [\(\log(M_{\rm H}/M_{*})=-6\)] has been set by imposing that the further evolution does not lead to \({}^{1}\)H thermonuclear flashes on the WD cooling track. The temporal changes of the chemical abundances due to element diffusion are assessed by using a new fully-implicit treatment for time-dependent element diffusion (Althaus et al., 2020).
The chemical profiles in terms of the fractional mass for \(1.29M_{\odot}\) and \(1.369M_{\odot}\) ONe-core WD models at \(T_{\rm eff}\sim 12\,000\) K and H envelope thickness \(\log(M_{\rm H}/M_{*})=-6\) are shown in the upper panels of Fig. 2. At this effective temperature, typical of the ZZ Ceti instability strip, the chemical rehomogenization due to crystallization has already finished, giving rise to a core where the abundance of \({}^{16}\)O increases and \({}^{20}\)Ne decreases outward.
In the lower panels of Fig. 2 we show the squared Brunt-Vaisala and Lamb (\(\ell=1\)) frequencies corresponding to the same models shown in the upper panels,
Figure 1: The stellar radius (upper panels) and gravity (bottom panels) in terms of the outer mass fraction coordinate corresponding to ultra-massive DA WD models with \(M_{*}=1.29M_{\odot}\) (left) and \(M_{*}=1.369M_{\odot}\) (right), for the Newtonian case (black curves) and the fully relativistic case (red curves), for \(T_{\rm eff}\sim 12\,000\) K.
for the Newtonian case (black curves) and the relativistic case (red curves). The triple chemical transition between \({}^{12}\)C, \({}^{16}\)O, and \({}^{20}\)Ne located at \(-\log(1-M_{r}/M_{*})\sim 1.5\) lies within the solid part of the core, so it has no relevance for the mode-trapping properties of the models. This is because, according to the "hard-sphere" boundary conditions adopted for the pulsations (Montgomery & Winget, 1999), the \(g\)-mode eigenfunctions do not penetrate the crystallized region (gray areas). In this way, the mode-trapping properties are entirely determined by the presence of the \({}^{4}\)He/\({}^{1}\)H transition, which is located in more external regions. Note that the Brunt-Vaisala and Lamb frequencies for the Newtonian and relativistic models are indistinguishable for the case of \(1.29M_{\odot}\), but they differ noticeably when \(M_{*}=1.369M_{\odot}\).
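For reference, the two critical frequencies discussed here take, in their standard Newtonian form,
\[
N^{2}=g\left(\frac{1}{\Gamma_{1}}\frac{d\ln P}{dr}-\frac{d\ln\rho}{dr}\right),\qquad L_{\ell}^{2}=\ell(\ell+1)\,\frac{c_{s}^{2}}{r^{2}},
\]
where \(g\) is the local gravity, \(\Gamma_{1}\) the adiabatic exponent, and \(c_{s}\) the adiabatic sound speed; in our case the relativistic effects enter only through the equilibrium structure on which these quantities are evaluated.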
## 4 Pulsation spectrum of \(g\) modes for Newtonian and relativistic ultra-massive WD models
Adiabatic nonradial \(g\)-mode Newtonian pulsations have been computed with the LP-PUL pulsation code (Corsico & Althaus, 2006). This code neglects the oscillations of \(g\) modes in the crystallized region of the WD core (hard-sphere boundary condition; see Montgomery & Winget, 1999; Corsico et al., 2004). Fig. 3 shows the asymptotic period spacing (computed as in Tassoul et al., 1990) for the sequences of \(1.29,1.31,1.33,1.35\) and \(1.369M_{\odot}\) WD models in terms of the effective temperature all along the ZZ Ceti instability strip. We note that the asymptotic period spacing is smaller for the case of the relativistic WD sequences as compared with the Newtonian sequences. This is what we expect since the asymptotic period
Figure 2: Abundances by mass of the different chemical species as a function of the fractional mass (upper panels), and the logarithm of the squared Brunt-Väisälä and Lamb (\(\ell=1\)) frequencies, corresponding to ONe-core DA WD models with \(M_{*}=1.29M_{\odot}\) (left) and \(M_{*}=1.369M_{\odot}\) (right), for \(T_{\rm eff}\sim 12\,000\) K. The gray areas correspond to the crystallized regions.
spacing is inversely proportional to the integral of the Brunt-Vaisala frequency divided by the radius. Since the Brunt-Vaisala frequency is larger in the relativistic case (see Fig. 2), the integral is larger and its inverse is smaller than in the Newtonian case. The difference is larger for larger stellar masses.
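Explicitly, the asymptotic relation of Tassoul et al. (1990) used here can be written as
\[
\Delta\Pi_{\ell}^{\rm a}=\frac{2\pi^{2}}{\sqrt{\ell(\ell+1)}}\left[\int_{r_{1}}^{r_{2}}\frac{N}{r}\,dr\right]^{-1},
\]
with the integral taken over the \(g\)-mode propagation region (bounded below, in our crystallized models, by the solid core), so a larger Brunt-Vaisala frequency directly yields a smaller asymptotic period spacing.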
In Fig. 4 we depict the dipole \(\ell=1\) forward period spacing, the kinetic oscillation energy, and the rate of period change for the models with \(1.29M_{\odot}\) and \(1.369M_{\odot}\), at \(T_{\rm eff}=12\,000\) K. We can see that, in general, the period spacing is larger in the Newtonian case, as expected (see Fig. 3). On the other hand, the oscillation kinetic energy of the modes is higher in the relativistic case, since the WDs are more compact and dense than in the Newtonian case. Finally, the rate of change of periods is larger for the relativistic case, since the cooling timescale is shorter due to relativistic effects, in comparison with the Newtonian case (Althaus et al., 2022).
## 5 Summary and conclusions
In order to start the study of the impact of General Relativity on the pulsations of ultra-massive WDs representative of ZZ Ceti stars, we have calculated as an initial step the Newtonian \(g\)-mode pulsations on relativistic equilibrium WD structures. We have found that the Brunt-Vaisala frequency, the period spacing, the oscillation kinetic energy, and the rates of period change are remarkably modified in relation to Newtonian models for the case of very high masses, close to the Chandrasekhar mass, although the effects are much less noticeable for lower masses. The next step in this project is to compute the additional effects of General Relativity on the stellar pulsations by solving the nonradial stellar pulsation equations in the relativistic Cowling approximation. This will allow us to assess the combined effect of General Relativity on the equilibrium structures and on the pulsation modes.
## Acknowledgments
A.H.C. warmly thanks Klaus Werner and the Local Organizing Committee of the 22nd European White Dwarf Workshop for support that allowed him to attend this conference.
|
2304.09972
|
MasakhaNEWS: News Topic Classification for African languages
|
African languages are severely under-represented in NLP research due to lack
of datasets covering several NLP tasks. While there are individual language
specific datasets that are being expanded to different tasks, only a handful of
NLP tasks (e.g. named entity recognition and machine translation) have
standardized benchmark datasets covering several geographical and
typologically-diverse African languages. In this paper, we develop MasakhaNEWS
-- a new benchmark dataset for news topic classification covering 16 languages
widely spoken in Africa. We provide an evaluation of baseline models by
training classical machine learning models and fine-tuning several language
models. Furthermore, we explore several alternatives to full fine-tuning of
language models that are better suited for zero-shot and few-shot learning such
as cross-lingual parameter-efficient fine-tuning (like MAD-X), pattern
exploiting training (PET), prompting language models (like ChatGPT), and
prompt-free sentence transformer fine-tuning (SetFit and Cohere Embedding API).
Our evaluation in zero-shot setting shows the potential of prompting ChatGPT
for news topic classification in low-resource African languages, achieving an
average performance of 70 F1 points without leveraging additional supervision
like MAD-X. In few-shot setting, we show that with as little as 10 examples per
label, we achieved more than 90\% (i.e. 86.0 F1 points) of the performance of
full supervised training (92.6 F1 points) leveraging the PET approach.
|
David Ifeoluwa Adelani, Marek Masiak, Israel Abebe Azime, Jesujoba Alabi, Atnafu Lambebo Tonja, Christine Mwase, Odunayo Ogundepo, Bonaventure F. P. Dossou, Akintunde Oladipo, Doreen Nixdorf, Chris Chinenye Emezue, sana al-azzawi, Blessing Sibanda, Davis David, Lolwethu Ndolela, Jonathan Mukiibi, Tunde Ajayi, Tatiana Moteu, Brian Odhiambo, Abraham Owodunni, Nnaemeka Obiefuna, Muhidin Mohamed, Shamsuddeen Hassan Muhammad, Teshome Mulugeta Ababu, Saheed Abdullahi Salahudeen, Mesay Gemeda Yigezu, Tajuddeen Gwadabe, Idris Abdulmumin, Mahlet Taye, Oluwabusayo Awoyomi, Iyanuoluwa Shode, Tolulope Adelani, Habiba Abdulganiyu, Abdul-Hakeem Omotayo, Adetola Adeeko, Abeeb Afolabi, Anuoluwapo Aremu, Olanrewaju Samuel, Clemencia Siro, Wangari Kimotho, Onyekachi Ogbu, Chinedu Mbonu, Chiamaka Chukwuneke, Samuel Fanijo, Jessica Ojo, Oyinkansola Awosan, Tadesse Kebede, Toadoum Sari Sakayo, Pamela Nyatsine, Freedmore Sidume, Oreen Yousuf, Mardiyyah Oduwole, Tshinu Tshinu, Ussen Kimanuka, Thina Diko, Siyanda Nxakama, Sinodos Nigusse, Abdulmejid Johar, Shafie Mohamed, Fuad Mire Hassan, Moges Ahmed Mehamed, Evrard Ngabire, Jules Jules, Ivan Ssenkungu, Pontus Stenetorp
|
2023-04-19T21:12:23Z
|
http://arxiv.org/abs/2304.09972v2
|
# MasakhaNEWS: News Topic Classification for African Languages
###### Abstract
African languages are severely under-represented in NLP research due to lack of datasets covering several NLP tasks. While there are individual language specific datasets that are being expanded to different tasks, only a handful of NLP tasks (e.g. named entity recognition and machine translation) have standardized benchmark datasets covering several geographical and typologically-diverse African languages. In this paper, we develop **MasakhaNEWS** -- a new benchmark dataset for news topic classification covering 16 languages widely spoken in Africa. We provide an evaluation of baseline models by training classical machine learning models and fine-tuning several language models. Furthermore, we explore several alternatives to full fine-tuning of language models that are better suited for zero-shot and few-shot learning such as cross-lingual parameter-efficient fine-tuning (like MAD-X), pattern exploiting training (PET), prompting language models (like ChatGPT), and prompt-free sentence transformer fine-tuning (SetFit and Cohere Embedding API). Our evaluation in zero-shot setting shows the potential of prompting ChatGPT for news topic classification in low-resource African languages, achieving an average performance of 70 F1 points without leveraging additional supervision like MAD-X. In few-shot setting, we
show that with as little as 10 examples per label, we achieved more than 90% (i.e. 86.0 F1 points) of the performance of full supervised training (92.6 F1 points) leveraging the PET approach.
## 1 Introduction
News topic classification is a text classification task in NLP that involves categorizing news articles into different categories like sports, business, entertainment or politics. It has shaped the development of several machine learning algorithms over the years such as topic modeling (Blei et al., 2001; Dieng et al., 2020) and deep learning models (Zhang et al., 2015; Joulin et al., 2017). Similarly, news topic classification is a popular downstream task for evaluating the performance of large language models (LLMs) in both fine-tuning, and prompt-tuning setups (Yang et al., 2019; Sun et al., 2019; Brown et al., 2020; Liu et al., 2023).
In the recent "prompting" paradigm, it has been shown that with as little as 5 or 10 labelled examples, one can obtain an impressive predictive performance for text classification by leveraging LLMs (Schick and Schutze, 2021; Sanh et al., 2022; Scao et al., 2022). However, most of the evaluation have only been performed in English language and a few other high-resource languages. It is _unclear how this approach extends to pre-trained multilingual language models_ for low-resource languages. For instance, BLOOM (Scao et al., 2022) was pre-trained on 46 languages, including 22 African languages (mostly from the Niger-Congo family). However, extensive evaluation on these set of African languages was not performed due to lack of evaluation datasets. In general, only a handful of NLP tasks such as machine translation (Adelani et al., 2022a; NLLB-Team et al., 2022), named entity recognition (Adelani et al., 2021; 2022b), and sentiment classification (Muhammad et al., 2023) have standardized benchmark datasets covering several geographical and typologically-diverse African languages. Another popular task that can be used for evaluating the downstream performance of language models is news topic classification, but human-annotated datasets for benchmarking topic classification using language models for African languages are _scarce_.
In this paper, we address two problems: the lack of evaluation datasets and the lack of extensive evaluation of LLMs for African languages. We create **MasakhaNEWS** -- a large-scale news topic classification dataset covering 16 typologically-diverse languages widely spoken in Africa, including English and French, with the same label categories across all languages. We provide several baseline models using both classical machine learning approaches and fine-tuned LLMs. Furthermore, we explore several alternatives to full fine-tuning of language models that are better suited for zero-shot and few-shot learning (e.g. 5 examples per label), such as cross-lingual parameter-efficient fine-tuning (like MAD-X (Pfeiffer et al., 2020)), pattern exploiting training (PET) (Schick and Schutze, 2021), prompting language models (like ChatGPT), and prompt-free sentence transformer fine-tuning (SetFit (Tunstall et al., 2022) and the Cohere Embedding API).
Our evaluation in zero-shot setting shows the potential of prompting ChatGPT for news topic classification in low-resource African languages, achieving an average performance of 70 F1 points without leveraging additional supervision like MAD-X. In few-shot setting, we show that with as little as 10 examples per label, we achieved more than 90% (i.e. 86.0 F1 points) of the performance of full supervised training (92.6 F1 points) leveraging the PET approach. We hope that **MasakhaNEWS** encourages the NLP community to benchmark and evaluate LLMs on more low-resource languages. For reproducibility, the data and code are available on Github 1.
Footnote 1: [https://github.com/masakhaane-io/masakhaane-news](https://github.com/masakhaane-io/masakhaane-news)
## 2 Related Work
**Topic classification**, an application of text classification, is a popular task in natural language processing. For this task, several datasets for various languages (Zhang et al., 2015), including African languages, have been created using either manual or automatic annotation techniques. However, these efforts are currently limited to a small number of African languages. For example, Hedderich et al. (2020) created a dataset that was manually annotated for Hausa and Yoruba languages, sourced from VOA Hausa and the BBC Yoruba, with 7 and 5 categories respectively. Niyongabo et al. (2020)
also developed a moderately large news topic classification dataset for Kinyarwanda and Kirundi, using human annotators to reclassify news from various Rwandan news websites into 14 categories for Kinyarwanda and 12 categories for Kirundi, from the initial 48 and 26 categories. Similarly, Azime and Mohammed (2021) curated a 6-category topic classification dataset for Amharic by gathering topics and their predefined labels from several websites, then manually reviewing and removing any inconsistencies. Another news topic classification dataset is the ANTC dataset (Alabi et al., 2022), an automatically created dataset collected from various sources such as VOA, BBC, Global Voices, and Isolezwe newspapers. It contains five African languages: Lingala, Somali, Naija, Malagasy, and isiZulu and uses the predefined labels from the different websites.
To the best of our knowledge, these are the few publicly available topic classification datasets for African languages, covering approximately 11 languages. These datasets, however, have limitations due to the fact that they were created with little or no human supervision and using different labeling schemes. In contrast, in this work, we present news topic classification data for 16 typologically diverse African languages, with a consistent labeling scheme applied across all languages.
**Prompting Language Models.** Using manually designed prompts to guide text generation has recently been applied to a myriad of NLP tasks, including topic classification. Models such as GPT3 (Brown et al., 2020) and T5 (Raffel et al., 2020) are able to learn structural and semantic relationships between words and have shown impressive results even in multilingual scenarios when tuned for different tasks. One approach to prompt-tuning a language model for topic classification is to design a "template" for classification and insert a sequence of text into the template. This is then used to condition the language model to generate the corresponding class for that span of text. Using this approach, Le Scao and Rush (2021) show that the effectiveness of prompting is heavily dependent on the quality of the designed prompts and that a prompt is potentially worth 100 data points. This means that prompting might represent a new approach to learning in low-resource settings with only a few labelled examples, commonly known as few-shot learning.
There are some other exciting approaches to few-shot learning without prompting. One of them is SetFit (Tunstall et al., 2022), which takes advantage of sentence transformers to generate dense representations of input sequences. These representations are then passed through a classifier to predict class labels. The sentence transformers are trained on a few examples using contrastive learning, where positive and negative training pairs are sampled by in-class and out-of-class sampling. Another common approach is Pattern-Exploiting Training, also known as PET (Schick and Schutze, 2021). PET is a semi-supervised training approach that uses restructured input sequences to condition language models to better understand a given task, while iPET (Schick and Schutze, 2021) is an iterative variant of PET that has also been shown to perform well in few-shot scenarios. In this work, we benchmark the performance of all these approaches for topic classification in African languages.
## 3 Languages
Table 1 presents the languages covered in **MasakhaNEWS** along with information on their language families, their primary geographic regions in Africa, and the number of speakers. Our dataset consists of a total of 16 typologically-diverse languages, and they were selected based on the availability of publicly available news corpora in each language, the availability of native-speaking annotators, geographical diversity, and, most importantly, because they are widely spoken in Africa. English and French are official languages in 42 African countries, Swahili is native to 12 countries, and Hausa is native to 6 countries. In terms of geographical diversity, we have four languages spoken in West Africa, seven languages spoken in East Africa, two languages spoken in Central Africa (i.e. Lingala and Kiswahili), and two spoken in Southern Africa (i.e. chiShona and isiXhosa). Also, we cover four language families: Niger-Congo (8), Afro-Asiatic (5), Indo-European (2), and English Creole (1). The only English creole language is Nigerian-Pidgin, also known as Naija. Each language is spoken by at least 10 million people, according to Ethnologue (Eberhard et al., 2021).
## 4 Data
### Data Source
The data used in this research study were sourced from multiple reputable news outlets. The collection process involved crawling the British Broadcasting Corporation (BBC) and Voice of America (VOA) websites. We crawled between 2k-12k articles depending on the number of articles available on the websites. Some of the websites already have pre-defined categories; we make use of these to filter out articles that do not belong to the categories we plan to annotate. We took _inspiration_ for the news categorization from **BBC English**, with six (6) pre-defined and well-defined categories (_"business"_, _"entertainment"_, _"health"_, _"politics"_, _"sports"_, and _"technology"_) with over 500 articles in each category. For English, we only crawled articles belonging to these categories, while for the other languages, we crawled all articles. Our target is to have around **3,000** articles for annotation, but three languages (Lingala, Rundi, and Somali) have fewer than that. Table 1 shows the news source per language and the number of articles crawled.
### Data Annotation
We recruited volunteers from the Masakhane community - an African grassroots community focused on advancing NLP for African languages. The annotators were asked to label 3k articles into eight categories: _"business"_, _"entertainment"_, _"health"_, _"politics"_, _"religion"_, _"sports"_, _"technology"_, and _"uncategorized"_. Six of the categories are based on BBC English major news categories, the _"religion"_ label was added since many African news websites frequently cover this topic. Other articles that do not belong to the first seven categories, are assigned to the "uncategorized" label.
For each language, the annotation followed two stages. In the **first stage**, we randomly shuffled the entire dataset and asked annotators to label the first 200 articles manually. In the **second stage**, we made use of active learning: we combined the first 200 annotated articles with articles carrying pre-defined labels from the news websites, when available, and trained a classifier (i.e. by fine-tuning the AfroXLMR-base LLM (Alabi et al., 2022)). We then ran predictions on the rest of the articles and asked annotators to correct the mistakes of the classifier. This approach helped to speed up the annotation process.
**Annotation tool.** We make use of an in-house annotation tool built for text classification to label the articles. Appendix A shows an example of the interface of the tool. To further reduce annotator effort, we ask annotators to label articles based on the headline instead of the entire article. However, since some headlines are not very descriptive, we decided to concatenate the headline and the first two sentences of the news text to provide additional context to annotators.
**Inter-agreement score.** We report the Fleiss Kappa score (Fleiss et al., 1971) to measure annotation agreement. Table 2 shows that all languages have a moderate to perfect Fleiss Kappa score
| Language | Family / branch | Region | # speakers | News Source | # articles |
|---|---|---|---|---|---|
| Amharic (amh) | Afro-Asiatic / Ethio-Semitic | East Africa | 57M | BBC | 8,204 |
| English (eng) | Indo-European / Germanic | Across Africa | 1268M | BBC | 5,073 |
| French (fra) | Indo-European / Romance | Across Africa | 277M | BBC | 5,683 |
| Hausa (hau) | Afro-Asiatic / Chadic | West Africa | 77M | BBC | 6,965 |
| Igbo (ibo) | Niger-Congo / Volta-Niger | West Africa | 31M | BBC | 4,628 |
| Lingala (lin) | Niger-Congo / Bantu | Central Africa | 40M | VOA | 2,022 |
| Luganda (lug) | Niger-Congo / Bantu | Central Africa | 11M | Gambuzue | 2,621 |
| Naija (pcm) | English Creole | West Africa | 121M | BBC | 7,783 |
| Oromo (orm) | Afro-Asiatic / Cushitic | East Africa | 37M | BBC | 7,782 |
| Rundi (run) | Niger-Congo / Bantu | East Africa | 11M | BBC | 2,995 |
| chiShona (sna) | Niger-Congo / Bantu | Southern Africa | 11M | VOA & Kwayedza | 11,146 |
| Somali (som) | Afro-Asiatic / Cushitic | East Africa | 22M | BBC | 2,915 |
| Kiswahili (swa) | Niger-Congo / Bantu | East & Central Africa | 71M-106M | BBC | 6,431 |
| Tigrinya (tir) | Afro-Asiatic / Ethio-Semitic | East Africa | 9M | BBC | 4,372 |
| isiXhosa (xho) | Niger-Congo / Bantu | Southern Africa | 19M | Isolezwe | 24,658 |
| Yorùbá (yor) | Niger-Congo / Volta-Niger | West Africa | 46M | BBC | 6,974 |

Table 1: **Languages covered in MasakhaNEWS and Data Source**: including language family, region, number of L1 & L2 speakers, and number of articles from each news source.
(i.e. 0.55 - 0.85), which shows high agreement among the annotators recruited for each language. Languages with only one annotator (i.e. Luganda and Rundi) were excluded from this evaluation.
**Deciding a single label per article.** After annotation, we assign the final label to each article by majority voting: a label needs to be agreed upon by a minimum of two annotators to be assigned to an article. The only exceptions are Luganda and Rundi, since they had a single annotator. Our final dataset for each language consists of a minimum of 72 articles per topic and a maximum of 500, except for English, where the classes are roughly balanced. We excluded the infrequent labels so we do not have a highly unbalanced dataset. The choice of a minimum of 72 articles ensures a minimum of 50 articles in the training set. Our target is to have at least four topics per language with a minimum of 72 articles each. This approach worked smoothly except for two languages: Lingala ("politics", "health" and "sports") and chiShona ("business", "health" and "politics"), where we had only three topics with more than 72 articles. To obtain more articles per class for Lingala, we resolved annotation conflicts between the Lingala annotators to obtain more labels for the "business" category. For chiShona, which still had infrequent classes, we crawled additional "sports" articles from a local chiShona website (_Kwayedza_), followed by manual filtering of unrelated sports news.
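A minimal, self-contained sketch of this majority-voting rule (function names and the toy examples are ours):

```python
from collections import Counter

def final_label(votes, min_agreement=2):
    """Return the majority label if at least `min_agreement` annotators
    agree on it, otherwise None; Luganda and Rundi, which had a single
    annotator, effectively use min_agreement=1."""
    label, count = Counter(votes).most_common(1)[0]
    return label if count >= min_agreement else None

print(final_label(["sports", "sports", "politics"]))    # -> sports
print(final_label(["health", "business", "religion"]))  # -> None (no agreement)
```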
**Data Split.** Table 2 provides the data split for the **MasakhaNEWS** languages, together with the distribution of articles by topic. We divided the annotated data into TRAIN, DEV and TEST splits following a 70% / 10% / 20% ratio.
## 5 Baseline Experiments
We trained baseline text classification models by concatenating the news headline and news text using different approaches.
### Baseline Models
We trained three classical ML models: Naive Bayes, multi-layer perceptron, and XGBoost using the popular sklearn tool 2. We employed the "CountVectorizer" method to represent the text data, which converts a collection of text documents to a matrix of token counts. This method allows us to convert text data into numerical feature vectors.
Footnote 2: [https://scikit-learn.org/stable/](https://scikit-learn.org/stable/)
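A minimal sketch of this classical pipeline, shown for the Naive Bayes variant with toy inputs (the actual features are built from the concatenated headline and news text):

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Toy stand-ins for "headline + text" strings and their topic labels.
train_texts = ["election results announced today", "star striker scores twice"]
train_labels = ["politics", "sports"]

clf = make_pipeline(CountVectorizer(), MultinomialNB())
clf.fit(train_texts, train_labels)
print(clf.predict(["early election results expected"]))  # -> ['politics']
```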
Furthermore, we fine-tune nine multilingual text encoders; seven of them are BERT/RoBERTa-based, i.e. XLM-R (base & large) (Conneau et al., 2020), AfriBERTa-large (Ogueji et al., 2021), RemBERT (Chung et al., 2021), AfroXLM-R (base & large) (Alabi et al., 2022), and
\begin{table}
\begin{tabular}{l l l|c c c c c c|c c} \hline \hline \multirow{2}{*}{**Language**} & \multirow{2}{*}{**Train/Dev/Test**} & \multirow{2}{*}{**\# topics**} & \multicolumn{6}{c}{**Topics (number of articles per topic)**} & \multicolumn{2}{c}{**Fleiss**} \\ & & & **bus** & **\#** & **\#** & **\# health** & **\# pool** & **\#** & **\# scbr** & **\# Annotator** & **Kappa** \\ \hline Amharc (amh) & 1311/ 188/ 376 & 4 & 404 & - & 500 & 500 & - & 471 & & 5 & 0.81 \\ English (eng) & 3309/ 472/ 948 & 6 & 799 & 750 & 746 & 821 & - & 1000 & 613 & 7 & 0.81 \\ French (fra) & 1476/ 211/ 422 & 5 & 500 & - & 500 & 500 & - & 500 & 109 & 3 & 0.83 \\ Hausa (hua) & 2219/ 377/ 637 & 7 & 399 & 500 & 493 & 500 & 493 & 497 & 291 & 5 & 0.85 \\ Ijeb (ibo) & 1356/ 194/ 900 & 6 & 292 & 366 & 424 & 500 & 73 & 285 & - & 4 & 0.65 \\ Lingala (lin) & 608/ 87/ 175 & 4 & 82 & - & 193 & 500 & 95 & - & 2 & 0.56 \\ Luganda (jug) & 7717/ 110/ 223 & 5 & 169 & - & 228 & 500 & 91 & 116 & - & 1 & - \\ Oromo (com) & 1015/ 145/ 292 & 4 & - & 119 & 447 & 500 & - & 386 & - & 3 & 0.63 \\ Naija (pcm) & 1060/ 152/ 305 & 5 & 97 & 460 & 159 & 309 & - & 492 & - & 4 & 0.66 \\ Rundi (tun) & 1117/ 159/ 322 & 6 & 76 & 158 & 372 & 500 & 73 & 419 & - & 1 & - \\ chiShona (ana) & 1288/ 185/ 369 & 4 & 500 & - & 425 & 500 & - & 417 & - & 3 & 0.63 \\ Somali (com) & 1021/ 14/ 294 & 7 & 114 & 139 & 354 & 500 & 73 & 148 & 135 & 3 & 0.55 \\ Kiswahii (swsw) & 1687/ 377/ 46 & 7 & 316 & 98 & 500 & 500 & 292 & 500 & 165 & 4 & 0.72 \\ Tgiriya (ir) & 947/ 137/ 272 & 6 & 80 & 167 & 395 & 500 & - & 125 & 89 & 2 & 0.63 \\ siXhosa (sho) & 1032/ 147/ 297 & 5 & 72 & 500 & 100 & 308 & - & 496 & - & 3 & 0.89 \\ Yorhih (sor) & 1433/ 206/ 411 & 5 & - & 500 & 398 & 500 & 317 & 335 & - & 5 & 0.80 \\ \hline \hline \end{tabular}
\end{table}
Table 2: **MasakhaNEWS dataset**. We provide the data size of the annotated data, news topics, and number of annotators. The topics are labelled by their prefixes in the table **(topics)**: **business**, **entertainment**, **health**, **politics**, **religion**, **sport**, **technology.
AfroLM (Dossou et al., 2022); the other two are mDeBERTaV3 (He et al., 2021) and LaBSE (Feng et al., 2022). mDeBERTaV3 pre-trains a DeBERTa-style model (He et al., 2021) with the replaced token detection objective proposed in ELECTRA (Clark et al., 2020). On the other hand, LaBSE is a multilingual sentence transformer model that is popular for mining parallel corpora for machine translation.
Finally, we fine-tuned four multilingual Text-to-Text (T2T) models: mT5-base (Xue et al., 2021), Flan-T5-base (Chung et al., 2022), AfriMT5-base (Adelani et al., 2022), and AfriTeVA-base (Ogundepo et al., 2022). The fine-tuning and evaluation of the multilingual text encoders and T2T models were performed using HuggingFace Transformers (Wolf et al., 2020) and PyTorch Lightning3.
Footnote 3: [https://pypi.org/project/pytorch-lightning/](https://pypi.org/project/pytorch-lightning/)
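The sketch below shows how one of these encoders can be fine-tuned with HuggingFace Transformers; the checkpoint name, dataset identifier, split and column names, and hyperparameters are illustrative assumptions rather than the exact configuration used in the paper.

```python
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

MODEL = "Davlan/afro-xlmr-base"          # assumed Hub id for AfroXLMR-base
DATA = "masakhane/masakhanews"           # assumed Hub id for MasakhaNEWS

dataset = load_dataset(DATA, "hau")      # e.g. the Hausa subset
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(MODEL, num_labels=7)

def tokenize(batch):
    # "headline_text" is assumed to hold the concatenated headline + text.
    return tokenizer(batch["headline_text"], truncation=True, max_length=256)

dataset = dataset.map(tokenize, batched=True)
trainer = Trainer(
    model=model,
    args=TrainingArguments("afroxlmr-news", learning_rate=2e-5,
                           per_device_train_batch_size=16, num_train_epochs=5),
    train_dataset=dataset["train"],
    eval_dataset=dataset["validation"],
    tokenizer=tokenizer,                 # enables padded batching
)
trainer.train()
```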
The LLMs evaluated were both massively multilingual (i.e. typically trained on over 100 languages around the world) and African-centric (i.e. trained mostly on languages spoken in Africa). The African-centric multilingual text encoders are all modeled after XLM-R. AfriBERTa was pretrained from scratch on 11 African languages, AfroXLMR was adapted to African languages through fine-tuning the original XLM-R model on 17 African languages and 3 languages commonly spoken in Africa, while AfroLM was pretrained on 23 African languages utilizing active learning. Similar to the PLMs, the T2T models used in this study were pretrained on hundreds of languages, and they are all based on the T5 model (Raffel et al., 2020), which is an encoder-decoder model trained with the span-mask denoising objective. mT5 is a multilingual version of T5, and Flan-T5 was fine-tuned on multiple tasks using T5 as a base. The study also included adaptations of the original models, such as AfriMT5-base, as well as AfriTeVA-base, a T5 model pre-trained on 10 African languages.
### Baseline Results
Table 4 shows the results of training several models on the **MasakhaNEWS** TRAIN split and evaluating on the TEST split for each language. Our evaluation shows that classical ML models are, on average, worse than fine-tuned multilingual LLMs; however, their performance is sometimes comparable to that of LLMs if the language was not covered during the pre-training of the
\begin{table}
\begin{tabular}{l r r r r r r r r r r r r r r r r r} \hline \hline
**Model** & **size** & **amb** & **eng** & **fra** & **ham** & **ibo** & **In** & **ling** & **arm** & **pcm** & **rnn** & **som** & **swa** & **tir** & **sho** & **yor** & **ANG** \\ \hline _classical ML_ & & & & & & & & & & & & & & & & & & \\ M.P. & 2002 & 80.2 & 84.6 & 86.7 & 80.1 & 84.3 & 82.6 & 86.7 & 93.5 & 85.9 & 92.6 & 71.1 & 79.8 & 81.9 & 94.5 & 89.3 & 85.7 \\ Naviehres & \(<\)20k & 91.8 & 8.0 & 87.4 & 83.3 & 79.8 & 82.8 & 84.0 & 85.6 & 92.8 & 79.9 & 91.5 & 74.8 & 76.8 & 71.4 & 91.0 & 84.0 & 83.7 \\ XGBout & \(<\)20k & 90.1 & 86.0 & 81.2 & 84.7 & 78.6 & 74.8 & 83.8 & 83.2 & 93.3 & 79.2 & 94.3 & 68.5 & 74.9 & 75.2 & 91.1 & 85.2 & 82.8 \\ \hline \multicolumn{12}{l}{multilingual-level _encoders_} & \multicolumn{1}{c}{} \\ \hline Arithmetic & 126M & 90.6 & 88.9 & 76.4 & 89.2 & 87.3 & 87.0 & 85.1 & 89.4 & 96.1 & 91.3 & 89.3 & 83.9 & 83.7 & 80.6 & 86.9 & 90.3 & **37.8** \\ XLM-R-2 & 270M & 90.9 & 90.6 & 90.4 & 88.4 & 82.5 & 87.9 & 85.3 & 82.2 & 97.8 & 85.9 & 83.9 & 78.3 & 85.6 & 54.6 & 78.6 & 84.5 & 83.0 \\ AfroLM-2 & 270M & 94.2 & 92.5 & 91.0 & 90.7 & 90.3 & 92.4 & 92.1 & 92.2 & 91.4 & 95.4 & 82.2 & 88.6 & 85.4 & 97.0 & 93.0 & 91.7 \\ AfroLM-2 & 270M & 80.3 & 87.7 & 77.5 & 88.3 & 88.4 & 85.7 & 88.0 & 83.5 & 95.6 & 90.8 & 92.5 & 82.0 & 83.2 & 85.4 & 96.4 & 86.5 & 86.1 \\ mDeBERTa & 270M & 91.7 & 93.0 & 89.2 & 88.3 & 88.1 & 83.5 & 86.7 & 86.8 & 94.3 & 94.9 & 72.0 & 84.8 & 78.7 & 90.5 & 90.5 & 86.0 \\ LaMSE & 271M & 92.5 & 91.6 & 90.9 & 90.0 & 91.6 & 98.6 & 86.3 & 86.7 & 94.1 & 91.4 & 94.6 & 82.1 & 87.6 & 83.8 & 94.7 & 92.1 & 90.3 \\ MXM-R-2 & 550M & 93.1 & 91.2 & 91.6 & 94.6 & 91.4 & 91.9 & 73.9 & 89.4 & 84.7 & 80.8 & 85.1 & 85.3 & 62.7 & 89.2 & 84.5 & 86.1 \\ AfroXLMR-2 & 550M & **94.4** & **93.1** & 91.1 & **92.2** & **93.4** & **93.7** & **89.9** & **92.1** & **94.8** & **98.2** & **97.4** & **86.3** & **88.7** & **88.5** & **97.3** & **94.0** & **92.6** \\ RankBERT & 550M & 94.4 & 92.4 & 90.8 & 90.5 & 91.1 & 91.5 & 86.7 & 83.7 & 95.2 & 90.6 & 93.0 & 75.9 & 86.7 & 69.9 & 92.5 & 93.0 & 90.1 \\ \hline \multicolumn{12}{l}{multilingual-level _test-to-out LLMs_} & \multicolumn{1}{c}{} \\ \hline Arithmetic-base & 229M & 87.0 & 80.3 & 71.9 & 85.8 & 79.9 & 82.8 & 60.2 & 82.9 & 95.2 & 80.0 & 84.4 & **80.0** & **80.7** & **85.2** & **94.0** & **86.4** & **77.5** \\ mTS-base & 580M & 73.2 & 83.9 & 89.0 & 82.7 & 76.8 & 80.6 & 79.2 & 92.6 & 85.7 & 90.4 & 75.0 & 76.1 & 65.1 & 71.8 & 86.2 & 80.0 \\ Pan T5-base & 580M & 54.2 & 94.8 & 89.5 & 84.5 & 86.6 & 90.6 & 84.1 & 85.8 & 97.8 & 87.3 & 90.6 & 75.0 & 79.0 & 41.5 & 90.8 & 88.0 & 82.4 \\ AfroLMSE-base & 580M & 90.2 & 90.3 & 87.4 & 89.7 & 80.0 & 86.5 & 84.6 & 83.9 & 93.6 & 91.0 & 91.5 & 77.8 & **84.4** & 80.8 & **91.6** & **80.8** & **88.7** \\ \hline \hline \end{tabular}
\end{table}
Table 4: **Baseline results on MasakhaNEWS**. We compare several ML approaches using both classical ML and LLMs. Average is over 5 runs. Evaluation is based on weighted F1-score. Africa-centric models are in gray color
LMs. For example, MLP, NaiveBayes and XGBoost have better performance than AfriBERTa on fra and sna since these languages were not seen during pre-training of that LLM. Similarly, AfroLM had worse results for fra for the same reason. On average, XLM-R-base, AfroLM, mDeBERTaV3 and XLM-R-large gave \(83.0\) F1, \(86.1\) F1, \(86.0\) F1, and \(86.1\) F1 respectively, with worse performance compared to the other LLMs (\(87.8-92.6\) F1), because they do not cover some of the African languages during pre-training (see Table 3) or they have been pre-trained on a small amount of data (e.g. AfroLM was pretrained on less than 0.8GB despite seeing 23 African languages during pre-training). Larger models such as LaBSE and RemBERT that cover more languages performed better than the smaller models; for example, LaBSE achieved an improvement of over \(2.5\) F1 points over AfriBERTa.
The best results are achieved by AfroXLMR-base/large, with an improvement of over \(4.0\) F1 points over AfriBERTa. The larger variant gave the overall best result due to its size. The AfroXLMR models benefited from being pre-trained on most of the languages we evaluate on. We also tried multilingual text-to-text models, but none of them reach the performance of AfroXLMR-large despite their larger size. We observe the same trend that the adapted mT5 model (i.e. AfriMT5) gave better results compared to mT5, similar to how AfroXLMR gave better results than XLM-R. We found Flan-T5-base to be competitive with AfriMT5 despite seeing few African languages; however, its performance was very low for amh and tir, probably because the model does not support the Ge'ez script.
**Headline-only training.** We compare our results using headline+text (as shown in Table 4) with training on the article headline only. With this shorter content, we find that fine-tuned LLMs still give impressive performance with only headlines, while classical ML methods struggle. Figure 1 shows the result of our comparison. AfroXLMR-base and AfroXLMR-large improve by \(2.3\) and \(1.5\) F1 points respectively if we use headline+text instead of headline only. Classical ML models improve the most when we make use of headline+text instead of headline; MLP, NaiveBayes and XGBoost improve by \(7.4-9.7\) F1 points. Thus, for the remainder of this paper, we make use of headline+text. Appendix B provides the breakdown of the results by language for the comparison of headline and headline+text.
## 6 Zero and Few-shot learning
### Methods
Here, we compare different zero-shot and few-shot methods
1. **Fine-tune** (Fine-tune on a _source language_, and evaluate on a _target language_) using AfroXLMR-base. This is only used in the **zero-shot setting**.
2. **MAD-X 2.0** (Pfeiffer et al., 2020; 2021) - a parameter-efficient approach for cross-lingual transfer leveraging the modularity and portability of adapters (Houlsby et al., 2019). We
Figure 1: **Comparison of article content type used for training news topic classification models. We report the average across all languages when either headline or headline+text is used**
followed the same **zero-shot** setup as Alabi et al. (2022); however, we make use of hau and swa as source languages since they cover all the news topics used by all languages.
3. **PET/iPET** (Schick and Schutze, 2021a;b), also known as (**I**terative) **P**attern-**E**xploiting **T**raining, is a semi-supervised approach that makes use of a few labelled examples and a prompt/pattern to an LLM for few-shot learning. It involves three steps: (1) designing a prompt/pattern and a verbalizer (that maps each label to a word from the LLM vocabulary); (2) training an LLM on each pattern based on the few labelled examples; and (3) distilling the knowledge of the LLM on unlabelled data. Therefore, PET leverages unlabelled examples to improve few-shot learning. iPET, on the other hand, repeats steps 2 and 3 iteratively. We make use of the same set of patterns used for the AGNEWS English dataset provided by the PET/iPET authors; each pattern is a cloze template that wraps the news headline and text around a masked slot whose filler is scored against the verbalizer words (see the sketch after this list).
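As a toy illustration of the pattern-verbalizer pairing for this task (the exact AGNEWS-style patterns from the PET authors are not reproduced here; the wording and names below are ours):

```python
TOPICS = ["business", "entertainment", "health", "politics",
          "religion", "sports", "technology"]

# Verbalizer: each class is mapped to a single word of the LLM vocabulary,
# whose probability at the masked position scores that class.
VERBALIZER = {topic: topic.capitalize() for topic in TOPICS}

def pattern(headline: str, text: str) -> str:
    """One cloze-style pattern: the LLM fills <mask> with a category word."""
    return f"<mask> News: {headline} {text}"

print(pattern("Super Eagles win opener", "The national team beat ..."))
print(VERBALIZER["sports"])   # -> "Sports"
```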
#### 6.2.1 Zero-shot evaluation
In the zero-shot setting, approaches that leverage labelled training data from a source language of the same domain and task (i.e. Fine-tune & MAD-X) give superior results (\(+11\) F1) compared to PET, SetFit, and ChatGPT. ChatGPT gave impressive results, over 15 F1 points better than SetFit and PET, showing the superior capabilities of instruction-tuned LLMs over smaller LLMs. Surprisingly, the results were comparable to the Fine-tune approach for some languages (Amharic, English, Luganda, Oromo, Naija, Somali, isiXhosa, and Yoruba), without leveraging any additional technique apart from prompting the LLM.
In general, it may be advantageous to leverage knowledge from other languages with available training data when no labelled data is available for the target language. Also, we observe that Swahili (swa) achieves better results as a source language than Hausa (hau), especially when transferring to fra (\(+13.8\)), lug (\(+9.0\)), and eng (\(+3.6\)). The reason for the impressive performance from Swahili to Luganda might be that both languages belong to the same Greater Lake Bantu language sub-group, but it is unclear why Hausa gave worse results than Swahili when adapting to English or French. However, with a few examples, the PET and SetFit methods are powerful without leveraging training data and models from other languages.
Table 5: Zero-shot evaluation results per language (weighted F1) and on average for Fine-tune, MAD-X 2.0, PET, SetFit, and ChatGPT. (The body of this table was not recovered.)
#### 6.2.2 Few-shot evaluation
Table 6 shows the results of the few-shot learning approaches. With only 5 shots, we find all the few-shot approaches to be better than the usual fine-tune baselines for most languages. However, as the number of shots increases, the fine-tune baselines achieve results comparable to SetFit and the Cohere API, especially for \(K=20,50\) shots. We found that PET achieved very impressive results even with 5 shots (\(81.9\) F1 on average), matching the performance of SetFit/Cohere API with 50 shots. The results are even better with more shots, i.e. (\(k=10\), \(86.0\) F1), (\(k=20\), \(87.9\) F1), and (\(k=50\), \(89.9\) F1). Surprisingly, with 50 shots, PET gave results competitive with the fully-supervised setting (i.e. fine-tuning on all TRAIN data), which achieved \(92.6\) F1 (see Table 4). It is important to note that PET/iPET make use of additional unlabelled data while SetFit and the Cohere API do not. In general, our results highlight the importance of obtaining a few labelled examples for a new language we are adapting to, even if it is as little as 10 examples per class, which is not time-consuming for native speakers to produce (Lauscher et al., 2020; Hedderich et al., 2020).
## 7 Conclusion
In this paper, we created the largest news topic classification dataset for 16 typologically diverse languages spoken in Africa. We provide an extensive evaluation using both fully-supervised and few-shot learning settings. Furthermore, we study different techniques for adapting prompt-based tuning and non-prompt methods of LLMs to African languages. Our experimental results show the potential of prompt-based few-shot learning approaches like PET/iPET for African languages. In the future, we plan to extend this dataset to more African languages, include bigger multilingual LLMs like BLOOM, mT0 (Muennighoff et al., 2022) and XGLM (Lin et al., 2022) in our evaluation, and extend our analysis to other text classification tasks like sentiment classification (Shode et al., 2022; Muhammad et al., 2023).
#### Acknowledgments
We would like to thank Yuxiang Wu for the suggestions on the few-shot experiments. We are grateful for the feedback from the anonymous reviewers of AfricaNLP that helped improve this draft. David Adelani acknowledges the support of the DeepMind Academic Fellowship programme. This work was supported in part by Oracle Cloud credits and related resources provided by Oracle for Research.
|
2303.13755
|
Sparsifiner: Learning Sparse Instance-Dependent Attention for Efficient
Vision Transformers
|
Vision Transformers (ViT) have shown their competitive advantages
performance-wise compared to convolutional neural networks (CNNs) though they
often come with high computational costs. To this end, previous methods explore
different attention patterns by limiting a fixed number of spatially nearby
tokens to accelerate the ViT's multi-head self-attention (MHSA) operations.
However, such structured attention patterns limit the token-to-token
connections to their spatial relevance, which disregards learned semantic
connections from a full attention mask. In this work, we propose a novel
approach to learn instance-dependent attention patterns, by devising a
lightweight connectivity predictor module to estimate the connectivity score of
each pair of tokens. Intuitively, two tokens have high connectivity scores if
the features are considered relevant either spatially or semantically. As each
token only attends to a small number of other tokens, the binarized
connectivity masks are often very sparse by nature and therefore provide the
opportunity to accelerate the network via sparse computations. Equipped with
the learned unstructured attention pattern, sparse attention ViT (Sparsifiner)
produces a superior Pareto-optimal trade-off between FLOPs and top-1 accuracy
on ImageNet compared to token sparsity. Our method reduces 48% to 69% FLOPs of
MHSA while the accuracy drop is within 0.4%. We also show that combining
attention and token sparsity reduces ViT FLOPs by over 60%.
|
Cong Wei, Brendan Duke, Ruowei Jiang, Parham Aarabi, Graham W. Taylor, Florian Shkurti
|
2023-03-24T02:12:28Z
|
http://arxiv.org/abs/2303.13755v1
|
# Sparsifiner: Learning Sparse Instance-Dependent Attention
###### Abstract
Vision Transformers (ViT) have shown their competitive advantages performance-wise compared to convolutional neural networks (CNNs) though they often come with high computational costs. To this end, previous methods explore different attention patterns by limiting a fixed number of spatially nearby tokens to accelerate the ViT's multi-head self-attention (MHSA) operations. However, such structured attention patterns limit the token-to-token connections to their spatial relevance, which disregards learned semantic connections from a full attention mask. In this work, we propose a novel approach to learn instance-dependent attention patterns, by devising a lightweight connectivity predictor module to estimate the connectivity score of each pair of tokens. Intuitively, two tokens have high connectivity scores if the features are considered relevant either spatially or semantically. As each token only attends to a small number of other tokens, the binarized connectivity masks are often very sparse by nature and therefore provide the opportunity to accelerate the network via sparse computations. Equipped with the learned unstructured attention pattern, sparse attention ViT (Sparsifiner) produces a superior Pareto-optimal trade-off between FLOPs and top-1 accuracy on ImageNet compared to token sparsity. Our method reduces 48% \(\sim\) 69% FLOPs of MHSA while the accuracy drop is within 0.4%. We also show that combining attention and token sparsity reduces ViT FLOPs by over 60%.
## 1 Introduction
Vision Transformers (ViTs) [13] have emerged as a dominant model for fundamental vision tasks such as image classification [13], object detection [3], and semantic segmentation [6, 7]. However, scaling ViTs to a large number of tokens is challenging due to the quadratic computational complexity of multi-head self-attention (MHSA) [35]. This is particularly disadvantageous for large-scale vision tasks because computing on high-resolution and high-dimensionality inputs is desirable. For example, input modalities such as video frames and 3D point clouds have a large number of tokens even for basic use cases. Novel algorithms are needed to continue to scale ViTs to larger, more complex vision tasks.
Prior works have largely taken two approaches to improve the computational efficiency of ViTs: token pruning and using fixed sparse attention patterns in MHSA. Token pruning methods [27] reduce the number of tokens by a fixed ratio called the keep rate, but accuracy degrades quickly when pruning early layers in the network [15, 30, 31]. For example, introducing token pruning into shallower layers of EViT [15] causes a significant \(3.16\)% top-1 accuracy drop on ImageNet [12].
Figure 1: Comparison of Sparsifiner and fixed attention patterns. Twins [10] (a), Swin [22] (b), and Axial [16] (c) address quadratic MHSA complexity using fixed attention patterns, which does not consider the instance-dependent nature of semantic information in images. To address this, we propose Sparsifiner (d): an efficient module for sparse instance-dependent attention pattern prediction.
This issue is due to the restriction of pruning an entire token, which amounts to pruning an entire row and column of the attention matrix at once. One way to alleviate this is to prune individual connectivities of the attention matrix instead of entire tokens. Existing methods that take this attention-matrix connectivity-pruning approach use fixed sparse attention patterns [8]. For example, local and strided fixed attention patterns are used [8, 14], in combination with randomly-initialized connectivities [40]. However, such fixed attention patterns limit the capacity of the self-attention connections to a fixed subset of tokens (Fig. 1). Their fixed nature is less effective than the direct communication between tokens in full self-attention. For example, the Swin transformer [21, 22] has a limited receptive field at shallower layers and needs many layers to model long-range dependencies, and BigBird [40] needs to combine multiple fixed attention patterns to achieve good performance. Rather, it is desirable to design sparse attention algorithms that mimic full self-attention's instance-dependent nature [35], thereby capturing the variable distribution of semantic information in the input image content.
To address the aforementioned challenges, we propose a method called Sparsifiner that learns to compute sparse connectivity patterns over attention that are both instance-dependent and unstructured. The instance-dependent nature of the attention pattern allows each token to use its limited attention budget of nonzero elements more efficiently compared to fixed sparse attention patterns. For example, in attention heads that attend to semantic rather than positional content [35, 36], tokens containing similar semantic information should be considered to have high connectivity scores despite their spatial distance. Similarly, nearby tokens with an irrelevant semantic relation should have lower connectivity scores despite their spatial proximity. Furthermore, Sparsifiner improves attention pattern flexibility compared to token pruning by pruning individual connectivities, instead of entire rows and columns of the attention matrix. This allows Sparsifiner to reduce FLOPs in the early layers of the network without incurring significant top-1 accuracy degradation (Sec. 4). By pruning individual connectivities dependent on image content, Sparsifiner generalizes prior approaches to sparsifying MHSA in ViTs, and in doing so produces a favourable trade-off between accuracy and FLOPs.
Our contributions can be summarized as:
* We propose a novel efficient algorithm called Sparsifiner to predict instance-dependent sparse attention patterns using low-rank connectivity patterns. Our investigation into instance-dependent unstructured sparsity is to the best of our knowledge novel in the context of ViTs.
* We show that such learned unstructured attention sparsity produces a superior Pareto-optimal tradeoff between FLOPs and top-1 accuracy on ImageNet compared to token sparsity. Furthermore, we show that Sparsifiner is complementary to token sparsity methods, and the two approaches can be combined to achieve superior performance-accuracy tradeoffs.
* We propose a knowledge distillation-based approach for training Sparsifiner from pretrained ViTs using a small number of training epochs.
## 2 Related Work
**Efficient Attention --** Developing an efficient attention mechanism for high resolution image encoding is the focus of this work. Efficient attention mechanisms have been widely studied in NLP tasks to model long sequences. They can be categorized as follows: **Low-rank methods** such as Linformer [37] use a low-rank projection to linearize the multi-head attention operation. Linformer [37] replaces the scaled dot product with linear attention that approximates the attention with a low-rank matrix. **Kernelization**, including Performer [9], Linear Transformers [18], and Random Feature Attention [24] use kernels to avoid explicitly computing the attention matrix. **Sparse attention with fixed attention patterns**[8, 10, 23, 25, 16]. This type of technique sparsifies the attention matrix by limiting the field of view to predefined patterns such as local and strided windows. **Similarity and clustering-based methods** including Routing Transformer [29], Reformer [19], and Sinkhorn Transformer [33]. These models measure token relevance by sorting or clustering and then assign tokens to buckets for within-bucket attention. **Neural memory mechanisms** such as Set Transformer [20], Compressive Transformer [26], and Longformer [1]. These use extra global tokens that gather long-range information as a model memory.
**Vision Transformers --** Recent progress has demonstrated that variants of Transformers [35] can also be competitive alternatives to CNNs and achieve promising results on different vision tasks. In addition to image classification, Transformers have also been applied to various vision tasks, including object detection [4, 46, 11, 44], image generation [5, 23], and video processing [42, 45]. Vision Transformer (ViT) [13] splits images into small patches and treats the patches as the input word tokens. ViT shows better performance than CNN-type models given sufficiently extensive training data. DeiT [34] incorporates knowledge distillation techniques into ViT training so that a competitive Transformer can be trained using only ImageNet-1k [12]. LV-ViT [17] further improves the performance of ViT by introducing a new training objective named token labelling. Most of these methods have quadratic complexity of self-attention with respect to the input image size.
**Efficient Vision Transformers --** There is a thrust to
model long sequences of image patches at much higher resolutions. Recent works such as Pyramid Vision Transformer (PVT) [38], Swin-Transformer [22], T2T-ViT [39], and Vision Longformer (ViL) [43] apply transformer layers on different resolution scales by stacking a pyramid of ViTs to form a multi-scale architecture. To achieve linear complexity, Swin-Transformer [22] uses shifted local window attention. Vision Longformer [43] adapts the local attention pattern with the global memory tokens from Longformer [1]. TimeSformer [2] applies multiple attentions, each along a single axis of the input video. Those methods all leverage fixed, predefined attention patterns to reduce the quadratic cost. In contrast, our method generates sparse dynamic attention patterns based on the input content. Another group of works reduces the number of tokens by pruning [15, 27, 32] or by merging tokens [28, 30, 41]. Recent works, DynamicViT [27] and EViT [15], study unstructured token sparsification by gradually dropping tokens during the inference of ViTs [13]. However, the quadratic attention cost remains in early layers, where input tokens cannot be largely sparsified. Our method instead prunes connectivities at every layer, allowing complexity savings at early layers.
## 3 Method
Our proposed method to learn sparse attention patterns, Sparsifiner, consists of a normal ViT [13] as the backbone with sparse attention modules at each layer. Our sparse attention module consists of a connectivity mask predictor and a sparse multi-head self-attention (MHSA) module. In both training and inference, we generate a sparse connectivity mask by restricting the number of connections predicted by the mask predictor according to a hyperparameter budget size \(B\). Following this, a sparse MHSA module is used to perform sparse attention based on the connectivity mask. The sparse MHSA module implements an efficient computation using a sparse element-wise product between the full attention map and the sparse connectivity mask to produce a sparse reconstructed attention map. Then, a sparse-dense attention-value product between the sparse reconstructed attention map and the value matrix produces the output of the sparse MHSA module.
For clarity, in the following we describe MHSA for a single attention head only. In practice, we apply the proposed method to each attention head in a ViT. We concatenate the resulting output values from all attention heads and feed them to a linear layer to produce the input to the next transformer layer [35].
**ViT Architecture and Naive MHSA --** We base our method on the existing ViT model architecture [13] and naive implementation of MHSA [35]. A ViT first tokenizes an input image \(I\in\mathbb{R}^{h\times w\times 3}\) into a set of \(n\) tokens \(X\in\mathbb{R}^{n\times d}\), each with dimension \(d\). Each token consists of a patch embedding, retrieved via linear projection of the non-overlapping image patches, and a positional encoding. The resulting sequence of tokens is then fed into MHSA modules to compute the attention matrix \(A\in\mathbb{R}^{n\times n}\) as the product of query \(Q\in\mathbb{R}^{n\times d}=X^{l}W^{Q}\) and key \(K\in\mathbb{R}^{n\times d}=X^{l}W^{K}\) matrices, where the learned projection matrices \(W^{Q}\in\mathbb{R}^{d\times d}\) and \(W^{K}\in\mathbb{R}^{d\times d}\) compute query and key as projections of the input \(X^{l}\in\mathbb{R}^{n\times d}\) to layer \(l\). Naive MHSA then computes the attention matrix \(A\) as the softmax of outer product of query and key matrices as shown in the left part of Fig. 2.
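For concreteness, a minimal PyTorch-style sketch of naive single-head self-attention as described above is given below; the function and variable names are illustrative, not the authors' implementation.

```python
import torch

def naive_attention(x, w_q, w_k, w_v):
    """Naive single-head self-attention, O(n^2 d) in the number of tokens n.

    x            : (n, d) input tokens X^l
    w_q, w_k, w_v: (d, d) learned projection matrices
    """
    q, k, v = x @ w_q, x @ w_k, x @ w_v            # query, key, value matrices
    d = q.shape[-1]
    a = torch.softmax(q @ k.T / d ** 0.5, dim=-1)  # (n, n) attention matrix A
    return a @ v                                   # aggregated output tokens
```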
**Connectivity Mask Predictor --** To enable instance-dependent and meaningful attention patterns while limiting the number of connections, we train a connectivity mask predictor and achieve sparsity by thresholding. Specifically, we first compute the low-rank approximation \(A^{\text{down}}\in\mathbb{R}^{n\times n_{\text{down}}}\) of the attention matrix \(A\)
\[A^{\text{down}}=\text{softmax}\bigg{(}\frac{Q{(W^{\text{down}}K)}^{\top}}{ \sqrt{\tilde{d}}}\bigg{)}, \tag{1}\]
which we sparsify by thresholding:
\[\tilde{A}^{\text{down}}_{ij}=\begin{cases}A^{\text{down}}_{ij}&\text{if }A^{\text{ down}}_{ij}>\tau\\ 0&\text{otherwise}\end{cases}. \tag{2}\]
In the low-rank attention computation (Eq. 1), we first down-project the token dimension of key matrix \(K\) to a lower dimension \(n_{\text{down}}\) using a learned projection matrix \(W^{\text{down}}\in\mathbb{R}^{n_{\text{down}}\times n}\). Then, a low-rank approximation of the attention matrix is computed from the outer product of query and down-projected key matrices. Note that in the low-rank attention sparsification (Eq. 2), with a sparse matrix representation we need not explicitly store the zeros.
Next, the connectivity mask predictor (Eq. 3) performs a sparse matrix multiplication of a sparse up-projection matrix \(W^{\text{up}}\in\mathbb{R}^{n_{\text{down}}\times n}\) followed by binarization. This produces an up-projected sparse connectivity mask:
\[M=\mathbf{1}\bigg{[}\text{Top-}k(\tilde{A}^{\text{down}}W^{\text{up}})\bigg{]}. \tag{3}\]
Here, \(\tilde{A}^{\text{down}}W^{\text{up}}\) denotes sparse-sparse matrix multiplication, which is efficiently computed. Our key insight is that the post-softmax low-rank attention matrix (Eq. 1) should naturally be sparse. We show an example in Fig. 7.
We apply top-\(k\) on the up-projected sparse attention matrix \(\tilde{A}^{\text{down}}W^{\text{up}}\), which is the attention connectivity score map. \(k\) is set to the budget size \(B\). We discard zero values and binarize to produce a sparse low-rank connectivity mask \(M\in\mathbb{R}^{n\times n}\). We indicate binarization by the indicator function \(\mathbf{1}[\cdot]\) in the connectivity mask predictor (Eq. 3).
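A minimal dense sketch of the connectivity mask predictor (Eqs. 1-3) is shown below; tensors that the paper stores sparsely are kept dense here for readability, and the names (`w_down`, `w_up`, `budget`) are illustrative placeholders.

```python
import torch

def connectivity_mask(q, k, w_down, w_up, tau=0.05, budget=40):
    """Sketch of the connectivity mask predictor (Eqs. 1-3).

    q      : (n, d)       query matrix Q
    k      : (n, d)       key matrix K
    w_down : (n_down, n)  learned down-projection of the token dimension
    w_up   : (n_down, n)  sparse up-projection basis
    Dense tensors are used for clarity; the paper keeps the thresholded
    matrices sparse and uses sparse-sparse products.
    """
    d = q.shape[-1]
    # Eq. 1: low-rank attention, shape (n, n_down)
    a_down = torch.softmax(q @ (w_down @ k).T / d ** 0.5, dim=-1)
    # Eq. 2: sparsify by thresholding
    a_down = torch.where(a_down > tau, a_down, torch.zeros_like(a_down))
    # Eq. 3: up-project to (n, n) connectivity scores, keep top-k, binarize
    scores = a_down @ w_up
    topk = torch.topk(scores, k=budget, dim=-1)
    mask = torch.zeros_like(scores)
    mask.scatter_(-1, topk.indices, (topk.values > 0).float())
    return mask  # binary connectivity mask M, shape (n, n)
```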
**Sparse MHSA --** In Fig. 2, we compare our method to naive MHSA [35] and Linformer [37] in a single head example. In our method, guided by the sparse connectivity mask \(M\), we compute only the nonzero elements of the
sparse full-rank attention matrix \(\tilde{A}\). In order to ensure computational efficiency, we want to have both a sparse up-projection and a sparse low-rank attention matrix. This is equivalent to reconstructing the sparse attention matrix \(\tilde{A}\) as an affine combination over a set of sparse basis vectors using a sparse coefficient vector:
\[\tilde{A}_{ij}=\text{softmax}\bigg{(}\frac{QK^{\top}}{\sqrt{d}}\bigg{)}_{ij}\quad \text{iff}\quad M_{ij}=1. \tag{4}\]
Another way of formulating the sparse full-rank attention matrix is as a sparse element-wise product of the sparse connectivity mask \(M\) with the full-rank attention matrix:
\[\tilde{A}=M\odot_{\text{sparse}}A. \tag{5}\]
Here, \(\odot_{\text{sparse}}\) is the sparse element-wise product operator, which skips multiplications by zero. Therefore, computing the sparse full-rank attention matrix \(\tilde{A}\) (Eq. 4) costs only as many FLOPs as there are nonzero elements in the connectivity mask \(M\). In particular, computing the sparse full-rank attention matrix costs less than the \(O(n^{2}d)\) required by naive MHSA.
Finally, Sparsifiner computes a sparse attention-value product using the sparse full-rank attention matrix \(\tilde{A}\) and the value matrix \(V\):
\[X^{l+1}=\tilde{A}V. \tag{6}\]
By computing the sparse full-rank attention matrix \(\tilde{A}\) (Eq. 4) guided by the sparse connectivity mask, and then computing the sparse attention-value product, we remove the \(O(n^{2}d)\) complexity required by the naive MHSA operation. Instead, the sparse MHSA operation in Sparsifiner performs a number of operations proportional to the number of nonzero elements in the connectivity mask \(M\).
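The following sketch emulates the sparse MHSA of Eqs. 4-6 with dense tensors for readability; an actual implementation would compute only the masked entries with sparse kernels, so the dense full attention below exists purely for illustration.

```python
import torch

def sparse_mhsa(q, k, v, mask):
    """Dense emulation of sparse MHSA (Eqs. 4-6).

    In the optimized version, only entries with mask == 1 are computed
    (Eq. 4) and the attention-value product (Eq. 6) is sparse-dense, so the
    cost scales with the number of nonzeros in the mask instead of n^2.
    """
    d = q.shape[-1]
    a = torch.softmax(q @ k.T / d ** 0.5, dim=-1)  # full attention A (emulation only)
    a_sparse = a * mask                            # Eq. 5: sparse element-wise product
    return a_sparse @ v                            # Eq. 6: output tokens X^{l+1}
```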
**Objective functions --** The training of Sparsifiner includes training the attention connectivity predictor modules and fine-tuning the backbone to make it adapt to sparse attention. We adopt the standard cross-entropy loss:
\[\mathcal{L}_{\text{cls}}=\text{CrossEntropy}(\mathbf{y}^{\text{pred}},\mathbf{ y}) \tag{7}\]
where \(\mathbf{y}^{\text{pred}}\) is the predicted class distribution and \(\mathbf{y}\) is the ground-truth class distribution.
To minimize the influence of the attention sparsification process on performance, we use a pre-trained backbone model as a teacher within a knowledge distillation framework. First, we push the tokens at the last layer toward those of the teacher model, where \(\mathbf{x}\) and \(\mathbf{x}^{\text{teach}}\) are the tokens after the last block of Sparsifiner and of the teacher model, respectively:
\[\mathcal{L}^{\text{token}}_{\text{distill}}=\text{MSE}(\mathbf{x},\mathbf{x}^ {\text{teach}}). \tag{8}\]
Second, we minimize the difference of Sparsifiner and the teacher model's predictions via KL divergence:
\[\mathcal{L}^{\text{cls}}_{\text{distill}}=\text{KL}(\mathbf{y}^{\text{pred}}|| \mathbf{y}^{\text{teach}}). \tag{9}\]
Third, we want the connectivity score map generated by the connectivity mask predictor to be a good low-rank approximation of the teacher attention, which can be viewed as knowledge distillation of the attention map. We minimize the Euclidean distance between them:
\[\mathcal{L}^{\text{attn}}_{\text{distill}}=\text{MSE}(\tilde{A}^{\text{ down}}W^{\text{up}},A^{\text{teach}}). \tag{10}\]
Finally, to enforce sparsity of the up-projection matrix, we use \(L_{2}\) regularization. We tried \(L_{1}\) regularization but found that \(L_{2}\) gives better training convergence with sufficient sparsity in practice:
\[\mathcal{L}_{\text{spa}}=\sum_{i}(w_{i}^{\text{up}})^{2} \tag{11}\]
Figure 2: Single head comparison of the MHSA module for naïve MHSA [35], Linformer [37], and Sparsifiner. **Naïve MHSA** incurs quadratic \(O(n^{2})\) complexity in the number of tokens \(n\). **Linformer** reduces the complexity to linear \(O(nn_{\text{down}})\) by using a projection of the key and value matrices to projected key \(K^{\text{proj}}\in\mathbb{R}^{n_{\text{down}}\times d}\) and value \(V^{\text{proj}}\in\mathbb{R}^{n_{\text{down}}\times d}\) matrices in a low-rank approximation of the attention matrix. **Sparsifiner**’s key insight is to use the low-rank approximation to learn a sparse connectivity mask \(M\in\mathbb{R}^{n\times n}\) and sparse up-projection basis \(W^{\text{up}}\). Using sparse matrix multiplication, Sparsifiner reduces overall MHSA FLOPs relative to Linformer without restricting the attention matrix to be low rank. Note that in the rightmost column only (Sparsifiner), the attention matrix \(A\) is not explicitly constructed, and rather is used to represent sparse attention reconstruction (Eq. 5).
The full training objective combines all objectives:
\[\mathcal{L}=\mathcal{L}_{\text{cls}}+\lambda_{\text{distill}}^{\text{token}} \mathcal{L}_{\text{distill}}^{\text{token}}+\lambda_{\text{distill}}^{\text{ cls}}\mathcal{L}_{\text{distill}}^{\text{cls}}+\lambda_{\text{distill}}^{\text{attn}} \mathcal{L}_{\text{distill}}^{\text{attn}}+\lambda_{\text{spa}}\mathcal{L}_{ \text{spa}} \tag{12}\]
In practice, we set the weight decay to \(0.05\) in the optimizer instead of directly adding \(\lambda_{\text{spa}}\mathcal{L}_{\text{spa}}\) to the objective.
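A sketch of how the individual terms of Eqs. 7-12 might be combined in code is given below; the teacher tensors and loss-weight names are placeholders, and, as stated above, \(\mathcal{L}_{\text{spa}}\) is absorbed into the optimizer's weight decay rather than added explicitly.

```python
import torch.nn.functional as F

def sparsifiner_loss(logits, targets, x, x_teach, y_teach_logits,
                     attn_recon, attn_teach,
                     lam_token=0.5, lam_cls=0.5, lam_attn=0.0):
    """Sketch of the full training objective (Eq. 12)."""
    l_cls = F.cross_entropy(logits, targets)                     # Eq. 7
    l_token = F.mse_loss(x, x_teach)                             # Eq. 8
    l_distill_cls = F.kl_div(F.log_softmax(logits, dim=-1),      # Eq. 9
                             F.softmax(y_teach_logits, dim=-1),
                             reduction="batchmean")
    l_attn = F.mse_loss(attn_recon, attn_teach)                  # Eq. 10
    return l_cls + lam_token * l_token + lam_cls * l_distill_cls + lam_attn * l_attn
```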
## 4 Experiments and Results
**Implementation details --** We train all of the models on the ImageNet dataset [12]. By default, the connectivity mask predictor module is incorporated into every layer of DeiT-S [34] and LV-ViT-S [17]. In all of our experiments, we set the reduced dimension \(n_{\text{down}}\) to \(32\) and \(\tau\) to \(0.05\), which ensures an 87% sparsity ratio of the basis coefficients. The attention budget \(B\) lies in the range \((0,\text{number of tokens}]\) and is directly determined by the attention keep rate in \((0,1]\) as the ceiling of the keep rate multiplied by the total number of tokens.
We follow most of the training techniques used in DeiT-S and LV-ViT-S. We use pre-trained ViT models to initialize the backbone models. To improve the speed of convergence, we propose a two-phase training strategy. In the first phase, we freeze the backbone model and train the connectivity mask predictor module with attention distillation loss and L2 regularization only. Specifically, we set \(\lambda_{\text{distill}}^{\text{token}}=0.0\), \(\lambda_{\text{distill}}^{\text{cls}}=0.0\), \(\lambda_{\text{distill}}^{\text{attn}}=1.0\), and we also apply a threshold of 1e-2 on the basis \(W^{\text{up}}\) to ensure 90% sparsity. We found that this setting helps the connectivity mask predictor learn \(W^{\text{up}}\) quickly, and the loss converges within 5 epochs. In the second phase, we jointly train the backbone model and the connectivity mask predictor module for another 40 epochs, setting \(\lambda_{\text{distill}}^{\text{token}}=0.5\), \(\lambda_{\text{distill}}^{\text{cls}}=0.5\), \(\lambda_{\text{distill}}^{\text{attn}}=0.0\). More details can be found in the supplementary material.
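As an illustration of the two-phase schedule, a pseudo-PyTorch sketch is given below; the `backbone` handle and the loss-weight dictionary are hypothetical names, not the released training script.

```python
# Phase 1 (~5 epochs): freeze the backbone, train only the mask predictors
for p in backbone.parameters():
    p.requires_grad = False
loss_weights = dict(lam_token=0.0, lam_cls=0.0, lam_attn=1.0)
# prune entries of W^up below 1e-2 to keep ~90% sparsity

# Phase 2 (~40 epochs): jointly fine-tune backbone and predictors
for p in backbone.parameters():
    p.requires_grad = True
loss_weights = dict(lam_token=0.5, lam_cls=0.5, lam_attn=0.0)
```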
**Sparse connectivities and attention visualization --** In order to qualitatively investigate the quality of Sparsifiner's sparse attention approximation, we visualize its connectivity mask and sparse reconstructed attention map (Fig. 3). We show the original input image and the connectivity mask of the query patch, where the dark regions represent tokens that are not attended to by the query patch token. For each attention head, Sparsifiner generates a corresponding connectivity mask. We find that the connectivity mask acts as a region proposal mechanism, which allows different attention heads to locate different informative tokens and gather diverse semantic information. Furthermore, we visualize the sparse attention map efficiently generated using the connectivity mask and compare it with the full attention map. We find that the sparse attention map retains all of the highest connectivity values, while discarding lower connectivity values. Hence the visualizations show that Sparsifiner retains the most salient relations for a given token, while discarding noisy background relations.
**Comparison with token pruning --** We train and evaluate Sparsifiner on ImageNet and compare to state-of-the-art token pruning baselines (Tab. 1). Since our research question addresses the problem of reducing MHSA complexity, we report trade-offs between top-1 accuracy on ImageNet and computation in terms of MHSA FLOPs. We compare Sparsifiner against baselines by adjusting two hyperparameters: token and attention keep rate. The token keep rate is the fraction of tokens kept in the network at pre-determined layers where pruning occurs, which we set according to established token pruning baselines [15, 27]. The attention keep rate is the fraction of attention connectivities kept at any given MHSA layer, as determined by the connectivity mask predictor (Eq. 3). Hence, varying the attention keep rate reduces FLOPs without necessitating the removal of tokens as in token pruning, and both techniques can be combined to achieve complementary effects.
To provide a variety of comparisons, we experiment with adding token pruning and Sparsifiner to two common baseline ViT models: DeiT [34] and LV-ViT [17]. On both models, Sparsifiner achieves significant computation savings while maintaining a relatively modest drop in top-1 accuracy.
\begin{table}
\begin{tabular}{l c c c c} \hline \hline Model & Tok. keep rate & Att. keep rate & MHSA (MFLOPs) & Top-1 Acc. (\%) \\ \hline DeiT-S [34] & \(1.0\) & \(1.0\) & \(357.7\) & \(79.8\) \\ \hline EViT [15] & \(0.7\) & \(1.0\) & \(193.1\) (\(-46\)\%) & \(\mathbf{79.5}\) \\ DynamicViT [27] & \(0.7\) & \(1.0\) & \(193.1\) (\(-46\)\%) & \(79.3\) \\
**Sparsif-EVT (ours)** & \(0.7\) & \(0.25\) & \(113.3\) (\(-68\)\%) & \(\mathbf{79.5}\) \\
**Sparsifiner (ours)** & \(0.7\) & \(0.25\) & \(113.3\) (\(-68\)\%) & \(\mathbf{79.5}\) \\ \hline EViT [15] & \(0.5\) & \(1.0\) & \(149.1\) (\(-58\)\%) & \(78.5\) \\ DynamicViT [27] & \(0.5\) & \(1.0\) & \(149.1\) (\(-58\)\%) & \(77.3\) \\
**Sparsif-EVT (ours)** & \(0.5\) & \(0.25\) & \(\mathbf{86.6}\) (\(\mathbf{-76}\)\%) & \(78.7\) \\
**Sparsifiner (ours)** & \(0.5\) & \(0.25\) & \(\mathbf{86.6}\) (\(\mathbf{-76}\)\%) & \(78.4\) \\ \hline LV-ViT-S [17] & \(1.0\) & \(1.0\) & \(476.9\) & \(83.3\) \\ \hline EViT-LV-S [15] & \(0.7\) & \(1.0\) & \(256.0\) (\(-46\)\%) & \(83.0\) \\ EViT-LV-S [15] & \(0.5\) & \(1.0\) & \(198.8\) (\(-58\)\%) & \(82.5\) \\ DynViT-LV-S [27] & \(0.7\) & \(1.0\) & \(256.0\) (\(-46\)\%) & \(83.0\) \\ DynViT-LV-S [27] & \(0.5\) & \(1.0\) & \(198.8\) (\(-58\)\%) & \(82.0\) \\
**Sparsif-LV-S (ours)** & \(1.0\) & \(0.5\) & \(339.7\) (\(-29\)\%) & \(\mathbf{83.4}\) \\
**Sparsif-LV-S (ours)** & \(1.0\) & \(0.25\) & \(221.7\) (\(-54\)\%) & \(83.3\) \\
**Sparsif-LV-S (ours)** & \(1.0\) & \(0.1\) & \(\mathbf{149.5}\) (\(\mathbf{-69}\)\%) & \(82.8\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Comparison with token pruning methods on DeiT-S [34] and LV-ViT-S [17] base models. Token pruning methods such as EViT [15] and DynamicViT [27] prune tokens at fixed layers. We show that token pruning methods combine with Sparsifiner’s sparse attention connectivities to produce a complementary effect. Sparsifiner combined with EViT [15] achieves a 68% reduction in FLOPs compared with the DeiT-S [34] baseline, while maintaining a top-1 accuracy of \(79.5\)%. Hence Sparsifiner achieves the same top-1 accuracy as EViT [15] with significantly better MHSA FLOPs reduction. The input resolution is \(224\times 224\).
For example, LV-ViT-S [17] trained with Sparsifiner at an attention keep rate of \(0.25\) reduces the MHSA FLOPs by \(53.5\)% while maintaining the top-1 accuracy of the baseline LV-ViT-S model on ImageNet. When used in combination with token pruning, Sparsifiner achieves an even greater reduction in MHSA FLOPs while maintaining top-1 accuracy comparable to EViT and superior to DynamicViT.
**Varying MHSA attention budget --** We varied the attention budget of MHSA in order to investigate the tradeoff between MHSA FLOPs and top-1 accuracy for Sparsifiner-S (Tab. 2). The results evaluated on ImageNet show that Sparsifiner-S produces a superior Pareto frontier compared with previous approaches (Fig. 4). In particular, Sparsifiner-S models with attention budgets of \(40\) and above achieved top-1 accuracy within \(0.1\)% of the full-rank DeiT-S [34] model, while using \(58.8\)% fewer FLOPs in MHSA. Furthermore, Sparsifiner-S models with high attention budgets of \(79\) and above achieved superior top-1 accuracy compared with the full-rank DeiT-S [34] model, while using fewer FLOPs in MHSA. This suggests that Sparsifiner's sparse full-rank attention reconstruction mechanism induces a useful regularization effect that improves model generalization.
**Accelerating ViT on high-resolution images --** To show the effectiveness of our method on larger input sizes, we apply it to DeiT-T [34] with \(384\times 384\) resolution (Tab. 3).
\begin{table}
\begin{tabular}{l c c c} \hline \hline \multirow{2}{*}{Att. keep rate} & \multirow{2}{*}{Att. num.} & MHSA & Top-1 \\ & & (MFLOPs) & Acc (\%) \\ \hline \hline
1.0 (DeiT-S [34]) & \(197\) & \(357.7\) & \(79.82\) \\ \hline
0.9 & \(178\) & \(396.8\) & \(80.02\) \\
0.8 & \(158\) & \(360.6\) & \(79.97\) \\ \(0.7\) & \(138\) & \(324.6\) (\(-9\)\%) & \(79.96\) \\
0.6 & \(119\) & \(290.3\) (\(-19\)\%) & \(79.98\) \\
0.5 & \(99\) & \(254.2\) (\(-29\)\%) & \(79.94\) \\
0.4 & \(79\) & \(218.0\) (\(-39\)\%) & \(79.92\) \\
0.3 & \(60\) & \(183.6\) (\(-49\)\%) & \(79.83\) \\
0.2 & \(40\) & \(147.5\) (\(-59\)\%) & \(79.71\) \\
0.1 & \(20\) & \(111.4\) (\(-69\)\%) & \(79.42\) \\
0.05 & \(10\) & \(93.3\) (\(-74\)\%) & \(78.75\) \\
0.01 & \(2\) & \(78.9\) (\(-78\)\%) & \(73.03\) \\ \hline \hline \end{tabular}
\end{table}
Table 2: Effect of attention budget on FLOPs and top-1 accuracy. Here the “keep rate” refers to the number of attention connectivities retained at each layer. All other attention connectivities in the sparse full-rank attention matrix (Eq. 4) are set to zero. When keeping only \(10\) attention connectivities, Sparsifiner produces a top-1 accuracy reduced by only \(1.0\)% compared to the full-attention baseline DeiT-S [34], but with a \(73.9\)% reduction in FLOPs. The input resolution is \(224\times 224\).
Figure 3: Visualization of connectivity mask (b) with sparse (c) and full (d) attention maps for a given query patch (a). In the heatmaps, darker blue indicates lower and brighter yellow indicates higher attention values. Here we visualize the attention maps for only 3 layers and 4 heads of the ViT. For the dog image (top) we visualize layers 3–5, while for the bear image (bottom) we visualize layers 6–8. We observe that in earlier layers the attention map focuses more on positional information such as nearby tokens, while in later layers semantic relations with distant tokens are more important. For each query patch indicated by a yellow square in the input image, Sparsifiner predicts a sparse connectivity mask using a low-rank approximation to full attention. Using the sparse connectivity mask, Sparsifiner efficiently computes a sparse full-rank attention matrix. By comparison with the rightmost full attention, sparse attention retains all of the most salient relations with the given query patch, while discarding redundant or noisy information in the rest of the image.
When dealing with high-resolution images, MHSA becomes increasingly expensive relative to the feedforward operations due to its quadratic complexity in the number of tokens. We reduce the MHSA complexity of the DeiT-T [34] model with \(384\times 384\) input by over \(80\)% with less than a \(1\)% accuracy drop. Our method shows great potential to accelerate ViTs on even higher-resolution images, where the token count dominates the model complexity.
**Low-rank: connectivities or attention? --** Our approach raises a research question: does the utility of the dense low-rank attention matrix come from its use as a connectivity mask? Or is it sufficient to directly use the dense low-rank attention matrix, foregoing the need to reconstruct the sparse full-rank attention matrix, i.e., the Linformer [37] approach? We answered this question by comparing the top-1 accuracy of the two approaches (Tab. 4). In this experiment, Sparsifiner-S and Linformer [37] were trained under identical settings, differing only in the attention approximation method. Sparsifiner-S uses a reconstructed sparse full-rank attention matrix, while Linformer uses the dense low-rank attention matrix directly. In order to give both models similar representational capacity, we set the low-rank dimension of Linformer [37] equal to the sparse attention budget of Sparsifiner-S.
\begin{table}
\begin{tabular}{l c c} \hline \hline Model & MHSA (MFLOPs) & Top-1 Acc (\%) \\ \hline Linformer [37] & \(246.73\) & 77.54 \\
**Sparsifiner-S (ours)** & \(224.04\) & 79.79 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Comparison of sparse full-attention reconstruction with low-rank attention reconstruction. Sparsifiner-S achieves a \(2.1\)% absolute percentage point improvement in top-1 accuracy compared with Linformer [37].
Figure 4: MHSA computation (FLOPs) and top-1 accuracy trade-offs on ImageNet. We compare Sparsifiner with the state-of-the-art token pruning methods. Sparsifiner achieves superior trade-offs compared to the baseline. We also report MHSA FLOPs and top-1 accuracy for Sparsifiner-S under varying attention keep rate.
Figure 5: Low-rank attention (Linformer) and full-rank sparse attention (Sparsifiner) heatmaps. For a given query patch indicated by a yellow square (a), we visualize its low-rank attention map (Linformer) (c) and full-rank sparse attention map (Sparsifiner) (d). Due to discarding the long tail of the attention matrix’s eigenspectrum, low-rank attention produces a coarse attention map. By contrast, full-rank sparse attention bears closer resemblance to full attention (b) with low-salience connectivities discarded.
\begin{table}
\begin{tabular}{l c c c c} \hline \hline Model & Att. keep rate & MHSA (MFLOPs) & Overall (GFLOPs) & Top-1 Acc (\%) \\ \hline DeiT-T & \(1.0\) & \(1534.1\) & \(3.58\) & \(75.45\) \\ \hline
**Sparsifiner-T** & \(0.5\) & \(851.0\) (\(-45\)%) & \(2.89\) (\(-19\)%) & \(75.45\) \\
**Sparsifiner-T** & \(0.25\) & \(452.9\) (\(-70\)%) & \(2.49\) (\(-30\)%) & \(75.35\) \\
**Sparsifiner-T** & \(0.1\) & \(240.5\) (\(-84\)%) & \(2.28\) (\(-36\)%) & \(74.58\) \\ \hline \hline \end{tabular}
\end{table}
Table 3: Results on high-resolution \(384\times 384\) images. We apply Sparsifiner to DeiT-T [34] with resolution 384. We show that Sparsifiner reduces the MHSA complexity of DeiT-T-384 [34] by over \(84\)% with a modest accuracy drop. Since the number of tokens is quadratic in the resolution, Sparsifiner can reduce a larger portion of the MHSA complexity on high-resolution images.
This enforces that the attention-value product of both models' MHSA has the same complexity.
Using the sparse full-rank attention matrix produces a \(2.1\)% absolute percentage point improvement in top-1 accuracy compared with Linformer. This improvement reinforces the superiority of using the low-rank query-key product as a connectivity mask, rather than using the low-rank attention matrix directly. Using the low-rank attention matrix to directly compute the attention-value product with a down-projected value discards the long tail of the full attention matrix's eigenspectrum [37]. In contrast, using the low-rank query-key product as a connectivity mask reduces computation by a different mechanism. By using a low-rank connectivity mask to produce a sparse full-rank attention matrix, the long-tail of the full attention matrix's eigenspectrum is preserved. Based on the significant improvement in top-1 accuracy, we conclude that these long-tail eigenvalues are important for model predictive quality in ViTs.
**Low- and full-rank attention visualization --** In order to further illuminate the qualitative difference between low- and full-rank attention in ViTs, we also present the masked attention heatmap and the full attention heatmap of the query patch (Fig. 5). We show that a connectivity mask can accurately preserve key tokens that are highly related to the query patch and remove the irrelevant ones. As a result, the masked attention heatmap preserves structure and discards noise compared with the full attention heatmap. The visualization results also validate that our Sparsifiner can effectively approximate the full attention ViT.
**Sparse low-rank basis and up-projection matrix visualization --** To demonstrate that the connectivity mask can be computed by sparse-sparse matrix multiplication, we visualize the up-projection matrix \(W^{\text{up}}\) of the first six layers of Sparsifiner (Fig. 6). Because the reconstructed sparse attention matrix is a combination of the up-projection matrix's weights, we refer to it as a sparse basis. We show that Sparsifiner naturally learns a sparse basis of local regions resembling 2D Gaussians. For a given token, the sparse bases corresponding to object locations with salient semantic and/or spatial information will activate. Since the sparse attention reconstruction (Eq. 5) is a product of the sparse low-rank attention matrix with the up-projection matrix, we also visualize the post-softmax low-rank attention matrix. Here we view the low-rank attention matrix as a sparse coefficient of the sparse basis (Fig. 7). Qualitatively, the sparse coefficient also exhibits a high degree of sparsity, further validating the efficiency of the sparse attention reconstruction via sparse-sparse matrix multiplication.
## 5 Conclusions
We presented a novel computationally efficient approach to learn unstructured, instance-dependent attention in ViTs. The development of sparse attention mechanisms such as Sparsifiner opens the door to further research into accelerating sparse ViTs using software-hardware systems approaches. Sparsifiner shows the promise of sparse attention for scaling ViTs to larger and more complex vision tasks. But software-hardware systems approaches are needed to realize its full potential. We hope that our work inspires further research at the intersection of sparse algorithms for ViTs and software-hardware systems approaches to support those sparse algorithms.
Figure 6: Visualization of the up-projection matrix \(W^{\text{up}}\) of the first 6 layers of Sparsifiner-S, which we refer to here as a sparse basis. We visualize \(24\) dimensions of the sparse basis. Dark blue weights indicate low values, which are pruned after training so that only the bright yellow weights are left over. Qualitatively, the sparse basis has a high level of sparsity, making sparse attention reconstruction efficient.
Figure 7: Visualization of the sparse basis coefficient of the \(5\)th attention head over \(12\) layers of Sparsifiner-S. Dark blue regions indicate low values that are pruned before sparse attention reconstruction during inference, leaving only bright yellow coefficients.
|
2310.14102
|
Tau data-driven evaluation of the Hadronic Vacuum Polarization
|
Windows in Euclidean time have become a standard tool for comparing lattice
QCD and data-driven computations of the hadronic vacuum polarization (HVP)
contribution to the muon $g-2$. Here we review our results, obtained using
isospin-rotated $\tau^-\to\pi^-\pi^0\nu_\tau$ data instead of
$e^+e^-\to\pi^+\pi^-$ measurements, and compare them to other approaches.
Consistency of the tau-based and lattice results hints to underestimated
uncertainties in the $e^+e^-$ data. If that is the case, the theory prediction
of the muon $g-2$ would only lie at $\sim 2\sigma$ from its measured value.
|
Pere Masjuan, Alejandro Miranda, Pablo Roig
|
2023-10-21T20:04:50Z
|
http://arxiv.org/abs/2310.14102v2
|
# Tau data-driven evaluation of the Hadronic Vacuum Polarization 1
###### Abstract
Windows in Euclidean time have become a standard tool for comparing lattice QCD and data-driven computations of the hadronic vacuum polarization (HVP) contribution to the muon \(g-2\). Here we review our results, obtained using isospin-rotated \(\tau^{-}\to\pi^{-}\pi^{0}\nu_{\tau}\) data instead of \(e^{+}e^{-}\to\pi^{+}\pi^{-}\) measurements, and compare them to other approaches. Consistency of the tau-based and lattice results hints to underestimated uncertainties in the \(e^{+}e^{-}\) data. If that is the case, the theory prediction of the muon \(g-2\) would only lie at \(\sim 2\sigma\) from its measured value.
keywords: HVP, Data-driven, Semileptonic tau decays, Isospin-Breaking corrections†
Footnote †: journal: Nuclear and Particle Physics Proceedings
## 1 Introduction
The measurement of the anomalous magnetic moment of the muon, \(a_{\mu}=(g_{\mu}-2)/2\), is becoming more and more precise, thanks to the recent FNAL data [1; 2] and the legacy BNL result [3], all agreeing remarkably. The experimental world average is
\[a_{\mu}^{\rm Exp}=116592059(22)\times 10^{-11}\;. \tag{1}\]
The corresponding Standard Model (SM) prediction, at a similar accuracy, is much more challenging, as we briefly review in the following.
The QED [4; 5] and Electroweak [6; 7] contributions are known with uncertainties negligible compared to that in Eq. (1). The problem lies in the hadronic pieces, specifically in the dominant HVP one, with an error that, according to the White Paper (WP) [8], basically doubles the experimental one of Eq. (1).2
Footnote 2: We will not cover here the other hadronic contribution, given by the hadronic light-by-light piece, whose uncertainty is smaller than that of \(a_{\mu}^{\rm Exp}\). Its SM prediction in the WP [8] is based on Refs. [9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22], with a precision matched by the most recent lattice QCD computations [23; 24]. Later developments, mainly for the most difficult contributions, coming from axial-vector mesons and remaining short-distance constraints, are covered in Refs. [25; 26; 27; 28; 29; 30; 31; 32; 33; 34; 35; 36; 37; 38; 39; 40]. See our recent account [41].
Traditionally, \(a_{\mu}^{\rm HVP}\) was obtained from \(\sigma(e^{+}e^{-}\to\mbox{hadrons})\), via a dispersive integral with a kernel peaked at low energies, as the cross-section itself is (apart from resonances) [8]. As a result, \(\sim 73\%\) of \(a_{\mu}^{\rm HVP}\) comes from the \(\pi^{+}\pi^{-}\) contribution [8; 42; 43; 44; 45; 46; 47], with \(\sim 80\%\) of the overall uncertainty stemming from this channel. Then, the disagreement between the \(e^{+}e^{-}\) data-driven \(a_{\mu}^{\rm SM}\) prediction and \(a_{\mu}^{\rm Exp}\) (always \(a_{\mu}^{\rm Exp}>a_{\mu}^{\rm SM}\)) comes mostly from the \(\pi^{+}\pi^{-}\) contribution. In this channel, the long-standing discrepancy between BaBar [48; 49] and KLOE [50; 51; 52; 53; 54] data was still marginally acceptable for the White Paper combination [8] (yielding a \(5.0\sigma\) discrepancy with \(a_{\mu}^{\rm Exp}\)), but the recent CMD-3 measurement [55; 56] has decreased the overall compatibility (see also Refs. [57; 58; 59]). CMD-3 alone would give \(a_{\mu}^{\rm SM}\)
less than one \(\sigma\) away from \(a_{\mu}^{\rm Exp}\).
There is only one lattice QCD computation of \(a_{\mu}^{\rm HVP}\) with competitive precision to the data-based evaluations, the one reported by the BMW collaboration [60], which agrees at less than \(2\sigma\) with \(a_{\mu}^{\rm Exp}\).
In this unclear situation for \(a_{\mu}^{\rm SM}\), it is worth recalling that an alternative data-driven evaluation is possible, replacing \(e^{+}e^{-}\to\pi^{+}\pi^{-}\) by isospin-rotated \(\tau^{-}\to\pi^{-}\pi^{0}\nu_{\tau}\) data, as was pioneered in Ref. [61] and later on pursued in several other analyses [62; 63; 64; 65; 66; 67; 68; 69; 70; 71; 72; 73; 74], including Ref. [75], which computed the isospin-breaking corrections (see also [76]) upon which our work [77], reported here, is mainly based.
## 2 Method and results
The leading order (LO) contribution to \(a_{\mu}^{\rm HVP}\) is traditionally calculated as
\[a_{\mu}^{\rm HVP,LO}=\frac{1}{4\pi^{3}}\int_{s_{\rm thr}}^{\infty}{\rm d}s\,K( s)\,\sigma_{e^{+}e^{-}\to{\rm hadrons}(\gamma)}^{0}(s)\,, \tag{2}\]
where the upper-index zero signals that the bare cross-section 2 is used.
Footnote 2: \(\sigma^{0}\) is obtained from the dressed cross section by applying mass-dependent corrections for vacuum polarization and by adding back the effects of final-state radiation (which should belong to \(a_{\mu}^{\rm HVP,NLO}\) but are included for convenience in Eq. (2) instead).
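For illustration, Eq. (2) can be evaluated numerically from tabulated data as in the following sketch; the kernel \(K(s)\) (defined in the references, not reproduced here) and all names are placeholders, not the code used in the analysis.

```python
import numpy as np

def a_mu_hvp_lo(s_grid, sigma0, kernel):
    """Trapezoidal evaluation of Eq. (2) from tabulated data.

    s_grid : increasing array of s values above s_thr
    sigma0 : bare cross section sigma^0(e+e- -> hadrons(gamma)) on s_grid
    kernel : callable returning the kernel K(s)
    """
    integrand = kernel(s_grid) * sigma0
    return np.trapz(integrand, s_grid) / (4 * np.pi ** 3)
```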
An alternative data-driven evaluation is viable, replacing \(\sigma^{0}(e^{+}e^{-}\to\pi^{+}\pi^{-})\) with the spectrum of the \(\tau^{-}\to\pi^{-}\pi^{0}\nu_{\tau}\) decays (\({\rm d}\Gamma_{\tau^{-}\to\pi^{-}\pi^{0}\nu_{\tau}(\gamma)}/{\rm d}s\)) [78; 79; 80; 81]. In this way,
\[\sigma_{e^{+}e^{-}\to{\rm hadrons}(\gamma)}^{0}(s)=\left[\frac{K_{\sigma}(s)}{K_ {\Gamma}(s)}\frac{{\rm d}\Gamma_{\tau^{-}\to\pi^{-}\pi^{0}\nu_{\tau}(\gamma)} }{{\rm d}s}\frac{R_{\rm IB}(s)}{S_{\rm EW}}\right]\,, \tag{3}\]
where the ratio of \(K\) functions depends on the kinematics and absorbs global constants, and \(S_{\rm EW}\) is the universal short-distance electroweak correction factor [82], \(S_{\rm EW}=1.019(1)\)[61]. The remaining isospin-breaking corrections are encapsulated in [63; 69]
\[R_{\rm IB}(s)=\frac{{\rm FSR}(s)\,\beta_{\pi^{+}\pi^{-}}^{3}}{G_{\rm EM}(s)\, \beta_{\pi^{-}\pi^{0}}^{3}}\Bigg{|}\frac{F_{V}(s)}{f_{+}(s)}\Bigg{|}^{2}. \tag{4}\]
Two contributions to \(R_{\rm IB}(s)\) are easy to evaluate: the ratio of \(\beta\) functions and the final-state radiation term, \({\rm FSR}(s)\). A non-negligible model dependence is presently associated with the neutral-to-charged current form factor ratio \(\left(\frac{F_{V}(s)}{f_{+}(s)}\right)\) and with the long-distance electromagnetic corrections for the di-pion tau decays (\(G_{\rm EM}(s)\)).
The effect of the \(S_{\rm EW}\) correction on \(a_{\mu}\) is \(-103.2\times 10^{-11}\), while the phase space correction yields \(-74.5\times 10^{-11}\), both with negligible uncertainties. The FSR effect amounts to \(+45.5(4.6)\times 10^{-11}\)[75], in agreement with [69].
The ratio of the form factors is challenging and depends on the different \(\rho\) masses and widths according to their charge, as well as on the \(\rho-\omega\) mixing present only in the neutral channel. In Ref. [75] we followed both the proposals of Refs. [64; 69] to compute this correction, resulting in the contributions \(+77.6(24.0)\times 10^{-11}\) and \(+40.9(48.9)\times 10^{-11}\), compatible with these references. This is currently the dominant error of the tau-based prediction of \(a_{\mu}^{\rm HVP,\pi\pi}\) and could be reduced with improved measurements of the \(\rho^{\pm 0}\) pole positions and of the \(\Gamma(\rho\to\pi\pi\gamma)\) channels.
We use the \(G_{\rm EM}(s)\) correction computed [75] in Chiral Perturbation Theory [83; 84; 85] with resonances [86; 87]3, as first done in Ref. [64]. We have:
- Included the operators of the original Lagrangian [86; 87], as in Ref. [64], and first evaluated its associated uncertainty including the subleading terms in the chiral expansion [90; 91] that are fixed by QCD short-distance constraints [90; 91; 92]. We will name this approach \({\cal O}(p^{4})\) (since it includes all operators that -upon resonances integration- contribute to the chiral low-energy constants at this chiral order) from now on, and yields our reference results.
- Included additional operators, which are suppressed at low energies [90; 91] and estimated those unrestricted by perturbative QCD plus phenomenology by chiral counting [90; 91; 92]. In this last case there are so many free couplings that the uncertainties are artificially large, with a shift in the central value that seems to be overestimated. This approximation will be called \({\cal O}(p^{6})\) in the following and is given for completeness.
Our results for the \(G_{\rm EM}(s)\) correction to \(a_{\mu}\) in these two cases are \(\left(-15.9^{+.5}_{-16.0}\right)\times 10^{-11}\) (consistent with Refs. [64; 69; 93]) and \((-76\pm 46)\times 10^{-11}\), respectively.
Fig. 1 displays the di-pion contribution to \(a_{\mu}^{\rm HVP,\,LO}\) in the \(\rho\) resonance region obtained using either \(\sigma(e^{+}e^{-}\to{\rm hadrons})\) (top part of the plot, with mean in yellow) or the \(\tau^{-}\to\pi^{-}\pi^{0}\nu_{\tau}\) spectrum (bottom of the plot, with mean in green). A larger value of \(\sim 10\times 10^{-10}\) is obtained with tau data. The CMD-3 data point is a clear outlier among the \(e^{+}e^{-}\) results (in excellent agreement with the tau-based ones) and so it was not averaged with the rest.
The tau-based evaluation of \(a_{\mu}^{\rm HVP,\,LO}\) yields \(\left(705.7^{+4.1}_{-4.0}\right)\times 10^{-10}\)\(\left(\left(700.7^{+6.1}_{-5.2}\right)\times 10^{-10}\) including subleading operators) [75], in good agreement with the BMW Collaboration lattice QCD result (\(\left(707.5\pm 5.5\right)\times 10^{-10}\)) and with \(a_{\mu}^{\rm Exp}\) (as well as with the CMD-3-based prediction). It is then interesting to scrutinize further this accord in different energy regions or, as it has become conventional in the lattice computations, using different windows in Euclidean time.
For this, we will employ the weight functions in center-of-mass energy \(\tilde{\Theta}(s)\)[94], related to those in Euclidean time by [95]
\[\Theta_{SD}(t) = 1-\Theta(t,t_{0},\Delta), \tag{5}\]
\[\Theta_{win}(t) = \Theta(t,t_{0},\Delta)-\Theta(t,t_{1},\Delta), \tag{6}\]
\[\Theta_{LD}(t) = \Theta(t,t_{1},\Delta), \tag{7}\]
\[\Theta(t,t^{\prime},\Delta) = \frac{1}{2}\left(1+\tanh\frac{t-t^{\prime}}{\Delta}\right), \tag{8}\]
defining the short-distance (\(SD\)), intermediate (\(win\)) and long-distance (\(LD\)) windows (\(t_{0}=0.4\) fm, \(t_{1}=1.0\) fm, \(\Delta=0.15\) fm). We note that \(LD\) dominates up to \(\sqrt{s}\sim 0.9\) GeV, \(SD\) from \(\sqrt{s}\sim 2.3\) GeV on, and \(win\) in between (we will also use \(int\) for this one), see Fig. 1 in ref. [94].
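For reference, the window functions of Eqs. (5)-(8) can be coded directly as functions of the Euclidean time \(t\) (in fm); note that the corresponding weight functions \(\tilde{\Theta}(s)\) in center-of-mass energy, which are the ones actually used in the data-driven integrals, require the additional transformation of Ref. [94] and are not reproduced in this sketch.

```python
import numpy as np

T0, T1, DELTA = 0.4, 1.0, 0.15  # fm, as in the text

def theta(t, t_prime, delta=DELTA):
    """Smoothed step function, Eq. (8)."""
    return 0.5 * (1.0 + np.tanh((t - t_prime) / delta))

def theta_sd(t):   # short-distance window, Eq. (5)
    return 1.0 - theta(t, T0)

def theta_win(t):  # intermediate window, Eq. (6)
    return theta(t, T0) - theta(t, T1)

def theta_ld(t):   # long-distance window, Eq. (7)
    return theta(t, T1)
```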
We have evaluated the di-pion tau-based [78; 79; 80; 81] contribution to \(a_{\mu}^{\rm HVP,LO}\) using the windows explained above, with the quoted values of \(t_{0,1}\) and \(\Delta\). In Ref. [77] we provide tables separating the corrections from each source of isospin-breaking in Eqs. (3) and (4) and showing the results for every experiment separately. We plot the results for the three different window contributions to \(a_{\mu}^{\rm HVP}\) (note that they scale as \(\sim 1:10:25\)) in Figs. 2 and 3. These graphs nicely display the consistency among the different tau measurements. In the \(SD\) and \(int\) windows, \(e^{+}e^{-}\) data-based results (from Ref. [94]) and tau values disagree markedly. Only in the last plot of Fig. 3 (for the \(LD\) window), the big errors on the \(G_{\rm EM}\) correction yield compatibility at one \(\sigma\) between both (which is not the case for the reference \({\cal O}(p^{4})\) result in the last plot of Fig. 2).
Fig. 4 magnifies the comparison between the IB-corrected \(\tau^{-}\to\pi^{-}\pi^{0}\nu_{\tau}\) and the \(e^{+}e^{-}\to\pi^{+}\pi^{-}\) spectral functions using the ISR measurements from BABAR [49] and KLOE [53] (top panel) and the energy-scan measurements from CMD-3 [55] (bottom panel). Colored bands show the weighted average of the uncertainties coming from the data sets in each figure. Although it may seem that an enhanced form factors' ratio correction could improve agreement between tau and \(e^{+}e^{-}\) CMD-3 data (see e.g. figs. 16 and 17 in ref. [75]), further studies are needed to fully understand this (even more so in the comparison with BaBar and KLOE data).
We cannot directly contrast our tau-based results with the lattice outcomes. For this, we need to supplement ours with \(e^{+}e^{-}\) data where needed. We have done this in two ways, to estimate the corresponding uncertainty. First, we have subtracted the contribution from the \(2\pi\) channel below \(1.0\) GeV from the values reported in Table 1 of Ref. [94] ('\(<1\) GeV' stands for this procedure) and replaced it by our corresponding mean values. Second, we have rescaled the contributions from the \(2\pi\) channel using the full evaluation of \(a_{\mu}^{\rm HVP,\,LO}[\pi\pi,e^{+}e^{-}]\) in Refs. [46; 47], removed it from the total contribution and substituted it by our values (without adding any symbol to represent this procedure).
Our results are displayed in Fig. 5 for the intermediate window, where the blue band shows the weighted average of the lattice results, \(a_{\mu}^{int}=235.8(6)\cdot 10^{-10}\), excluding those from RBC/UKQCD 2018 [95] and ETMC 2021 [96] collaborations. Tau-data based contributions in the intermediate window are significantly closer to the lattice QCD values than to the \(e^{+}e^{-}\) ones. Thus, the \(\sim 4.3\sigma\) discrepancy between the \(e^{+}e^{-}\) data-driven and lattice evaluations shrinks to \(\sim 1.5\sigma\) using \(\tau\) data for the \(2\pi\) channel. There is only one lattice result for the short-distance window [98] which seems to agree with both data-driven HVP evaluations (although more closely with the tau-based). See Table 1 and Fig. 5.
Figure 3: Analog to Fig. 2 but at \(\mathcal{O}(p^{6})\).
Figure 2: Window quantities (\(SD\) top, _int_ medium and \(LD\) bottom) for the \(2\pi\) contribution below 1.0 GeV to \(a_{\mu}^{\rm HVP}\) at \(\mathcal{O}(p^{4})\). The blue region shows the experimental average from \(\tau\) data. The \(e^{+}e^{-}\) number is taken from Ref. [94].
the precision for \(a_{\mu}^{\rm HVPLO}\) and thus for the whole \(a_{\mu}^{\rm SM}\) in this data-driven way. At the moment there is only one lattice QCD evaluation (by BMW) at a competitive precision in the whole range, which agrees with CMD-3 and \(a_{\mu}^{\rm Exp}\).
In order to understand this situation, comparisons between lattice QCD and \(e^{+}e^{-}\)-based results for \(a_{\mu}^{\rm HVP}\) have become standard by using windows in Euclidean time.
We recall that alternative data-driven evaluations are possible and worthwhile, using di-pion tau decay data instead of the corresponding \(e^{+}e^{-}\) measurements (accounting for the required isospin-breaking corrections, including structure-dependent ones). These were extremely useful before the very precise KLOE and BaBar measurements and have traditionally been closer to \(a_{\mu}^{\rm Exp}\). After reviewing our tau-based analysis [75] for the \(\pi\pi\) contribution to \(a_{\mu}^{\rm HVP,LO}\), we summarize our recent work [77]. We verify that our agreement with lattice results extends from the whole integrated effect to the three considered windows, which reinforces agreement of \(a_{\mu}^{\rm SM}\) with \(a_{\mu}^{\rm Exp}\) at \(\lesssim 2\sigma\). Further work seems needed to reach compatibility between \(e^{+}e^{-}\to\pi^{+}\pi^{-}\) measurements by the different experiments. Our results can also be valuable for lattice efforts [101] addressing the computation of the relevant IB-corrections needed to use di-pion tau decay data as illustrated here.
## Acknowledgements
A. Miranda thanks the organizing committee of QCD23 for this interesting conference. P. M. has been supported by the European Union's Horizon 2020 Research and Innovation Programme under grant 824093
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline & \multicolumn{4}{c|}{\(a_{\mu}^{\rm HVPLO}\)} \\ \hline & SD & int & LD & Total \\ \hline \(\tau\)-data \({\cal O}(p^{4})\), \(\leq\) 1 GeV & 69.0(5.5) & 234.4(1.2) & 403.6(3.3) & 707.0(5.0) \\ \(\tau\)-data \({\cal O}(p^{6})\), \(\leq\) 1 GeV & 68.9(5) & 233.7(1.4) & 399.5(4.5) & 702.2(4.5) \\ \hline \(\tau\)-data \({\cal O}(p^{4})\) & 70.0(5.5) & 237.8(1.2) & 399.7(2.3) & 707.5(4.1) \\ \(\tau\)-data \({\cal O}(p^{6})\) & 69.9(5) & 237.0(1.5) & 395.6(4.5) & 702.4(4.5) \\ \hline RBC/UKQCD 2018 [95] & – & 231.9(1.5) & – & 715.4(18.7) \\ ETMC 2021 [96] & – & 231.7(2.8) & – & – \\ BMW 2020 [60] & – & 236.7(1.4) & – & 707.5(5.5) \\ Mainz/CLS 2022 [97] & – & 237.30(1.46) & – & – \\ ETMC 2022 [98] & 69.33(29) & 235.0(1.1) & – & – \\ RBC/UKQCD 2023 [99] & – & 235.56(82) & – & – \\ \hline WP [8] & – & – & – & 693.1(4.0) \\ BMW 2020/KNT [43; 60] & – & 229.7(1.3) & – & – \\ Colangelo et al. 2022 [94] & 68.4(5) & 229.4(1.4) & 395.1(2.4) & 693.0(3.9) \\ Davier et al. 2023 [100] & – & 229.2(1.4) & – & 694.0(4.0) \\ \hline \end{tabular}
\end{table}
Table 1: Window quantities for \(a_{\mu}^{\rm HVPLO}\) in units of \(10^{-10}\). The first and second pairs of rows differ in the way tau data is complemented with \(e^{+}e^{-}\) measurements (as explained in the main text). The rows 5-10 are the lattice results [95; 96; 97; 98; 99]. The last three rows are the evaluations obtained using \(e^{+}e^{-}\) data, and the WP number [8] is shown for reference. See Fig. 11 in Ref. [99] for more details.
Figure 4: Comparison between the \(\tau\) (after IB corrections) and the \(e^{+}e^{-}\to\pi^{+}\pi^{-}\) spectral functions using the ISR measurements from BABAR [49] and KLOE [53] (top) and the energy-scan measurements from CMD-3 [55] (bottom).
Figure 5: Comparison of the total intermediate window contribution to \(a_{\mu}^{\rm HVP,LO}\) according to lattice QCD, \(e^{+}e^{-}\) and \(\tau\) data-driven evaluations. The blue band corresponds to the weighted average of the lattice results excluding RBC/UKQCD 2018 [95] and ETMC 2021 [96].
(H2020-INFRAIA- 2018-1), the Ministerio de Ciencia e Innovacion under grant PID2020-112965GB-I00, and by the Secretaria d'Universitats i Recerca del Departament d'Empresa i Coneixement de la Generalitat de Catalunya under grant 2021 SGR 00649. IFAE is partially funded by the CERCA program of the Generalitat de Catalunya. A. M. is also supported by MICINN with funding from European Union NextGenerationEU (PRTR-C17.I1) and by Generalitat de Catalunya. P. R. is funded by Conahcyt and Cinvestav.
|
2301.02021
|
Dynamic Sizing of Frequency Control Ancillary Service Requirements for a
Philippine Grid
|
Sizing frequency control ancillary service (FCAS) requirements is crucial for
the reliable operation of power systems amid a continuous influx of variable
renewable energy (VRE) generation. Reserve sizing is especially pertinent for
the Philippine grids due to an expected transition to new FCAS classifications
established by its Grid Code. In lieu of the existing deterministic
formulation, this work proposes a dynamic approach for sizing secondary and
tertiary reserves that accounts for the stochasticity and variability of load
demand and VRE. We propose a method where historical power imbalances were
calculated and clustered according to the time and day of week they occurred.
The conditional probabilities of forecast and noise errors were characterized
using kernel density estimation. Recursive convolution was performed to obtain
the total reserve requirement probability distribution. The method was tested
on Visayas grid's historical system operation data and used target reliability
levels on the error distributions to size upward and downward reserve needs.
Finally, the methodology was extended to demonstrate through a numerical
experiment that sizing FCAS at temporal resolutions higher than one-hour, e.g.,
five-minute, provides the benefit of shrinking the required capacities by as
much as 86.2\% compared to current deterministic FCAS sizing.
|
Elgar John S. del Rosario, Jordan Rel C. Orillaza
|
2023-01-05T11:38:55Z
|
http://arxiv.org/abs/2301.02021v1
|
# Dynamic Sizing of Frequency Control Ancillary Service Requirements for a Philippine Grid
###### Abstract
Sizing frequency control ancillary service (FCAS) requirements is crucial for the reliable operation of power systems amid a continuous influx of variable renewable energy (VRE) generation. Reserve sizing is especially pertinent for the Philippine grids due to an expected transition to new FCAS classifications established by its Grid Code. In lieu of the existing deterministic formulation, this work proposes a dynamic approach for sizing secondary and tertiary reserves that accounts for the stochasticity and variability of load demand and VRE. We propose a method where historical power imbalances were calculated and clustered according to the time and day of week they occurred. The conditional probabilities of forecast and noise errors were characterized using kernel density estimation. Recursive convolution was performed to obtain the total reserve requirement probability distribution. The method was tested on Visayas grid's historical system operation data and used target reliability levels on the error distributions to size upward and downward reserve needs. Finally, the methodology was extended to demonstrate through a numerical experiment that sizing FCAS at temporal resolutions higher than one-hour, e.g., five-minute, provides the benefit of shrinking the required capacities by as much as 86.2% compared to current deterministic FCAS sizing.
frequency control ancillary services, dynamic sizing, secondary reserves, tertiary reserves, kernel density estimation, convolution, power system reliability
## I Introduction
Continuous and precise frequency control is critical in maintaining power system reliability. Frequency deviations are counteracted by deploying active power control reserves, also known as frequency control ancillary services (FCAS) [1]. The adequate sizing of reserves is crucial as, on the one hand, undersizing can have serious negative impacts, including load shedding, renewable energy curtailment, equipment damage, and in the worst scenario, blackouts. Reserve procurement costs, on the other hand, will rise if reserves are oversized. The Philippine electric power industry is also about to activate a reserve market, making FCAS sizing all the more critical [2].
Owing to the increasing grid integration of variable renewable energy (VRE) sources, such as wind and solar PV generation, quantifying reserve requirements considering VREs has received considerable attention in the last decade [3]. While VRE growth promotes the development of a clean and sustainable grid, VREs are also nonsynchronous, highly fluctuating and difficult to predict, and could induce higher levels of FCAS requirements [4]. Dynamic methods of sizing FCAS requirements due to the increasing integration of VREs using kernel density estimation [5], k-nearest neighbors and k-means clustering [6, 7], and dynamic Bayesian belief networks [8] have been proposed in the literature. In contrast with deterministic approaches that are typically based on rules of thumb, these dynamic probabilistic approaches vary reserve requirements depending on expected system conditions, considering the severity and probabilities of a range of potential power imbalances.
In the Philippines, reserves are still sized based on deterministic rules. Presently the Philippine system operator sizes three types of FCAS on a day-ahead basis: regulating, contingency, and dispatchable reserves. Regulating reserves are set at 4% of the hourly forecast demand in each hourly dispatch interval, which is based on intra-hour load variations in 2010 [9]. Meanwhile, contingency and dispatchable reserve levels are determined by the highest and second highest generating power outputs expected to be online, in accordance with the N-1 and N-1-1 criteria, respectively.
The current reserve classifications lack requirements for primary frequency response, often resulting in the activation of automatic load dropping (ALD) schemes as a first line of defense against large disturbances [10]. To address this gap, a revision of the Philippine Grid Code of 2016 prescribes new FCAS classifications according to the hierarchy of frequency control. In the new FCAS classification, primary reserves must provide primary frequency response via governor control, secondary reserves must operate to restore the system frequency to 60 Hz either under automatic generation control (AGC) or manually upon the command of the system operator, and tertiary reserves aim to replenish depleted secondary reserves [10]. This development in FCAS framework necessitates a new approach in sizing reserves.
Furthermore, current practices have yet to determine the incremental reserve needs introduced by the growing VRE penetration. As a country indigenously rich in VRE resources, the Philippines targets to achieve 35% renewable energy penetration by 2030 and 50% by 2040 [11]. In NREL's Greening the Grid report, high renewable energy scenarios for the Luzon-Visayas system of the Philippines by 2030 were investigated and the issue of adequate reserve provision was raised [12]. However, the said study retained the assumption of deterministic reserve rules that govern system operations until now, due to the absence of detailed reserve requirements.
The increasingly stochastic nature of system operations demands dynamic, data-driven and probabilistic approaches that consider the additional variability and uncertainty introduced by VREs. Considering this fact, this work proposes a dynamic probabilistic approach as an appropriate methodology for sizing reserves for frequency restoration, particularly secondary and tertiary reserves, in the context of the Philippines. The method is based on clustering imbalances according to the hour and day of week in which they occurred, combined with kernel density estimation.
Additionally, a recent enhancement of the Philippine Wholesale Electricity Spot Market (WESM) shifts the energy markets in the Luzon and Visayas grids from one-hour to five-minute intervals [13]. While reserve schedules continue to be based on hourly requirements, the Department of Energy tasks the system operator to conduct studies on the determination of required reserves vis-a-vis the 5-minute dispatch interval implementation [2]. This work further extends the aforementioned dynamic reserve sizing methodology to perform numerical experiments for subhourly sizing.
In Section II, the proposed methodology is discussed. Section III provides the results and discussion of scenario simulations performed on real Philippine grid operations data. Section IV concludes with recommendations for further research.
## II Proposed FCAS Sizing Methodology
An overview of the proposed reserve sizing process is shown in Fig. 1. Probability density estimation of various drivers for power imbalance is done based on historical values of power imbalances, which are mainly due to the discrepancy between the forecast and actual values of demand, wind and solar generation, as well as forced (unplanned) outages of power plants or interconnectors. The estimation process must be continuously done to incorporate new data and make the FCAS sizing process more accurate. On top of the estimated probability distribution functions (PDFs), load and VRE forecasts are employed as basis for sizing. The status quo reserve sizing is done on the day-ahead for sizing reserves for each hour of the next day, but sizing and scheduling reserves closer to real time (\(n\)-hours ahead) may prove to benefit forecast accuracy, especially when reserves are sized for subhourly intervals. The result of FCAS sizing shall then be the basis of FCAS procurement and the separate upward and downward capacities reserved for each intra-day interval.
An overview of the methodology for reserve sizing is shown in Fig. 2. The necessary data are collected for the calculation of historical imbalances. The historical imbalances are used to estimate the power imbalance PDFs. The convolution of these power imbalance PDFs produce PDFs for total and secondary reserve needs. The reserve requirements are determined by applying a target reliability level, referring to the acceptable percentage of time that there could be a shortage of reserves. The tertiary reserve requirement is taken as the difference between the derived total and secondary reserves. The following sections expound on these steps.
### _Estimation of Error Probability Distributions_
#### II-A1 Forecast and Noise Errors
Historical time series of load, wind and solar PV forecast and actual values are used to calculate forecast and noise errors. The load forecast error is the deviation of the mean value of load in a time interval from the forecast value, whereas the load noise error (or "load oscillation") is the deviation of the actual value from the interval mean [17]. The calculation of forecast and noise errors is depicted in Figs. 3(a) and 3(b).
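For concreteness, this decomposition can be sketched as follows, assuming hourly forecast series and per-minute actual measurements; the function and variable names are illustrative assumptions, not taken from the paper.

```python
import pandas as pd

def forecast_and_noise_errors(actual_per_min: pd.Series, forecast_hourly: pd.Series):
    """Split imbalances into forecast errors (interval mean minus forecast)
    and noise errors (per-minute actual minus interval mean)."""
    interval_mean = actual_per_min.resample("1h").mean()
    forecast_error = interval_mean - forecast_hourly            # one value per hour [MW]
    noise_error = actual_per_min - interval_mean.reindex(
        actual_per_min.index, method="ffill")                   # one value per minute [MW]
    return forecast_error, noise_error
```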
To avoid making strong assumptions on the shape of the underlying probability distributions, this work uses kernel density estimation to estimate the PDFs of wind, solar and load forecast and noise errors as in [6]. Kernel density estimation has been extensively used to estimate such error distributions and wind forecast errors in the literature [14].
Fig. 1: Proposed framework for dynamic FCAS sizing in the Philippines
Fig. 2: Overview of Proposed Methodology
Fig. 3: Calculation of Forecast and Noise Errors
Given the forecast and noise error values, \(\epsilon_{i,t}\), the probability distribution function \(f_{i}(u)\) is given by:
\[f_{i}(u)=\frac{1}{Th_{i}}\sum_{t=1}^{T}K\left(\frac{u-\epsilon_{i,t}}{h_{i}}\right) \tag{1}\]
where \(K(\cdot)\) is the _kernel smoothing function_; \(u\), the evaluation point for function \(f_{i}\); \(i\in[1,24\times 7]\) denotes an index to a cluster, which, in this work, is the hour and day of week of the error; \(t\), an index for time step; and \(T\), the maximum time step. The kernel smoothing function is assumed to be a normal probability density function [5, 6]. The optimal bandwidth is proportional to the standard deviation of the cluster \(i\), \(\sigma_{i}\), and can be calculated from the empirical data as [15]:
\[h_{i}=\left(\frac{4}{3T}\right)^{0.2}\sigma_{i}. \tag{2}\]
From kernel density estimation, we get six sets of PDFs, representing forecast and noise errors for load, wind and solar PV generation for each hour of the week: \(f_{y}^{x}\) where \(x\in\{\text{``forecast'', ``noise''}\}\) and \(y\in\{\text{``load'', ``wind'', ``solar''}\}\).
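As an illustration, Eqs. (1)-(2) and the hour-of-week clustering could be prototyped as in the following sketch; the variable names and the pandas-based grouping are assumptions made for illustration.

```python
import numpy as np

def kde_pdf(errors, grid):
    """Gaussian kernel density estimate of Eq. (1) with the bandwidth of Eq. (2)."""
    errors = np.asarray(errors, dtype=float)
    grid = np.asarray(grid, dtype=float)
    T = errors.size
    h = (4.0 / (3.0 * T)) ** 0.2 * errors.std(ddof=1)            # Eq. (2)
    z = (grid[:, None] - errors[None, :]) / h
    return np.exp(-0.5 * z**2).sum(axis=1) / (T * h * np.sqrt(2.0 * np.pi))

def cluster_pdfs(error_series, grid):
    """One PDF per hour-of-week cluster; `error_series` is a pandas Series of
    historical errors indexed by timestamps."""
    keys = [error_series.index.dayofweek, error_series.index.hour]
    return {key: kde_pdf(group.values, grid)
            for key, group in error_series.groupby(keys)}
```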
#### II-A2 Power Plant Outages
The forced outage rate, \(FOR\), of a single generating plant or unit is calculated from the number of hours it _was_ on forced outage, \(N_{outage}\), and the total number of hours in the period considered, \(N_{hours}\):
\[FOR=\frac{N_{outage}}{N_{hours}}. \tag{3}\]
The forced outage probability, \(FOP\), reflects the probability of generation capacity _going_ out, instead of _being_ out [16]:
\[FOP=\frac{FOR}{MTTR} \tag{4}\]
where \(MTTR\) refers to the mean time to repair. In determining the total capacity outage distribution, the FOP of each generating plant or unit is first calculated. The error distribution of a single plant is a piecewise-defined function
\[f_{j}^{\text{outage}}(X)=\begin{cases}1-FOP_{j},&\text{if }X=0\\ FOP_{j},&\text{if }X=P_{j}^{\text{rated}}\end{cases} \tag{5}\]
where \(j\) is an index for a generating station and \(P_{j}^{\text{rated}}\) is the generating capacity of station \(j\). Then the error distributions of all power stations are recursively convolved to create a single error distribution [5], the total capacity outage distribution, \(f_{\text{total}}^{\text{outage}}\):
\[f_{\text{total}}^{\text{outage}}=f_{1}^{\text{outage}}*f_{2}^{\text{outage}}*...*f_{N_{\text{gen}}-1}^{\text{outage}}*f_{N_{\text{gen}}}^{\text{outage}}. \tag{6}\]
where \(N_{\text{gen}}\) is the total number of generating stations and \(*\) is a shorthand operator for convolution. Convolution is defined for two random variables **X** and **Y** and their sum \(\textbf{Z}=\textbf{X}+\textbf{Y}\), with their respective PDFs \(f_{X},f_{Y},f_{Z}\):
\[f_{Z}=f_{X}*f_{Y} \tag{7a}\] \[f_{Z}(z)=\sum_{k=-\infty}^{\infty}f_{X}(k)\cdot f_{Y}(z-k) \tag{7b}\]
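A rough numerical sketch of Eqs. (3)-(6) follows, assuming outage capacities are discretised on a fixed MW grid; the representation of plants as (FOP, rated capacity) pairs is an assumption of the sketch.

```python
import numpy as np

def forced_outage_probability(n_outage_hours, n_hours, mttr_hours):
    """Eqs. (3)-(4): forced outage rate divided by the mean time to repair."""
    return (n_outage_hours / n_hours) / mttr_hours

def plant_outage_pmf(fop, rated_mw, step_mw, n_bins):
    """Eq. (5) on a discrete outage grid: with probability FOP the full rated
    capacity is on outage, otherwise no capacity is on outage."""
    pmf = np.zeros(n_bins)
    pmf[0] = 1.0 - fop
    pmf[int(round(rated_mw / step_mw))] = fop
    return pmf

def total_outage_pmf(plants, step_mw=1.0):
    """Eq. (6): recursive convolution over all plants; `plants` is a list of
    (FOP, rated capacity in MW) pairs."""
    n_bins = int(round(sum(rated for _, rated in plants) / step_mw)) + 1
    total = np.zeros(n_bins)
    total[0] = 1.0                                   # start from "no capacity out"
    for fop, rated in plants:
        total = np.convolve(total, plant_outage_pmf(fop, rated, step_mw, n_bins))[:n_bins]
    return total                                     # PMF of total MW on forced outage
```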
### _Determination of Reserve Requirements_
#### II-B1 Convolution of Error PDFs
As each power imbalance driver is treated as a random variable, convolution is employed to determine the PDF of the sum of these imbalance drivers. The total reserve PDF is derived from convolution of the PDFs of all the defined power imbalance drivers:
\[\begin{split} f_{\text{reserve}}^{\text{total}}=f_{\text{load} }^{\text{forecast}}*f_{\text{load}}^{\text{noise}}*f_{\text{wind}}^{\text{ forecast}}*f_{\text{wind}}^{\text{noise}}\\ *f_{\text{solar}}^{\text{forecast}}*f_{\text{solar}}^{\text{noise}}*f_{ \text{total}}^{\text{outage}}.\end{split} \tag{8}\]
Figure 4 depicts the convolution of seven individual PDFs. Next, the secondary reserve PDF is the convolution of the PDFs of only the power imbalance drivers that need to be handled by secondary control, namely noise errors (fast fluctuations of demand and VRE) and capacity outages [17]:
\[f_{\text{reserve}}^{\text{secondary}}=f_{\text{load}}^{\text{noise}}*f_{\text{wind}}^{\text{noise}}*f_{\text{solar}}^{\text{noise}}*f_{\text{total}}^{\text{outage}}. \tag{9}\]
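Assuming all error PDFs have been sampled on a common, signed MW grid with step `d_mw`, the convolutions of Eqs. (7)-(9) reduce to discrete convolutions; index bookkeeping for the negative part of the grid is glossed over in this sketch, and the variable names are illustrative.

```python
import numpy as np
from functools import reduce

def convolve_pdfs(pdfs, d_mw):
    """Discrete analogue of Eq. (7) for several densities sampled on the same grid."""
    pmfs = [np.asarray(p) * d_mw for p in pdfs]      # density values -> probability masses
    return reduce(np.convolve, pmfs) / d_mw          # back to a density on a widened grid

# Eq. (8): all seven imbalance drivers; Eq. (9): only noise errors and outages
# f_total     = convolve_pdfs([f_load_fc, f_load_noise, f_wind_fc, f_wind_noise,
#                              f_solar_fc, f_solar_noise, f_outage], d_mw)
# f_secondary = convolve_pdfs([f_load_noise, f_wind_noise, f_solar_noise, f_outage], d_mw)
```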
#### II-B2 Scaling of Error PDFs
The predicted demand, wind and solar PV capacities for future scenarios are used to determine the reserve requirements. The PDFs of the power imbalance drivers are scaled by a growth factor equivalent to the ratio of the forecast quantity, \(P_{i}^{\text{future}}\) [MW], to its historical peak value \(P_{i}^{\text{base}}\) [MW]. Furthermore, to account for future improvements in forecast performance, the error values can additionally be multiplied by a forecast improvement factor, \(k_{i}^{\text{forecast}}\), to obtain the scaled error values \(\epsilon_{i}^{\text{future}}\):
\[\epsilon_{i}^{\text{future}}=\epsilon_{i}^{\text{base}}\cdot k_{i}^{\text{ forecast}}\cdot\frac{P_{i}^{\text{future}}}{P_{i}^{\text{base}}} \tag{10}\]
Fig. 4: Power imbalance driver PDFs convolved to derive the PDF of total reserve needs. Data based on 2018-2019 Visayas system operations.
#### II-B3 Application of Reliability Margins
The reliability margin \(\rho_{\text{margin}}\), also referred to as the security level, is a predefined target that represents the fraction of time, typically over a one-year period, during which reserves are adequate to serve the demand [4, 6]. The deficit (\(\rho_{\text{deficit}}\)) and surplus (\(\rho_{\text{surplus}}\)) probabilities correspond to the fractions of time during which shortages of downward and upward reserves, respectively, are acceptable [17], and they are set to be equal in this work:
\[\rho_{\text{deficit}}=\rho_{\text{surplus}}=\frac{100\%-\rho_{\text{margin}}} {2}. \tag{11}\]
The cumulative distribution functions \(F\) of the total and secondary reserve are derived as follows:
\[F_{\text{reserve}}(z)=\int_{-\infty}^{z}f_{\text{reserve}}(u)\,du. \tag{12}\]
The reserve requirements \(R\) are determined by seeking the values of \(F\) that satisfy the following inequality conditions with respect to the set values of \(\rho_{\text{deficit}}\) and \(\rho_{\text{surplus}}\):
\[F_{\text{reserve}}\left(R_{\text{up}}\right)\leq 1-\rho_{\text{surplus}} \tag{13a}\] \[F_{\text{reserve}}\left(R_{\text{down}}\right)\geq\rho_{\text{deficit}} \tag{13b}\]
Equations 13a and 13b apply to both total and secondary reserve requirements. Finally, tertiary reserves are taken as the difference between the total and secondary reserve requirements:
\[R^{\text{ tertiary}}=R^{\text{total}}-R^{\text{secondary}}. \tag{14}\]
Equation 14 applies to both upward (\(R_{\text{up}}\)) and downward (\(R_{\text{down}}\)) reserve requirements. The proposed separation scheme between secondary (fast-response) and tertiary (slow-response) reserves is considered cost-effective because it reduces secondary reserves, which are generally more expensive than tertiary reserves [18].
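The sizing step of Eqs. (11)-(14) then amounts to reading two quantiles off the cumulative distribution of the convolved PDF; a minimal sketch, assuming the grid and PDF arrays come from the previous steps:

```python
import numpy as np

def reserve_requirements(grid_mw, pdf, reliability_margin=0.99):
    """Eqs. (11)-(13): upward/downward requirements from the reserve PDF,
    sampled on a signed imbalance grid (negative = surplus, positive = deficit)."""
    rho = (1.0 - reliability_margin) / 2.0                   # Eq. (11)
    d_mw = grid_mw[1] - grid_mw[0]
    cdf = np.cumsum(pdf) * d_mw                              # Eq. (12)
    i_up = min(np.searchsorted(cdf, 1.0 - rho), len(grid_mw) - 1)
    i_down = min(np.searchsorted(cdf, rho), len(grid_mw) - 1)
    return grid_mw[i_up], abs(grid_mw[i_down])               # Eqs. (13a)-(13b)

# Eq. (14), applied separately to each direction:
# r_tertiary_up = r_total_up - r_secondary_up
# r_tertiary_down = r_total_down - r_secondary_down
```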
## III Results and Discussion
The Philippines comprises three synchronous grids, Luzon, Visayas, and Mindanao, each with a regional system operator under the National Grid Corporation of the Philippines (NGCP). Reserve requirements are determined for each region separately at present. This section demonstrates the reserve sizing using two years' worth of data for the Visayas grid, which has the highest VRE share among the three grids. In 2021, solar accounted for a 12.5% share of total capacity and a 4.01% share of total gross generation, whereas wind accounted for a 2.4% share of total capacity and a 1.07% share of total gross generation in the Visayas grid [19].
A one-week representative scenario is developed from real Visayas system operations data. The forecast values for each hour of the week are set to be equivalent to the maximum historical values of forecast demand and wind and solar PV power generation that were observed in the same hour of the week in years 2018-2019. \(k_{i}^{\text{forecast}}=1\) is assumed, which implies that the error distributions did not change from the historical period used. The following historical data from the NGCP for the Visayas grid were used:
1. hourly forecast and per-minute actual system demand from 1 January 2018 to 31 December 2019.
2. hourly forecast VRE generation from 2 wind and 11 solar PV plants from 1 Jan. 2018 to 1 Dec. 2019.
3. per-minute actual VRE generation from 1 Jan. 2018 to 31 Dec. 2019.
4. forced outages from 1 Jan. 2016 to 31 Dec. 2019.
### _Static and Dynamic Secondary Reserve vs. Status Quo RR Requirements_
In this section, we compare the secondary reserve requirement with the status quo regulating reserve requirement and with the results of a static method. Secondary reserves and regulating reserves (RR) are currently analogous in Philippine practice, as they are both intended to be operated under automatic generation control to restore the frequency to its nominal value. The RR requirement is set at 2% of the forecast demand for upward regulation and 2% of the forecast demand for downward regulation [20].
On the other hand, static and dynamic methods require the same types of error distributions, but differ in the forecast data used to estimate the PDF and size the reserves. The static method is based on the peak forecast in the whole period for which reserves will be sized, and all historical values of an imbalance driver are included in the PDF estimation, regardless of its time of occurrence. The dynamic method is based on hourly forecasts. Every hour of the week has a distinct PDF for each imbalance driver, wherein only historical values that occurred in a similar hour of the week are included.
The results of dynamic secondary reserve sizing are shown in Figure 5. The mean dynamic reserve requirements are lower than the static reserve requirements, as likewise observed in previous studies [5, 6]. Furthermore, both static and mean dynamic reserve requirements are larger than the current \(\pm\)2% requirement for regulating reserves. Throughout the one-week scenario, it appears that most hours require levels higher than the current regulating reserve requirements. Hours 1-7 and 20 provide exceptions as they are well within the status quo requirement on most days. Based on a 99.0% reliability margin for a one-week Visayas grid scenario, the study reveals a need for asymmetric upward and downward reserves, which at some hours may be greater than the levels determined by existing practice.
### _Subhourly Reserve Sizing_
To contribute to the discussions on reserve requirements in view of the subhourly dispatch regime, we apply the proposed methodology to quantify the potential effect of varying the reserve lengths from one-hour to higher resolutions of 30-minute, 15-minute, and 5-minute.
As subhourly forecasts were not yet available in 2018-2019, we synthesized subhourly forecasts of demand and VRE generation using historical actual measurements. At the end of each dispatch interval, we assumed that subhourly forecast values are equivalent to the actual measurements. Thus, only the power imbalances within the interval are allocated reserves.
Table I shows the results of subhourly sizing for total reserves. Compared to hourly reserves, the requirements at more granular resolutions are considerably reduced. In particular, five-minute reserve sizing can reduce the total reserve requirements by as much as 86.2% on average. The results suggest economic benefits from the decreased reserve need while still upholding the required level of system reliability. More research is needed to determine the necessary adjustments in operations, as well as regulations, to ensure that dynamic reserve sizing of five-minute lengths, or subhourly lengths, is indeed feasible.
## IV Conclusions and Recommendations
A dynamic reserve sizing methodology is proposed as the Philippine grids transition to a new set of reserve classifications and face increasing levels of VRE penetration. In comparison with the current allocation method, the proposed method indicates larger reserve capacities to achieve 99.0% reliability. As the reserve requirements are anchored to a predetermined reliability target, further study is required to determine the optimal reliability level to aspire to, ideally considering the prices of reservation and activation.
Numerical experiments also revealed that dynamic five-minute reserve sizing can reduce reserve requirements by as much as 86.2% compared to hourly requirements. New five-minute forecasts from recent operations and higher resolution actual data would be preferred in order to assess subhourly sizing more precisely. Validating the effectiveness of the proposed method through pilot implementation and the evaluation of its effect on frequency quality can also be put forward.
|
2308.01023
|
Regular Variation in Hilbert Spaces and Principal Component Analysis for
Functional Extremes
|
Motivated by the increasing availability of data of functional nature, we
develop a general probabilistic and statistical framework for extremes of
regularly varying random elements $X$ in $L^2[0,1]$. We place ourselves in a
Peaks-Over-Threshold framework where a functional extreme is defined as an
observation $X$ whose $L^2$-norm $\|X\|$ is comparatively large. Our goal is to
propose a dimension reduction framework resulting into finite dimensional
projections for such extreme observations. Our contribution is double. First,
we investigate the notion of Regular Variation for random quantities valued in
a general separable Hilbert space, for which we propose a novel concrete
characterization involving solely stochastic convergence of real-valued random
variables. Second, we propose a notion of functional Principal Component
Analysis (PCA) accounting for the principal `directions' of functional
extremes. We investigate the statistical properties of the empirical covariance
operator of the angular component of extreme functions, by upper-bounding the
Hilbert-Schmidt norm of the estimation error for finite sample sizes. Numerical
experiments with simulated and real data illustrate this work.
|
Stephan Clémençon, Nathan Huet, Anne Sabourin
|
2023-08-02T09:12:03Z
|
http://arxiv.org/abs/2308.01023v1
|
# Regular Variation in Hilbert Spaces and Principal Component Analysis for Functional Extremes
###### Abstract
Motivated by the increasing availability of data of functional nature, we develop a general probabilistic and statistical framework for extremes of regularly varying random elements \(X\) in \(L^{2}[0,1]\). We place ourselves in a Peaks-Over-Threshold framework where a functional extreme is defined as an observation \(X\) whose \(L^{2}\)-norm \(\|X\|\) is comparatively large. Our goal is to propose a dimension reduction framework resulting into finite dimensional projections for such extreme observations. Our contribution is double. First, we investigate the notion of Regular Variation for random quantities valued in a general separable Hilbert space, for which we propose a novel concrete characterization involving solely stochastic convergence of real-valued random variables. Second, we propose a notion of functional Principal Component Analysis (PCA) accounting for the principal 'directions' of functional extremes. We investigate the statistical properties of the empirical covariance operator of the angular component of extreme functions, by upper-bounding the Hilbert-Schmidt norm of the estimation error for finite sample sizes. Numerical experiments with simulated and real data illustrate this work.
###### Contents
* 1 Introduction
* 2 Background and Preliminaries
* 2.1 Regular Variation in Euclidean and Metric Spaces
* 2.2 Probability and Weak Convergence in Hilbert Spaces
* 2.3 Principal Component Analysis of \(\mathbb{H}\)-valued Random Elements
* 3 Regular Variation in Hilbert Spaces
* 3.1 Finite-dimensional Characterizations of Regular Variation in \(\mathbb{H}\)
* 3.2 Regular Variation in \(L^{2}[0,1]\)_vs_ Regular Variation in \(\mathcal{C}[0,1]\)
* 4 Principal Component Analysis of Extreme Functions
* 4.1 The Pre-asymptotic Covariance Operator and its Eigenspaces
* 4.2 Empirical Estimation: Consistency and Concentration Results
* 5 Illustrative Numerical Experiments
* 5.1 Pattern Identification of functional extremes
* 5.2 Optimal reconstruction of functional extremes on the electricity demand dataset
* A Proofs for Section 3
* B Proofs for Section 4
## 1 Introduction
The increasing availability of data of functional nature and various applications that could now possibly rely on such observations, such as predictive maintenance of sophisticated systems (_e.g._ energy networks, aircraft fleet) or environmental risk assessment (_e.g._ air quality monitoring), open new perspectives for Extreme Value Analysis. In particular, massive measurements sampled at an ever finer granularity offer the possibility of observing extreme behaviors, which may carry relevant information for various statistical tasks, _e.g._ anomaly detection or generation of synthetic extreme examples.
The main purpose of this paper is to develop a general probabilistic and statistical framework for the analysis of extremes of regularly varying random functions in the space \(L^{2}[0,1]\), the Hilbert space of square-integrable, real-valued functions over \([0,1]\), with immediate possible generalization to other compact domains, _e.g._ spatial ones. A major feature of the proposed framework is the possibility to project the observations onto a finite-dimensional functional space, _via_ a modification of the standard functional Principal Component Analysis (PCA) which is suitable for heavy-tailed observations, for which second (or first) moments may not exist.
Recent years have seen a growing interest in the field of Extreme Value Theory (EVT) towards high dimensional problems, and modern applications involving ever more complex datasets. A particularly active line of research concerns unsupervised dimension reduction for which a variety of methods have been proposed over the past few years, some of them assorted with non asymptotic statistical guarantees relying on suitable concentration inequalities. Examples of such strategies include identification of a sparse support for the limiting distribution of appropriately rescaled extreme observations (Goix et al. (2017); Simpson et al. (2020); Meyer and Wintenberger (2021); Drees and Sabourin (2021); Cooley and Thibaud (2019); Medina et al. (2021)), graphical modeling and causal inference based on the notion of tail conditional independence (Hitz and Evans (2016); Segers (2020); Gnecco et al. (2021)), clustering (Chautru (2015); Janssen and Wan (2020); Chiapino et al. (2019)), see also the review paper Engelke and Ivanovs (2021). In these works, the dimension of the sample space, although potentially high, is finite, and dimension reduction is a key step, if not the main purpose, of the analysis. On the other hand, functional approaches in EVT have a long history and are still the subject of recent development in spatial statistics, see _e.g._ the recent review from Huser and Wadsworth (2022). For statistical applications, typically for spatial extremes, strong parametric assumptions must be made to make up for the infinite-dimensional nature of the problem. Dimension reduction is then limited to choosing a parametric model of appropriate complexity and it is not clear how to leverage dimension reduction tools recently developed for multivariate extremes in this setting. The vast majority of existing works in functional extremes consider the continuous case, following in the footsteps of seminal works on Max-stable processes (De Haan (1984); De Haan and Ferreira (2006)): the random objects under study are random functions in the space \(\mathcal{C}[0,1]^{d}\), \(d\in\mathbb{N}^{*}\), of continuous functions on the product unit interval, endowed with the supremum norm. Some exceptions exist, _e.g._ the functional Skorokhod space \(\mathbb{D}[0,1]^{d}\) equipped with the \(J_{1}\)- topology has been considered in several works (see Davis and Mikosch (2008); Hult and Lindskog (2005) and the references therein), and upper-semicontinuous functions equipped with the Fell topology are considered in Resnick and Roy (1991); Molchanov and Strokorb (2016); Sabourin and Segers
(2017); Samorodnitsky and Wang (2019). Again, it is not clear how to perform dimension reduction in these functional spaces.
In the present paper we place ourselves in the Peaks-Over-Threshold (POT) framework: the focus is on the limit distribution of rescaled observations, conditioned upon the event that their norm exceeds a threshold, as this threshold tends to infinity. In the continuous case, an extreme observation is declared so whenever its supremum norm is large, _i.e._ above a high quantile. The limiting process arising in this context is a Generalized Pareto process. In the standard POT framework, the definition of an extreme event depends on the choice of a norm which may be of crucial importance in applications. As an example, in air quality monitoring for public health matters, it may be more relevant to characterize extreme concentration of pollutants through an integrated criterion over a full 24-hours period, rather than through the maximum hourly record. This line of thoughts is the main motivation behind the work of Dombry and Ribatet (2015), which consider alternative definitions of extreme events by means of an homogeneous cost functional, which gives rise to \(r\)-Pareto processes. However the observations are still assumed to be continuous stochastic processes and the framework is not better suited for dimension reduction than those developed in the previously cited works. A standard hypothesis underlying the POT approach is regular variation (RV), which, roughly, may be seen as an assumption of approximate radial homogeneity regarding the distribution of the random object \(X\) under study, conditionally on an excess of the norm \(\|X\|\) of this object above a high radial threshold. An excellent account of regular variation of multivariate random vectors is given in the monographs Resnick (1987, 2007). In Hult and Lindskog (2006) regular variation is extended to measures on arbitrary complete, separable metric spaces and involves \(M_{0}\)-convergence of measures associated to the distribution of rescaled random objects. One characterization of regular variation in this context is _via_ weak convergence of the pseudo angle \(\Theta=\|X\|^{-1}X\) and regular variation of the (real-valued) norm \(\|X\|\). Namely the law of \(\Theta\) given that \(\|X\|>t\) (\(t>0\)), \(\mathcal{L}(\Theta\,|\,\|X\|>t)\), which we denote by \(P_{\Theta,t}\), must converge weakly as \(t\to\infty\), towards a limit probability distribution \(P_{\Theta,\infty}\) on the unit sphere (see _e.g._ Segers et al. (2017); Davis and Mikosch (2008)). In the present work we place ourselves in the general regular variation context defined through \(M_{0}\)-convergence in Hult and Lindskog (2006), and we focus our analysis on random functions valued in the Hilbert space \(L^{2}[0,1]\), which has received far less attention (at least in EVT) than the spaces of continuous, semi-continuous or _cad-lag_ functions. One main advantage of the proposed framework, in addition to allowing for rough function paths, is to pave the way for dimension reduction of the observations _via_ functional PCA of the _angular_ component \(\Theta\). In this respect the dimension reduction strategy that we propose may be seen as an extension of Drees and Sabourin (2021), who worked in the finite-dimensional setting and derived finite sample guarantees regarding the eigenspaces of the empirical covariance operator for \(\Theta\). 
However their techniques of proof cannot be leveraged in the present context because they crucially rely on the compactness of the unit sphere in \(\mathbb{R}^{d}\), while the unit sphere in an infinite-dimensional Hilbert space is not compact.
Several questions arise. First, when dealing with functional observations, the choice of the norm (thus of a functional space) is not a neutral one, since not all norms are equivalent. In particular, there is no reason why regular variation in one functional space (say, \(\mathcal{C}[0,1]\)) would be equivalent to regular variation in a larger space such as \(L^{2}[0,1]\). Also a recurrent issue in the context of weak convergence of stochastic processes is to verify tightness conditions in addition to weak convergence of finite-dimensional projections, in order to ensure weak convergence of the process as a whole. The case of Hilbert-valued random variables is no exception (see _e.g._ Chapter 1.8 in Vaart and Wellner (1996)). A natural question to ask is then: 'What concrete conditions regarding the angular and
radial components in an RV/POT framework, which may be verified in practice on specific generative examples or even on real data, are sufficient in order to ensure tightness?'. Regarding the PCA of the angular distribution, one may wonder whether the eigen functions associated with the angular covariance operator above finite levels \(t>0\) indeed converge to the eigen functions of the covariance operator associated with the limit distribution \(P_{\Theta,\infty}\) under the RV conditions alone, and whether the results of Drees and Sabourin (2021) regarding concentration of the empirical eigen spaces indeed extend to the infinite-dimensional Hilbert space setting.
Extreme Value Analysis of functional PCA with \(L^{2}\)-valued random functions has already been considered in the literature, from a quite different perspective however, leaving the above questions unanswered. In Kokoszka and Xiong (2018), the authors assume regular variation of the scores of a principal component decomposition (_i.e._ the random coordinates of the observations projected onto an \(L^{2}\)-orthogonal family), and they investigate the extremal behavior of their empirical counterparts. In Kokoszka et al. (2019) and Kokoszka and Kulik (2023), regular variation is assumed and various convergence results regarding the empirical covariance operators of the random function \(X\) (not the angular component \(\Theta\)) are established, under the condition that the regular variation index belongs to some restricted interval, respectively \(2<\alpha<4\) and \(0<\alpha<2\). In contrast in the present work the value of the regular variation index is unimportant as the PCA that we consider is that of the _angular component_\(\Theta\) of the random functions. As \(\Theta\) belongs to a bounded subset of \(L^{2}[0,1]\), existence of moments of any order is automatically granted. Also, in the existing works mentioned above, regular variation in \(L^{2}[0,1]\), in the sense of Hult and Lindskog, is taken for granted and no attempt is made to translate the general, abstract definition from Hult and Lindskog (2006) into concrete, finite-dimensional conditions. In Kim and Kokoszka (2022), extremal dependence between the scores of the functional PCA of \(X\) is investigated. They prove on this occasion (see Proposition 2.1 therein) that regular variation in \(L^{2}[0,1]\) implies multivariate regular variation of finite-dimensional projections of \(X\). However, the reciprocal statement is not investigated.
The contribution of the present article is twofold. \((i)\) We provide a comprehensive description of the notion of regular variation in a separable Hilbert space which fits into the framework of Hult and Lindskog (2006). In Section 3, we formulate specific characterizations involving finite-dimensional projections and moments of the angular variable \(\Theta\), and we discuss the relationships between regular variation in \(\mathcal{C}[0,1]\) and in \(L^{2}[0,1]\). It turns out that the former implies the latter, whereas the converse is not true. We provide several examples and counter-examples illustrating our statements. \((ii)\) We make a first step towards bridging the gap between dimension reduction approaches and functional extremes by considering the functional PCA of the angular variable \(\Theta\). In Section 4, we investigate the convergence of the non asymptotic covariance operator associated with distribution \(P_{\Theta,t}\). In the situation where \(n\geq 1\) independent realizations of the random function \(X\) can be observed, we additionally provide statistical guarantees regarding empirical estimation of the sub asymptotic covariance operator associated to a radial threshold \(t_{n,k}\), the \(1-k/n\) quantile of the radial variable \(\|X\|\), in the form of concentration inequalities regarding the Hilbert-Schmidt norm of the estimation error, whose leading terms involve the number \(k\leq n\) of extreme order statistics considered to compute the estimator. These bounds, combined with regular variation of the observed random function \(X\) and the results from the preceding section, ensure in particular the consistency of the empirical estimation procedure. In Section 5 we present experimental results involving real and simulated data illustrating the relevance of the proposed dimension reduction framework. Certain technical details are deferred to the Appendix.
For clarity, we start off by recalling some necessary background regarding probability and weak convergence in Hilbert spaces, functional PCA and regular variation.
Background and Preliminaries
As a first go, we recall some key facts about regular variation in metric spaces, probability in Hilbert spaces and Principal Component Analysis of random elements of a Hilbert space. Here and throughout, the indicator function of any event \(\mathcal{E}\) is denoted by \(\mathbb{1}\left\{\mathcal{E}\right\}\). The Dirac mass at any point \(a\) is written as \(\delta_{a}\) and the integer part of any real number \(u\) by \(\lfloor u\rfloor\). By \(L^{2}[0,1]\) is meant the Hilbert space of square integrable, real-valued functions \(f:[0,1]\to\mathbb{R}\) equipped with its standard inner product \(\langle f,\ g\rangle=\int_{0}^{1}f(s)g(s)ds\) and the \(L^{2}\)-norm \(\|f\|=(\int_{0}^{1}f(s)^{2}ds)^{1/2}\). Our results are valid in any arbitrary separable Hilbert space \(\mathbb{H}\), for which we abusively use the same notations regarding the scalar product and the norm as in the special case of \(L^{2}[0,1]\) when it is clear from the context. Also we write \(\mathbb{H}_{0}=\mathbb{H}\setminus\{0\}\). Finally the arrow \(\xrightarrow{w}\) stands for weak convergence of Borel probability measures: \(P_{n}\xrightarrow{w}P\)_i.f.f._ we have \(\int f\,\mathrm{d}P_{n}\to\int f\,\mathrm{d}P\) as \(n\to+\infty\) for any bounded, continuous function \(f\) defined on the same space as \(P_{n}\) and \(P\).
### Regular Variation in Euclidean and Metric Spaces
We recall here the main features of Regular Variation in metric spaces, a framework introduced by Hult and Lindskog (2006) as a generalization of the Euclidean case documented _e.g._ in Resnick (1987); Bingham et al. (1989).
Let \((E,d)\) be a complete separable metric space, endowed with a multiplication by nonnegative real numbers \(t>0\), such that the mapping \((t,x)\in\mathbb{R}_{+}\times E\mapsto tx\) is continuous. One must assume the existence of an _origin_\(\mathbf{0}\in E\), such that \(0x=\mathbf{0}\) for all \(x\in E\). In the present work we shall take \(E=\mathbb{H}\), a separable, real Hilbert space, and \(\mathbf{0}\) will simply be the zero of \(\mathbb{H}\). Let \(E_{0}=E\setminus\{\mathbf{0}\}\). For any subset \(A\subset E\), and \(t>0\), we write \(tA=\{tx:\ x\in A\}\). Denote by \(\mathcal{C}_{0}\) the set of bounded and continuous real-valued functions on \(E_{0}\) which vanish in some neighborhood of \(\mathbf{0}\) and let \(M_{0}\) be the class of Borel measures on \(E_{0}\), which are finite on each Borel subset of \(E_{0}\) bounded away from \(\mathbf{0}\). Then the sequence \(\mu_{n}\)_converges to \(\mu\) in \(M_{0}\)_ and we write \(\mu_{n}\xrightarrow{M_{0}}\mu\), if \(\int fd\mu_{n}\to\int fd\mu\) for any \(f\in\mathcal{C}_{0}\).
A measurable _function_\(f:\mathbb{R}\to\mathbb{R}\) is called _regularly varying_ with index \(\rho\), and we write \(f\in\mathrm{RV}_{\rho}\), whenever for any \(x>0\) the ratio \(f(tx)/f(t)\to x^{\rho}\) as \(t\to\infty\). A _measure_\(\nu\) in \(M_{0}\) is _regularly varying i.f.f._ there exists a nonzero measure \(\mu\) in \(M_{0}\) and a regularly varying function \(b\) such that
\[b(n)\nu(n\,\cdot\,)\xrightarrow{M_{0}}\mu(\,\cdot\,)\text{ as }n\to+\infty. \tag{2.1}\]
The limit measure is necessarily homogeneous, for all \(t>0\) and Borel subset \(A\) of \(E_{0}\), \(\mu(tA)=t^{-\alpha}\mu(A)\) for some \(\alpha>0\). Then we say that \(\nu\) is regularly varying with index \(-\alpha\) and we write \(\nu\in\mathrm{RV}_{-\alpha}\). A _random element_\(X\) valued in \(E\) (that is, a Borel measurable map from some probability space \((\Omega,\mathcal{A},\mathbb{P})\) to \(E\)) is called regularly varying with index \(\alpha>0\) if its probability distribution is in \(\mathrm{RV}_{-\alpha}\). In this case, one writes \(X\in RV_{-\alpha}(E)\). A convenient characterization of regular variation of a random element \(X\) is obtained through a polar decomposition. Let \(r(x)=d(x,\mathbf{0})\) for \(x\in E\). For simplicity, and because it is true in the Hilbert space framework that is our main concern, we focus on the case where the distance to \(\mathbf{0}\) is homogeneous, although this assumption can be relaxed, as in Segers et al. (2017). Notice that in \(\mathbb{H}\), \(r(x)=\|x\|\). Introduce a pseudo-angular variable, \(\Theta=\theta(X)\) where for \(x\in E_{0}\), \(\theta(x)=r(x)^{-1}x\) and let \(R=r(X)\). Denote by \(\mathbb{S}\) the unit sphere in \(E\) relative to \(r\), \(\mathbb{S}=\{x\in E:r(x)=1\}\), equipped with the trace Borel \(\sigma\)-field \(\mathcal{B}(\mathbb{S})\). The map \(T:E_{0}\to\mathbb{R}_{+}^{*}\times\mathbb{S}:x\mapsto(r(x),\theta(x))\) is the polar decomposition. A key quantity throughout this
work will be the conditional distribution of the angle given that \(R>t\) for which we introduce the notation
\[P_{\Theta,t}(\,\cdot\,)=\mathbb{P}\left(\Theta\in\,\cdot\,\left|\,R>t\right. \right). \tag{2.2}\]
Several equivalent characterizations of regular variation of \(X\) have been proposed in Segers et al. (2017) in terms of the pair \((R,\Theta)\) where \(R=r(X)\), thus extending classical characterizations in the multivariate setting, see Resnick (2007). In particular the next statement shall prove to be useful in the subsequent analysis.
**Proposition 2.1** (Proposition 3.1 in Segers et al. (2017)).: _A random element \(X\) in \(E\) is regularly varying with index \(\alpha>0\) i.f.f. conditions (i) and (ii) below are simultaneously satisfied:_
* _The radial variable_ \(R\) _is regularly varying in_ \(\mathbb{R}\) _with index_ \(\alpha\)_;_
* _There exists a probability distribution_ \(P_{\Theta,\infty}\) _on the sphere_ \((\mathbb{S},\mathcal{B}(\mathbb{S}))\) _such that_ \(P_{\Theta,t}\xrightarrow{w}P_{\Theta,\infty}\) _as_ \(t\to\infty\)_._
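On simulated data, conditions (i) and (ii) can be probed empirically, for instance through a Hill estimate of the radial tail index and by checking that the angular distribution above increasing thresholds stabilises. The toy sketch below, in \(\mathbb{R}^{3}\) with a Pareto radius, is purely illustrative and not part of the results cited above.

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, n, k = 2.5, 100_000, 1_000

# toy regularly varying sample: Pareto(alpha) radius times a random unit direction
R = rng.pareto(alpha, size=n) + 1.0
U = np.abs(rng.normal(size=(n, 3)))
U /= np.linalg.norm(U, axis=1, keepdims=True)
X = R[:, None] * U

norms = np.linalg.norm(X, axis=1)
order = np.sort(norms)[::-1]

# (i) Hill estimator of the tail index of the radial variable R = ||X||
hill_alpha = 1.0 / np.mean(np.log(order[:k] / order[k]))
print("Hill estimate of alpha:", round(hill_alpha, 2))

# (ii) the law of Theta = X/||X|| above two high thresholds should look alike
for q in (0.95, 0.99):
    t = np.quantile(norms, q)
    theta = X[norms > t] / norms[norms > t, None]
    print(f"mean angle above the {q:.0%} quantile:", theta.mean(axis=0).round(3))
```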
### Probability and Weak Convergence in Hilbert Spaces
Most of the background gathered in this section may be found with detailed proofs, references and discussions in the monograph Hsing and Eubank (2015), which provides a self-contained introduction to mathematical foundations of functional data analysis. Other helpful resources regarding probability and measure theory in Banach spaces and Bochner integrals include Vakhania et al. (2012) or Mikusinski (1978).
_Probability in Hilbert spaces._ Consider a separable Hilbert space \((\mathbb{H},\langle\cdot,\cdot\rangle)\) and denote by \(\|\cdot\|\) the associated norm. Let \((e_{i})_{i\geq 1}\) be any orthonormal basis of \(\mathbb{H}\). Since a separable Hilbert space is a particular instance of a Polish space, it follows from basic measure theory (see _e.g._ Vakhania et al. (2012), Theorem 1.2) that the Borel \(\sigma\)-field \(\mathcal{B}(\mathbb{H})\) is generated by the family of mappings \(\{h^{*}:x\mapsto\langle x,h\rangle,\;h\in\mathbb{H}\}\), or in other words, by the class of cylinders
\[\mathcal{C}=\{(h^{*})^{-1}(B),B\in\mathcal{B}(\mathbb{R})\}.\]
In addition, since the countable family \((e_{i}^{*})_{i\geq 1}\) separates points in \(\mathbb{H}\), it also generates the Borel \(\sigma\)-field, see Proposition 1.4 and its corollary in Vakhania et al. (2012). In other words, if we denote by \(\pi_{N}\) the projection from \(\mathbb{H}\) to \(\mathbb{R}^{N}\) onto the first \(N\geq 1\) basis vectors, \(\pi_{N}(x)=(\langle x,e_{1}\rangle,\ldots,\langle x,e_{N}\rangle)\), the family of cylinder sets
\[\tilde{\mathcal{C}}=\left\{\pi_{N}^{-1}(A_{1}\times\cdots\times A_{N}),A_{j} \in\mathcal{B}(\mathbb{R}),j\leq N,N\in\mathbb{N}^{*}\right\}\]
also generates \(\mathcal{B}(\mathbb{H})\). We call \(\mathbb{H}\)_-valued random element_ (or variable) any Borel-measurable mapping \(X\) from a probability space \((\Omega,\mathcal{A},\mathbb{P})\) to \(\mathbb{H}\). A mapping \(X:\Omega\to\mathbb{H}\) is Borel-measurable _i.f.f._ the real-valued projections \(\langle X,h\rangle\) are Borel-measurable for all \(h\in\mathbb{H}\), and the distribution of \(X\) is entirely characterized by the distributions of these univariate projections, see Lemma 1.8.3. in Vaart and Wellner (1996) or Theorem 7.1.2 in Hsing and Eubank (2015). Since the family \(\tilde{C}\) of cylinder sets is a \(\pi\)-system generating \(\mathcal{B}(\mathbb{H})\), it is also true that the distributions of all finite dimensional projections \((\pi_{N}(X),N\in\mathbb{N})\) onto a given basis also determine the distribution of \(X\). Integrability conditions for random elements in \(\mathbb{H}\) are understood here in the Bochner sense. A random element \(X:(\Omega,\mathcal{A},\mathbb{P})\to\mathbb{H}\) is Bochner integrable _i.f.f._\(\mathbb{E}[\|X\|]<\infty\). Then the expectation
of \(X\), denoted by \(\mathbb{E}[X]\), is the unique element of \(\mathbb{H}\) such that \(\langle\mathbb{E}[X],h\rangle=\mathbb{E}[\langle X,h\rangle]\) for all \(h\in\mathbb{H}\). A key property of the classic expectation is linearity, which is also satisfied by the expectation defined in the Bochner sense. Namely if \(T\) is a bounded, linear operator from \(\mathbb{H}_{1}\) to \(\mathbb{H}_{2}\), two Hilbert spaces, and if \(X\) is a Bochner-integrable random element in \(\mathbb{H}_{1}\) then \(T(X)\) is also Bochner-integrable in \(\mathbb{H}_{2}\) and \(T(\mathbb{E}\left[X\right])=\mathbb{E}\left[T(X)\right]\), see Theorem 3.1.7 in Hsing and Eubank (2015). Many other properties of the classic expectation of real-valued random variables are preserved, _e.g._ the dominated convergence theorem. In particular, a version of Jensen's inequality can be formulated for \(\mathbb{H}\)-valued random variables, see _e.g._ pp. 42-43 in Ledoux and Talagrand (1991).
_Weak convergence of \(\mathbb{H}\)-valued random elements._ As our main concern in Section 3 is to characterize regular variation in Hilbert spaces in terms of weak convergence of appropriately rescaled variables, we recall some basic facts regarding weak convergence in Hilbert spaces. Most of the material recalled next for the sake of completeness can be found in Chapter 1.8 of Vaart and Wellner (1996) and Chapter 7 of Hsing and Eubank (2015) in a more detailed way.
By definition a sequence \((X_{n})_{n\in\mathbb{N}}\) of \(\mathbb{H}\)-valued random variables _weakly converges_ (or _converges in distribution_) to a \(\mathbb{H}\)-valued random variable \(X\), and we write \(X_{n}\stackrel{{ w}}{{\longrightarrow}}X\) (or equivalently, \(\mu_{n}\stackrel{{ w}}{{\longrightarrow}}\mu\) if \(\mu_{n}\) denotes the probability distribution of \(X_{n}\) and \(\mu\), that of \(X\)), _i.f.f._, for every bounded, continuous function \(f:\mathbb{H}\to\mathbb{R}\), we have \(\mathbb{E}[f(X_{n})]\to\mathbb{E}[f(X)]\). This abstract definition may be difficult to handle for verifying weak convergence in specific examples. However, weak convergence in \(\mathbb{H}\) may equivalently be characterized _via_ weak convergence of one-dimensional projections and an asymptotic tightness condition, as described next. Notice that, because \(\mathbb{H}\) is separable and complete, the Prokhorov Theorem applies, _i.e._ uniform tightness and relative compactness of a family of probability measures are equivalent. Recall that a sequence of probability measures is uniformly tight if for every \(\varepsilon>0\), there exists a compact set \(K\subset\mathbb{H}\) such that \(\inf_{n\in\mathbb{N}}\mu_{n}(K)\geq 1-\varepsilon\). Notice that, because \(\mathbb{H}\) is separable and complete, any single random element valued in \(\mathbb{H}\) is tight, see Lemma 1.3.2 in Vaart and Wellner (1996).
_Remark 2.1_ (On measurability and tightness).: Before proceeding any further, in order to clear out any potential confusion, we emphasize that measurability of the considered maps \(X_{n}:\Omega\to\mathbb{H}\) is not required in Vaart and Wellner (1996), while it is assumed in the present work, in which we follow common practice in functional data analysis focusing on Hilbert-valued observations (as _e.g_ in Hsing and Eubank (2015)). Notice also that the notion of tightness employed in Vaart and Wellner (1996) as a criterion for relative compactness of a family of random variables \((X_{n},n\in\mathbb{N})\), is _asymptotic tightness_, that is: for all \(\varepsilon>0\), there exists a compact subset \(K\) of \(\mathbb{H}\), such that for every \(\delta>0\), \(\liminf_{n\to\infty}\mathbb{P}\left(X_{n}\in K^{\delta}\right)>1-\varepsilon\). Here, \(K^{\delta}\) denotes the \(\delta\)-enlargement of the compact set \(K\), that is, \(\{x\in\mathbb{H}:\inf_{y\in K}\|x-y\|<\delta\}\). This is seemingly at odds with other presentations (Prokhorov (1956); Hsing and Eubank (2015)) where the argument is organized around the standard notion of uniform tightness, recalled above. However in a Polish space such as \(\mathbb{H}\), the two notions of tightness (asymptotic or uniform) are equivalent (Vaart and Wellner (1996), Problem 1.3.9), so that the presentations of Vaart and Wellner (1996) and Hsing and Eubank (2015) are in fact closer together than they may appear at first view.
A convenient criterion which is the main ingredient to ensure tightness (hence relative compactness) of a family of random \(\mathbb{H}\)-valued random variables is termed _asymptotically finite-dimensionality_ in Vaart and Wellner (1996) and seems to originate from Prokhorov (1956). A sequence of \(\mathbb{H}\)-valued random variables is _asymptotically finite-dimensional_ if, given a Hilbert basis \((e_{i},i\geq 1)\) as above,
for all \(\varepsilon,\ \delta>0\), there exists a finite subset \(I\subset\mathbb{N}^{*}\) such that
\[\limsup_{n}\mathbb{P}\left(\sum_{i\notin I}\langle X_{n},e_{i}\rangle^{2}> \delta\right)<\varepsilon. \tag{2.3}\]
It should be noticed that the above property is independent of the specific choice of a Hilbert basis. Asymptotic finite-dimensionality combined with uniform tightness of all univariate projections of the kind \(\langle X_{n},h\rangle,h\in\mathbb{H}\), is a sufficient condition for uniform tightness of the family of random variables \((X_{n})_{n\in\mathbb{N}}\) (see Hsing and Eubank (2015), Theorem 7.7.4). Also, since knowledge of the distributions of all univariate projections characterizes the distribution of a random Hilbert-valued variable \(X\), asymptotic finite-dimensionality combined with weak convergence of univariate projections (or of finite-dimensional ones on a fixed basis) is sufficient to prove weak convergence of a family of random elements in \(\mathbb{H}\), as summarized in the next statement.
**Theorem 2.1** (Characterization of weak convergence in \(\mathbb{H}\)).: _A net of \(\mathbb{H}\)-valued random variables \((X_{t})_{t\in\mathbb{R}}\) converges in distribution to a random variable \(X\) if and only if it is asymptotically finite-dimensional and either one of the two conditions below holds:_
1. _The net_ \((\langle X_{t},h\rangle)_{t\in\mathbb{R}^{*}_{+}}\) _converges in distribution to_ \(\langle X,h\rangle\) _for any_ \(h\in\mathbb{H}\)_;_
2. _The net_ \((\pi_{N}(X_{t}))_{t\in\mathbb{R}^{*}_{+}}\) _converges in distribution to_ \(\pi_{N}(X)\) _for all_ \(N\in\mathbb{N}^{*}\)_._
Proof.: The fact that asymptotic finite-dimensionality together with Condition 1. in the statement imply weak convergence results from Theorem 1.8.4 in Vaart and Wellner (1996) in the case where all mappings are measurable. To see that Condition 1. may be replaced with Condition 2. in order to prove weak convergence, note that asymptotic finite-dimensionality implies uniform tightness in the case of a Hilbert space (see Remark 2.1 above). Hence, weak convergence occurs if the two subsequential limits coincide. It is so because the family of cylinder sets \(\tilde{\mathcal{C}}\) is a measure-determining class.
### Principal Component Analysis of \(\mathbb{H}\)-valued Random Elements
We recall the necessary definitions and mathematical background underlying principal component decomposition of \(\mathbb{H}\)-valued random elements. A self-contained exposition of the topic may be found in Hsing and Eubank (2015), Chapter 7. In the sequel we use indifferently the terminology _principal component decomposition_ or _principal component analysis_ (functional PCA or PCA in short). Because of its optimality properties in terms of \(L^{2}\)-error when \(\mathbb{H}=L^{2}[0,1]\), functional PCA is widely used for a great variety of statistical purposes in functional data analysis. A standard reference on this topic is the monograph Ramsay and Silverman (2005).
On \(\mathbb{H}\), a separable real Hilbert space as above, and for \((f,g)\in\mathbb{H}^{2}\), the tensor product \(f\otimes g\) is the linear operator on \(\mathbb{H}\) defined by \(f\otimes g(h)=\langle f,h\rangle g\). Direct calculations show that \(f\otimes g\) is a Hilbert-Schmidt operator with Hilbert-Schmidt norm \(\|f\otimes g\|_{\mathrm{HS}(\mathbb{H})}=\|f\|\|g\|\). We recall that a linear operator \(T\) on \(\mathbb{H}\) is _Hilbert-Schmidt_ if, given a Hilbert basis \((e_{i})_{i\geq 1}\), we have \(\sum_{i\in\mathbb{N}^{*}}\|Te_{i}\|^{2}<\infty\). The square root of the latter quantity is then the Hilbert-Schmidt norm of \(T\), denoted by \(\|T\|_{\mathrm{HS}(\mathbb{H})}\), and does not depend on the choice of the Hilbert basis. Hilbert-Schmidt operators are compact and the space \(\mathrm{HS}(\mathbb{H})\) of Hilbert-Schmidt operators on \(\mathbb{H}\), equipped with \(\langle\,\cdot\,,\,\cdot\,\rangle_{\mathrm{HS}(\mathbb{H})}\) the scalar product associated with the \(\mathrm{HS}(\mathbb{H})\) norm, is itself a separable Hilbert space.
Let \(X\) be a \(\mathbb{H}\)-valued random element as above and assume that \(\mathbb{E}\|X\|^{2}<\infty\). Then also \(\mathbb{E}\left[\|X\otimes X\|_{\mathrm{HS}(\mathbb{H})}\right]<\infty\) so that the tensor product inside the expectation is Bochner integrable and one may define the (non-centered) _covariance operator_
\[C=\mathbb{E}\left[X\otimes X\right]. \tag{2.4}\]
By construction \(C\) is self-adjoint and \(C\in\mathrm{HS}(\mathbb{H})\), thus \(C\) is compact. Also by linearity of Bochner integration, for any \((h,g)\in\mathbb{H}^{2}\), we have:
\[Ch=\mathbb{E}\left[\langle h,X\rangle X\right]\ \ \text{and}\ \langle Ch,g\rangle= \mathbb{E}\left[\langle h,X\rangle\langle X,g\rangle\right].\]
A key result in functional PCA is the eigen decomposition of the covariance operator (see Theorem 7.2.6 from Hsing and Eubank (2015) regarding the centered covariance operator, which is also valid for the non-centered one):
\[C=\sum_{i=1}^{\infty}\lambda_{i}\varphi_{i}\otimes\varphi_{i}, \tag{2.5}\]
where \(\lambda_{1}\geq\lambda_{2}\geq\dots\) are the eigenvalues sorted by decreasing order and the \(\varphi_{i}\)'s are orthonormal eigenvectors. The set of non zero eigenvalues \(\lambda_{i}\) is either finite, or a sequence of nonnegative numbers converging to zero. The non zero eigenvalues have finite multiplicity. The eigen functions \(\varphi_{i}\) form a Hilbert basis of \(\overline{\mathrm{Im}(C)}\). As it is the case for the centered version of \(C\), the decomposition (2.5) immediately derives from the spectral theorem for compact, self-adjoint operators and the fact that \(C\) is nonnegative definite.
A useful property of the eigen functions \((\varphi_{i})_{i\geq 1}\) is that they allow perfect signal reconstruction, since almost-surely, \(X\) may be decomposed as
\[X=\sum_{i=1}^{\infty}\langle X,\varphi_{i}\rangle\varphi_{i}, \tag{2.6}\]
see Theorem 7.2.7 in Hsing and Eubank (2015). The _scores_\(Z_{i}=\langle X,\varphi_{i}\rangle\) satisfy \(\mathbb{E}\left[Z_{i}^{2}\right]=\lambda_{i}\) and \(\mathbb{E}\left[Z_{i}Z_{j}\right]=0\), so that the expansion (2.6) is called _bi-orthogonal_. For all \(N\geq 1\), the truncated expansion \(\sum_{i\leq N}\langle X,\ \varphi_{i}\rangle\varphi_{i}\) is _optimal_ in the sense that it minimizes the integrated mean-squared error
\[\mathbb{E}\Big{(}\big{\|}X-\sum_{i=1}^{N}\langle X,\ u_{i}\rangle u_{i}\big{\|} ^{2}\Big{)} \tag{2.7}\]
over all orthonormal collections \((u_{1},\dots,u_{N})\) of \(\mathbb{H}\). The tail behavior of the (summable) eigenvalue sequence \((\lambda_{i})_{i\geq 1}\) describes the optimal \(N\)-term approximation error, insofar as
\[\sum_{i>N}\lambda_{i}=\mathbb{E}\Big{(}\big{\|}X-\sum_{i=1}^{N}\langle X,\ \varphi_{i}\rangle\varphi_{i}\big{\|}^{2}\Big{)}.\]
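The following hedged NumPy sketch (with toy curves and variable names of our own choosing) discretizes \(\mathbb{H}=L^{2}[0,1]\) on a grid, estimates the uncentered covariance operator from simulated curves, and checks numerically that the \(N\)-term reconstruction error coincides with the tail sum of the eigenvalues, as in the display above.

```python
import numpy as np

# Sketch: discretize L^2[0,1] on a grid, simulate curves, estimate the uncentered
# covariance operator and verify that the N-term reconstruction error equals the
# tail sum of its eigenvalues.
rng = np.random.default_rng(1)
grid = np.linspace(0.0, 1.0, 64)
n = 5000
# toy curves: random combinations of a few smooth basis functions (illustrative choice)
basis = np.stack([np.sin(2 * np.pi * k * grid) for k in range(1, 6)])      # (5, 64)
X = rng.normal(scale=[3.0, 2.0, 1.0, 0.5, 0.25], size=(n, 5)) @ basis      # (n, 64)

C = X.T @ X / n                          # discretized uncentered covariance operator
eigvals, eigvecs = np.linalg.eigh(C)     # ascending order
eigvals, eigvecs = eigvals[::-1], eigvecs[:, ::-1]

N = 2
proj = X @ eigvecs[:, :N] @ eigvecs[:, :N].T          # N-term reconstruction
mean_sq_err = np.mean(np.sum((X - proj) ** 2, axis=1))
assert np.isclose(mean_sq_err, eigvals[N:].sum())     # tail sum identity
```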
Notice that in the present paper we consider the non-centered covariance operator, mainly to lighten notation. We refer the reader to Cadima and Jolliffe (2009) for a comparison of centered and uncentered PCA.
_Remark 2.2_ (Functional PCA and Karhunen-Loeve expansion).: The _functional PCA_ framework is closely related to the celebrated _Karhunen-Loeve expansion_ in the case where \(\mathbb{H}=L^{2}[0,1]\); however, the two terms refer to subtly different frameworks, which deserve an explanation. The former framework (the one preferred in this work) relies on a \(\mathbb{H}\)-valued random element \(X\), with the standard results recalled above concerning convergence of the expansions of \(X\) and of its covariance operator in the Hilbert norm and Hilbert-Schmidt norm, respectively. In that case \(X\)'s trajectories are equivalence classes of square-integrable functions, and the specific value \(X_{s}(\omega)\) of a realisation \(X(\omega)\) at \(s\in[0,1]\) is only defined almost everywhere. In contrast, the latter (Karhunen-Loeve) framework relies on a second-order stochastic process \(X=(X_{s},s\in[0,1])\), that is, a collection of random variables, which is continuous in quadratic mean with respect to the index \(s\). One must then impose additional joint measurability conditions on the mapping \((\omega,s)\mapsto X_{s}(\omega)\) in order to ensure that the process \(X\) is indeed a \(\mathbb{H}\)-valued random element. In such a case the mean function and the covariance operator defined in either way coincide. Also, the celebrated Karhunen-Loeve theorem (Loeve (1978)) ensures convergence in quadratic mean of the expansion of \(X_{s}\), uniformly over \(s\in[0,1]\). In order to avoid another layer of technicality, and because our main interest lies in the eigenspaces of covariance operators rather than in pointwise reconstruction of the functions, we adopt in the present work the view where \(X\) is a \(\mathbb{H}\)-valued random element, although additional joint measurability assumptions may be imposed in order to fit into the Karhunen-Loeve framework.
## 3 Regular Variation in Hilbert Spaces
As a warm-up, we discuss a classical example in EVT: a multivariate multiplicative model within the framework of the multiplicative Breiman lemma (Basrak et al. (2002), Proposition A.1), for which RV may easily be proved using existing general characterizations such as Equation (2.1). This example will serve as a basis for our simulated data example in Section 5.
**Example 3.1**.: _Let \(Z=(Z_{1},\ldots,Z_{d})\in\mathbb{R}^{d}\) be regularly varying with index \(\alpha>0\) and limit measure \(\mu\), and let \(A=(A_{1},\ldots,A_{d})\) be a random vector of \(\mathbb{H}\)-valued variables \(A_{i}\), independent of \(Z\), such that \(\mathbb{E}\left[(\sum_{j=1}^{d}\|A_{j}\|_{\mathbb{H}}^{2})^{\gamma/2}\right]<\infty\) for some \(\gamma>\alpha\). Then,_
\[X=\sum_{j=1}^{d}Z_{j}A_{j}\]
_is regularly varying in \(\mathbb{H}\) with limit measure \(\tilde{\mu}(\,\cdot\,)=\mathbb{E}\left[\mu\{x\in\mathbb{R}^{d}:\sum_{j=1}^{d} A_{j}x\in(\,\cdot\,)\}\right]\)._
Proof.: In their Proposition A.1, Basrak et al. (2002) consider the case where \(A_{j}\in\mathbb{R}^{q}\) and \(\mathbf{A}=(A_{1},\ldots,A_{d})\) is a \(q\times d\) matrix. In their proof, they use the operator norm for \(\mathbf{A}\), but because all norms are equivalent in that case, their argument remains valid with the finite-dimensional Hilbert-Schmidt norm. In this finite-dimensional context, \(\|\mathbf{A}\|\) is equal to \((\sum_{j=1}^{d}\|A_{j}\|_{2}^{2})^{1/2}\), where \(\|\,\cdot\,\|_{2}\) is the Euclidean norm. An inspection of the arguments in their proof shows that they also apply to the case where \(A_{j}\in\mathbb{H}\), up to replacing \(\|A_{j}\|_{2}\) with \(\|A_{j}\|_{\mathbb{H}}\) and \(\|\mathbf{A}\|\) with \((\sum_{j=1}^{d}\|A_{j}\|_{\mathbb{H}}^{2})^{1/2}\). In particular Pratt's lemma is applicable because Fatou's lemma is valid for nonnegative Hilbert-space-valued functions.
The remainder of this section aims at providing some insight on specific properties of RV in \(\mathbb{H}\), as compared with RV in general separable metric spaces as introduced by Hult and Lindskog
(2006) or, at the other end of the spectrum, RV in a Euclidean space. On the one hand, we focus on possible finite-dimensional characterizations of RV in \(\mathbb{H}\), with a view towards statistical applications in which abstract convergence conditions in an infinite dimensional space cannot be verified on real data, while finite-dimensional conditions may serve as a basis for statistical tests. Although we do not go as far as proposing such rigorous statistical procedures, we do suggest in the experimental section some convergence diagnostics relying on the results gathered in this section. On the other hand we discuss the relationships existing between RV in \(\mathcal{C}[0,1]\) and RV in \(\mathbb{H}=L^{2}[0,1]\).
### Finite-dimensional Characterizations of Regular Variation in \(\mathbb{H}\)
RV random elements in \(\mathbb{H}\) have been present in the literature for a long time, due to strong connections between RV and domains of attraction of stable laws, in general spaces and in separable Hilbert spaces in particular. As an example, Kuelbs and Mandrekar (1974) show (through their Lemma 4.1 and their Theorem 4.11) that a random element in \(\mathbb{H}\) which is in the domain of attraction of a stable law of index \(0<\alpha<2\) is regularly varying. However this connection does not yield any finite-dimensional characterization, which is our main focus here.
As a first step we recall Proposition 2.1 from Kim and Kokoszka (2022), making a first connection between regular variation in \(\mathbb{H}\) and regular variation of finite-dimensional (_fidi_ in abbreviated form) projections. Let \((e_{i},i\in\mathbb{N})\) be a complete orthonormal system in \(\mathbb{H}\). For \(\mathcal{I}=(i_{1},\ldots,i_{N})\) a finite set of indices with cardinality \(N\geq 1\), denote by \(\pi_{\mathcal{I}}\) the 'coordinate projection' onto the associated finite family, \(\pi_{\mathcal{I}}(x)=(\langle x,e_{i_{1}}\rangle,\ldots,\langle x,e_{i_{N}}\rangle),x\in\mathbb{H}\). In particular we denote by \(\pi_{N}:\mathbb{H}\to\mathbb{R}^{N}\) the projection onto the first \(N\) elements of the basis \((e_{i},i\in\mathbb{N})\).
**Proposition 3.1** (RV in \(\mathbb{H}\) implies multivariate RV of _fidi_ projections).: _If a random element \(X\) of \(\mathbb{H}\) is regularly varying with index \(\alpha>0\), then for every finite index set \(\mathcal{I}\) of size \(N\geq 1\), \(\pi_{\mathcal{I}}X\) is multivariate RV in \(\mathbb{R}^{N}\)._
One natural question to ask is whether the converse of Proposition 3.1 is true. We answer in the negative in Proposition 3.2 below.
**Proposition 3.2** (Multivariate RV of _fidi_ projections does not imply RV in \(\mathbb{H}\)).: _The converse of Proposition 3.1 is not true. In particular there exists a random element \(X\) in \(\mathbb{H}\) which is not RV, while_
1. _for all_ \(N\in\mathbb{N}^{*}\)_,_ \(\pi_{N}X\) _is multivariate RV in_ \(\mathbb{R}^{N}\) _with same index_ \(\alpha>0\) _;_
2. _the norm of_ \(X\) _is RV in_ \(\mathbb{R}\) _with index_ \(\alpha\)_._
Sketch of proof.: We construct a random element \(X\) in \(\mathbb{H}\) in such a way that the probability mass of its angular component \(\Theta\), given the radial component \(R\), escapes to infinity as \(R\) grows. Here, _at infinity_ must be understood as \(\text{span}(e_{i},i\geq M)\) as \(M\to\infty\). Namely let \(X:=R\Theta\) with radial component \(R=\|X\|\sim Pareto(\alpha)\) on \([1,+\infty[\) (_i.e._\(\forall t\geq 1,\mathbb{P}\left(R\geq t\right)=t^{-\alpha}\)) and define the conditional distribution of \(\Theta\) given \(R\) as the mixture of Dirac masses:
\[\mathcal{L}(\Theta|R)=\frac{1}{\sum_{l=1}^{\lfloor R\rfloor}1/l}\sum_{i=1}^{ \lfloor R\rfloor}\frac{1}{i}\delta_{e_{i}}.\]
In other words, for \(i\leq R\), we have \(\Theta=e_{i}\) with probability proportional to \(1/i\). The remainder of the proof, deferred to the Appendix, consists in verifying that \((i)\) all finite-dimensional projections of \(X\) are RV; \((ii)\) asymptotic finite-dimensionality (see Equation (2.3)) of the family of conditional distributions \(P_{\Theta,t}\) does not hold, hence this family cannot converge to any limit distribution, so that Condition (ii) from Proposition 2.1 fails and \(X\) cannot be RV.
The counter-example above suggests that the missing assumption needed to obtain RV in \(\mathbb{H}\) is some relative compactness criterion. This is partly confirmed in the next example, where the angular variable \(\Theta_{t}\) is again a mixture model supported by the \(e_{i}\)'s, but where the probability mass of the conditional distribution of \(\Theta\) given \(\|X\|\) concentrates around finite-dimensional spaces. The proof, postponed to the Appendix, proceeds by verifying both conditions from Proposition 2.1.
**Example 3.2**.: _Let \(R\sim Pareto(\alpha)\) on \([1,+\infty[\) and define \(\Theta\) through its conditional distribution given \(R=r\), for \(r\geq 1\),_
\[\mathcal{L}(\Theta|R=r)=\frac{1}{\sum_{l=1}^{\lfloor r\rfloor}1/l^{2}}\sum_{i =1}^{\lfloor r\rfloor}\frac{1}{i^{2}}\delta_{e_{i}}. \tag{3.1}\]
_In words, \(\Theta\in\{e_{1},e_{2},...\}\) and \(\forall r\geq 1,\forall j\in\mathbb{N}^{*}\) such that \(j\leq r\), we have \(\mathbb{P}\left(\Theta=e_{j}|R=r\right)=\frac{1/j^{2}}{\sum_{l=1}^{\lfloor r \rfloor}1/l^{2}}\)._
_Then, the random element \(X=R\Theta\) is regularly varying in \(\mathbb{H}\) with index \(\alpha\) with limit angular random variable \(\Theta_{\infty}\) given by_
\[\mathbb{P}\left(\Theta_{\infty}=e_{j}\right)=\frac{6}{(\pi j)^{2}}, \tag{3.2}\]
_for \(j\in\mathbb{N}^{*}\)._
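As a quick numerical sanity check of Example 3.2 (a sketch we add here, with arbitrary evaluation points), the conditional weights in (3.1) can be evaluated directly: as \(r\to\infty\) the normalizing partial sums tend to \(\pi^{2}/6\), so the conditional probabilities approach the limit law (3.2).

```python
import numpy as np

def cond_prob(j, r):
    # P(Theta = e_j | R = r) = (1/j^2) / sum_{l <= floor(r)} 1/l^2, as in (3.1)
    m = int(np.floor(r))
    return (1.0 / j**2) / np.sum(1.0 / np.arange(1, m + 1) ** 2)

for r in (10.0, 1e3, 1e6):
    print([round(cond_prob(j, r), 4) for j in (1, 2, 3)])     # conditional probabilities
print([round(6 / (np.pi * j) ** 2, 4) for j in (1, 2, 3)])    # limit probabilities (3.2)
```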
The next proposition confirms the intuition built up by the above examples that asymptotic finite-dimensionality is a necessary additional assumption, on top of RV of the finite-dimensional projections and of the norm.
**Proposition 3.3**.: _Let \(X\) be a \(\mathbb{H}\)-valued random element. The three conditions below are equivalent._
1. \(X\) _is regularly varying in_ \(\mathbb{H}\) _with index_ \(\alpha>0\)_, limit measure_ \(\mu\) _and normalizing sequence_ \(b_{n}>0\)_,_ i.e.__\(\mu_{n}=b_{n}\mathbb{P}\left(X\in n\,\cdot\,\right)\stackrel{{ M_{0}}}{{\longrightarrow}}\mu(\,\cdot\,)\)_._
2. _The family of measures_ \((\mu_{n})_{n\geq 1}\) _is relatively compact in_ \(M_{0}(\mathbb{H})\)_-topology, and for all_ \(N\in\mathbb{N}\)_,_ \(\pi_{N}X\) _is regularly varying in_ \(\mathbb{R}^{N}\) _with limit measure_ \(\mu_{N}=\mu\circ\pi_{N}^{-1}\)_, index_ \(\alpha\) _and scaling sequence_ \(b_{n}\)_._
3. _The family of measures_ \((\mu_{n})_{n\geq 1}\) _is relatively compact in_ \(M_{0}(\mathbb{H})\)_-topology, and for all_ \(h\in\mathbb{H}_{0}\)_,_ \(\langle X,h\rangle\) _is regularly varying in_ \(\mathbb{R}\) _with limit measure_ \(\mu_{h}=\mu\circ(h^{*})^{-1}\)_, index_ \(\alpha\) _and scaling sequence_ \(b_{n}\)_, where_ \(h^{*}(x)=\langle h,x\rangle\)_._
Proof.: **1. \(\Rightarrow\) 2. and 3.**
If \(X\) is RV as in statement 1., then \((\mu_{n})_{n\geq 1}\) converges in the \(M_{0}(\mathbb{H})\) topology and the family is of course relatively compact. Also fix \(N\geq 1\) and notice that \(\pi_{N}\) is a continuous mapping from \((\mathbb{H},\|\,\cdot\,\|)\) to \(\mathbb{R}^{N}\) endowed with the Euclidean norm. The same is true for the bounded linear functional \(h^{*}\). The continuous mapping theorem in \(M_{0}\) (see Hult and Lindskog (2006), Theorem 2.5) ensures that \(\mu_{n}\circ\pi_{N}^{-1}\stackrel{{ M_{0}}}{{\longrightarrow}}\mu\circ\pi_{N}^{-1}\) in \(\mathbb{R}^{N}\) and that \(\mu_{n}\circ(h^{*})^{-1}\stackrel{{ M_{0}}}{{\longrightarrow}}\mu\circ(h^{*})^{-1}\) in \(\mathbb{R}\).
**2. \(\Rightarrow\) 1.** If \((\mu_{n})_{n\geq 1}\) is relatively compact, the sequence \(\mu_{n}\) converges in \(M_{0}(\mathbb{H})\) if and only if any two subsequential limits \(\mu^{1},\mu^{2}\) coincide. However it follows from the previous implication that in such a case, the finite-dimensional projections of \(\mu^{1}\) and \(\mu^{2}\) coincide, namely \(\mu^{1}\circ\pi_{N}^{-1}=\mu^{2}\circ\pi_{N}^{-1}=\mu\circ\pi_{N}^{-1}\), for every integer \(N\). Consider the family of cylinder sets of \(\mathbb{H}\) with measurable base, \(\mathcal{C}=\{\pi_{N}^{-1}(A),A\in\mathcal{B}(\mathbb{R}^{N}),N\in\mathbb{N}^{*}\}\). On \(\mathcal{C}\) the measures \(\mu,\mu^{1},\mu^{2}\) coincide. The cylinder set family \(\mathcal{C}\) is a \(\pi\)-system which generates the Borel \(\sigma\)-field, because it is associated with the family of bounded linear functionals \((e_{i}^{*},i\in\mathbb{N})\), which separates points. Thus \(\mu,\mu^{1},\mu^{2}\) coincide on every Borel set, and the proof is complete.
**3. \(\Rightarrow\) 1.** As above, it is enough to show that any two subsequential limits \(\mu^{1},\mu^{2}\) coincide. This is indeed the case, because it is known that the Borel \(\sigma\)-field on \(\mathbb{H}\) is generated by the mappings \(h^{*}\) (_e.g._Hsing and Eubank (2015), Theorem 7.1.1.).
The line of thought of Proposition 3.3 may be pursued further by characterizing the property of relative compactness of a family \((\nu_{n})_{n\in\mathbb{N}}\in M_{0}(\mathbb{H})\) through asymptotic finite-dimensionality (see Equation (2.3)), following the lines of the proof of Theorem 4.3 in Hult and Lindskog (2006), relying in particular on Theorem 2.6 of the cited reference. However it is also possible to rely on known characterizations of relative compactness for probability measures, coupled with the polar characterization of RV (Proposition 2.1). We propose in this spirit the following simple characterization solely based on weak convergence of univariate and finite-dimensional projections, together with regular variation of the norm, without additional requirements regarding asymptotic finite-dimensionality.
**Theorem 3.1**.: _Let \(X\) be a random element in \(\mathbb{H}\) and let \(\Theta_{t}\) be a random element in \(\mathbb{H}\) distributed on the sphere \(\mathbb{S}\) according to the conditional angular distribution \(P_{\Theta,t}\). Let \(P_{\Theta,\infty}\) denote a probability measure on \((\mathbb{S},\mathcal{B}(\mathbb{S}))\) and let \(\Theta_{\infty}\) be a random element distributed according to \(P_{\Theta,\infty}\). The following statements are equivalent._
1. \(X\) _is regularly varying with index_ \(\alpha\) _with limit angular measure_ \(P_{\Theta,\infty}\)_, so that_ \(P_{\Theta,t}\xrightarrow{w}P_{\Theta,\infty}\)_._
2. \(\|X\|\) _is regularly varying in_ \(\mathbb{R}\) _with index_ \(\alpha\)_, and_ \[\forall h\in\mathbb{H},\langle\Theta_{t},h\rangle\xrightarrow{w}\langle\Theta_ {\infty},h\rangle\quad\text{ as }t\to\infty.\]
3. \(\|X\|\) _is regularly varying in_ \(\mathbb{R}\) _with index_ \(\alpha\)_, and_ \[\forall N\in\mathbb{N},\pi_{N}(\Theta_{t})\xrightarrow{w}\pi_{N}(\Theta_{ \infty})\quad\text{ as }t\to\infty.\]
Proof.: The fact that 1 implies 2 and 3 is a direct consequence of the polar characterization of RV (Proposition 2.1) and of the continuous mapping theorem applied to the bounded linear mappings \(h^{*}\), \(h\in\mathbb{H}\) and \(\pi_{N}\), \(N\in\mathbb{N}\).
For the reverse implications (3 \(\Rightarrow\) 1) and (2 \(\Rightarrow\) 1), in view of Proposition 2.1, we only need to verify that for any sequence \(t_{n}>0\) such that \(t_{n}\to\infty\), \(\Theta_{t_{n}}\xrightarrow{w}\Theta_{\infty}\) in \(\mathbb{H}\). From Theorem 2.1, if either Condition 2 or Condition 3 holds true, then this is the case if and only if the family \(P_{\Theta,t_{n}},n\in\mathbb{N}\), is asymptotically finite-dimensional.
We use the fact, stated and proved in Tsukuda (2017), that if \((Z_{n},n\in\mathbb{N})\) and \(Z\) are \(\mathbb{H}\)-valued random elements such that, as \(n\to\infty\),
\[\mathbb{E}[\|Z_{n}\|^{2}]\to\mathbb{E}[\|Z\|^{2}], \tag{3.3}\]
and for all \(j\in\mathbb{N}^{*}\)
\[\mathbb{E}[\langle Z_{n},e_{j}\rangle^{2}]\to\mathbb{E}[\langle Z,e_{j}\rangle^{2}], \tag{3.4}\]
then the sequence \((Z_{n})_{n\in\mathbb{N}}\) is asymptotically finite-dimensional.
With \(Z_{n}=\Theta_{t_{n}}\) and \(Z=\Theta_{\infty}\), Condition (3.3) above is immediately satisfied since \(\|\Theta_{t_{n}}\|=\|\Theta_{\infty}\|=1\) almost surely. For the same reason \(\mathbb{E}\left[\langle\Theta_{t_{n}},e_{j}\rangle^{2}\right]=\mathbb{E} \left[\varphi(\langle\Theta_{t_{n}},e_{j}\rangle)\right]\), where \(\varphi\) is the bounded, continuous function \(\varphi(z)=\min(z^{2},1)\). Thus, weak convergence of the projections \(\langle\Theta_{t_{n}},e_{j}\rangle\) (Condition 2 or 3 from the statement) together with the continuous mapping theorem imply (3.4), which concludes the proof.
### Regular Variation in \(L^{2}[0,1]\) _vs_ Regular Variation in \(\mathcal{C}[0,1]\)
Turning to the case where \(\mathbb{H}=L^{2}[0,1]\), we discuss the relationships between the notions of regular variation in \(L^{2}[0,1]\) and in \(\mathcal{C}[0,1]\), the space of continuous functions on \([0,1]\). Indeed, any continuous stochastic process \((X_{t},t\in[0,1])\) is also a random element in \(\mathbb{H}=L^{2}[0,1]\), as proved in Hsing and Eubank (2015), Theorem 7.4.1, or 7.4.2. It is thus legitimate to ask whether regular variation with respect to one norm implies regular variation for the other norm for such stochastic processes.
**Proposition 3.4**.: _Let \(X\) be a continuous process over \([0,1]\). Assume that \(X\in RV_{-\alpha}(\mathcal{C}[0,1])\), with \(\mathcal{L}(X/\|X\|_{\infty}\mid\|X\|_{\infty}>t)\to\mathcal{L}(\Theta_{\infty,\infty})\), as \(t\to+\infty\), where \(\Theta_{\infty,\infty}\) is the angular limit process w.r.t. the sup-norm \(\|\cdot\|_{\infty}\). Then, \(X\in RV_{-\alpha}(L^{2}[0,1])\), and the angular limit process \(\Theta_{\infty,2}\) (w.r.t. the \(L^{2}\) norm \(\|\cdot\|\)) has distribution given by_
\[\mathbb{P}\left(\Theta_{\infty,2}\in B\right)=\frac{\mathbb{E}\big{[}\|\Theta _{\infty,\infty}\|^{\alpha}\mathbbm{1}\{\Theta_{\infty,\infty}/\|\Theta_{ \infty,\infty}\|\in B\}\big{]}}{\mathbb{E}\big{[}\|\Theta_{\infty,\infty}\|^{ \alpha}\big{]}}, \tag{3.5}\]
_where \(B\in\mathcal{B}(\mathbb{S}_{2})\)._
Proof.: Since \(\|\cdot\|\) is continuous w.r.t. \(\|\cdot\|_{\infty}\) in \(\mathcal{C}[0,1]\), Theorem 3 in Dombry and Ribatet (2015) applies (upon choosing \(\ell(X)=\|X\|\) with the notations of the cited reference), which yields regular variation of \(X\) in \(L^{2}[0,1]\), together with the expression given in (3.5) for the angular measure associated with the \(L^{2}\) norm \(\|\,\cdot\,\|\).
One may wonder whether the converse is also true, _i.e._ if \(X\in\mathcal{C}[0,1]\) and \(X\in RV_{-\alpha}(L^{2}[0,1])\), is it necessarily the case that \(X\in RV_{-\alpha}(\mathcal{C}[0,1])\)? A counter-example is given in the next proposition.
**Proposition 3.5**.: _The reverse statement of Proposition 3.4 is not true in general. There exists a sample-continuous stochastic process over \([0,1]\) which is regularly varying in \(L^{2}[0,1]\) but not in \(\mathcal{C}[0,1]\)._
Proof.: We construct a 'spiked' continuous process with controlled \(L^{2}\) norm, while the sup-norm is super-heavy tailed. Let \(Z\) follow a Pareto distribution with parameter \(\alpha_{Z}>0\), and define a sample-continuous stochastic process
\[Y(t)=\Big{(}1-\frac{t}{3Z^{2}\exp(-2Z)}\Big{)}\exp(Z)\mathbbm{1}\{t\in[0,3Z^{2}\exp(-2Z)[\}.\]
Straightforward computations yield \(\|Y\|_{\infty}=\exp(Z)\) and \(\|Y\|_{2}=Z\). Let \(\rho\) be another independent Pareto-distributed variable with index \(0<\alpha_{\rho}<\alpha_{Z}\). Finally, define \(X=\rho Y\). Then \(X\) is a sample-continuous stochastic process over \([0,1]\). We have \(\|X\|_{\infty}=\rho\exp(Z)\), which is clearly not regularly
varying because (see _e.g._ Mikosch (1999), Proposition 1.3.2) \(\mathbb{E}[\|X\|_{\infty}^{\delta}]=+\infty\) for all \(\delta>0\). Thus, \(X\) is not regularly varying in \((\mathcal{C}[0,1],\|\cdot\|_{\infty})\).
On the other hand, the pair \((\rho,Y)\) satisfies the assumptions of Example 3.1 with \(d=1\). Hence, \(X=\rho Y\) is regularly varying in \(\mathbb{H}=L^{2}[0,1]\).
Propositions 3.5 and 3.4 together show that the framework of \(L^{2}\)-regular variation encompasses a wider class of continuous processes than standard \(\mathcal{C}[0,1]\) regular variation. This opens a road towards applications of EVT in situations where the relevant definition of an extreme event has to be understood in terms of the 'energy' of the (continuous) trajectory, as measured by the \(L^{2}\) norm, rather than in terms of the sup-norm.
## 4 Principal Component Analysis of Extreme Functions
This section gathers the main results of the paper. Motivated by dimension reduction purposes, our goal is to construct a finite-dimensional representation of extreme functions. In other words our primary purpose is to learn a finite-dimensional subspace \(V\) of \(\mathbb{H}=L^{2}[0,1]\) such that the orthogonal projections of extreme functions onto \(V\) are optimal in terms of angular reconstruction error. Throughout this section we place ourselves in the setting of regular variation introduced in Section 3 and consider a regularly varying random element \(X\) in \(\mathbb{H}\) as in Theorem 3.1, with the same notations. Our focus is thus on building a low-dimensional representation of the angular distribution of extremes \(P_{\Theta,\infty}\) introduced in Section 2.1. We consider the eigen decomposition of the associated covariance operator
\[C_{\infty}=\mathbb{E}\left[\Theta_{\infty}\otimes\Theta_{\infty}\right]=\sum_ {j\in\mathbb{N}}\lambda_{\infty}^{j}\varphi_{j}^{\infty}\otimes\varphi_{j}^{ \infty},\]
where \(\Theta_{\infty}\sim P_{\Theta,\infty}\), and the \(\varphi_{j}^{\infty}\)'s and \(\lambda_{\infty}^{j}\)'s are eigenfunctions and eigenvalues of \(C_{\infty}\), following the notations of Section 2.3. If \(P_{\Theta,\infty}\) is sufficiently concentrated around a finite-dimensional subspace of moderate dimension \(p\), a reasonable approximation of \(P_{\Theta,\infty}\) is provided by its image measure _via_ the projection onto \(V_{\infty}^{p}=\operatorname{Vect}(\varphi_{j}^{\infty},j\leq p)\). Independently of such sparsity assumptions, the space \(V_{\infty}^{p}\) minimizes the reconstruction error (2.7) of the orthogonal projection relative to \(\Theta_{\infty}\). It is also the unique minimizer as soon as \(\lambda_{\infty}^{p}>\lambda_{\infty}^{p+1}\), as discussed in the background Section 2.3.
Our main results bring finite-sample guarantees regarding an empirical version of \(V_{\infty}^{p}\) constructed using the \(k\ll n\) largest observations. In this respect our work may be seen as an extension of Drees and Sabourin (2021), who consider finite-dimensional observations \(X\in\mathbb{R}^{d}\), to an infinite-dimensional ambient space. However our proof techniques are fundamentally different from those of the cited reference. Indeed their analysis relies on Empirical Risk Minimization arguments relative to the reconstruction risk at infinity, \(R_{\infty}(V)=\lim_{t\to\infty}\mathbb{E}\left[\|\Theta-\Pi_{V}\Theta\|^{2}\mid R>t\right]\), where \(\Pi_{V}\) denotes the orthogonal projection onto \(V\). The main ingredients of their analysis are \((i)\) the fact that \(V_{\infty}^{p}\) minimizes the risk at infinity and \((ii)\) compactness of the unit sphere (or of any bounded, closed subset of \(\mathbb{R}^{d}\)). In the present setting such compactness properties do not hold and we follow an entirely different path, as we investigate the convergence of an empirical version of \(C_{\infty}\) in the Hilbert-Schmidt norm, and then rely on perturbation theory for covariance operators in order to control the deviations of its eigenspaces. We thus consider the pre-asymptotic covariance operator
\[C_{t}=\mathbb{E}\left[\Theta\otimes\Theta\mid R>t\right]=\mathbb{E}\left[ \Theta_{t}\otimes\Theta_{t}\right]. \tag{4.1}\]
In the sequel, the discrepancy between finite dimensional linear subspaces of \(\mathbb{H}\) is measured in terms of the Hilbert-Schmidt norm of the difference between orthogonal projections, namely we define a distance \(\rho\) between finite dimensional subspaces \(V,W\) of \(\mathbb{H}\), by
\[\rho(V,W)=\|\Pi_{V}-\Pi_{W}\|_{HS(\mathbb{H})}.\]
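For concreteness, here is a hedged finite-dimensional sketch of the distance \(\rho\): when \(V\) and \(W\) are represented by matrices with orthonormal columns (a discretized setting and variable names of our own choosing), the orthogonal projections are explicit and the Hilbert-Schmidt norm reduces to the Frobenius norm.

```python
import numpy as np

# Sketch of rho(V, W) = ||Pi_V - Pi_W||_HS when V, W are given by orthonormal bases
# B_V, B_W (columns), so that Pi_V = B_V B_V^T and Pi_W = B_W B_W^T.
def subspace_distance(BV, BW):
    PV, PW = BV @ BV.T, BW @ BW.T
    return np.linalg.norm(PV - PW, "fro")   # Frobenius norm = Hilbert-Schmidt norm

# toy check: two one-dimensional subspaces of R^3 spanned by orthonormal vectors
BV = np.array([[1.0], [0.0], [0.0]])
BW = np.array([[0.0], [1.0], [0.0]])
print(subspace_distance(BV, BV), subspace_distance(BV, BW))   # 0.0 and sqrt(2)
```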
It should be noticed that Drees and Sabourin (2021) denote by \(\rho\) the operator norm of the difference between the projections, which is coarser than the Hilbert-Schmidt one.
We show in Section 4.1 that the first \(p\) eigenfunctions of the pre-asymptotic operator \(C_{t}\) generate a vector space \(V_{t}^{p}\) converging to \(V_{\infty}^{p}\) whenever \(\lambda_{\infty}^{p}>\lambda_{\infty}^{p+1}\). Second, we establish in Section 4.2 the consistency of the empirical subspace \(\widehat{V}_{t}^{p}\) (the one generated by the first \(p\) eigenfunctions of an empirical version of \(C_{t}\)) and we derive nonasymptotic guarantees for its deviations, based on concentration inequalities regarding the empirical covariance operator.
### The Pre-asymptotic Covariance Operator and its Eigenspaces
Since perturbation theory makes it possible to control the deviations of eigenvectors and eigenvalues of a perturbed covariance operator, a natural first step in our analysis is to ensure that the pre-asymptotic operator \(C_{t}\) introduced in (4.1) may indeed be seen as a perturbed version of the asymptotic operator \(C_{\infty}\), as shown next.
**Theorem 4.1** (Convergence of the pre-asymptotic covariance operator).: _In the setting of Theorem 3.1, as \(t\to\infty\), the following convergence in the Hilbert-Schmidt norm holds true,_
\[\|C_{t}-C_{\infty}\|_{HS(\mathbb{H})}\to 0\,.\]
Proof.: Recall from Proposition 2.1 that RV of \(X\) implies weak convergence of the net \(\Theta_{t}\) towards \(\Theta_{\infty}\). Using the fact that the mapping \(h\in\mathbb{H}\mapsto h\otimes h\in HS(\mathbb{H})\) is continuous, also \(\Theta_{t}\otimes\Theta_{t}\) converges weakly towards \(\Theta_{\infty}\otimes\Theta_{\infty}\). Let \((t_{n})_{n\in\mathbb{N}}\) be a nondecreasing sequence of reals converging to infinity.
Since the separability of \((\mathbb{H},\langle\cdot,\cdot\rangle)\) implies the separability of \((HS(\mathbb{H}),\langle\cdot,\cdot\rangle_{HS(\mathbb{H})})\) (see Blanchard et al. (2007), Section 2.1), we may apply Skorokhod's representation theorem to the weakly converging sequence \(\Theta_{t_{n}}\otimes\Theta_{t_{n}}\). Thus there is a probability space \((\Omega^{\prime},\mathcal{F},\mathbb{P}^{\prime})\) and random elements \(Y_{n}\), \(n\geq 1\), and \(Y_{\infty}\) in \(HS(\mathbb{H})\) defined on this probability space, such that \(\Theta_{t_{n}}\otimes\Theta_{t_{n}}\stackrel{{ d}}{{=}}Y_{n}\), \(\Theta_{\infty}\otimes\Theta_{\infty}\stackrel{{ d}}{{=}}Y_{\infty}\) and \(Y_{n}\) converges to \(Y_{\infty}\) almost surely with respect to \(\mathbb{P}^{\prime}\).
A Jensen-type inequality in Hilbert spaces (see Ledoux and Talagrand (1991), pp. 42-43) yields \(\|C_{t_{n}}-C_{\infty}\|_{HS(\mathbb{H})}\leq\mathbb{E}[\|Y_{n}-Y_{\infty}\|_{HS(\mathbb{H})}]\). The dominated convergence theorem applied to the vanishing sequence of random variables \(\|Y_{n}-Y_{\infty}\|_{HS(\mathbb{H})}\) (which are bounded by the constant 2) completes the proof.
_Remark 4.1_.: An alternative way to obtain the weak convergence of \(\Theta_{t}\otimes\Theta_{t}\), which is key in the proof of Theorem 4.1, is to leverage Proposition 3.2 in Kokoszka et al. (2019), which ensures that the operator \(X\otimes X\) is regularly varying in \(HS(\mathbb{H})\). Since \(\Theta\otimes\Theta\) is indeed the angular component of \(X\otimes X\), the result follows by an application of Proposition 2.1.
The next result concerns the convergence of eigenspaces and is obtained by combining tools from operator perturbation theory with the result from Theorem 4.1. In order to avoid additional technicalities we consider in the next statement an integer \(p\) such that \(\lambda_{\infty}^{p}>\lambda_{\infty}^{p+1}\geq 0\), that is, a
positive spectral gap. Notice that such a \(p\) necessarily exists since \(\|C_{\infty}\|_{HS(\mathbb{H})}^{2}=\sum_{j=1}^{\infty}(\lambda_{\infty}^{j})^{2}<\infty\).
**Corollary 4.1** (Convergence of pre-asymptotic eigen spaces).: _Let \(p\in\mathbb{N}^{*}\) be such that \(\lambda_{\infty}^{p}>\lambda_{\infty}^{p+1}\). Then, as \(t\) tends to infinity,_
\[\rho(V_{t}^{p},V_{\infty}^{p})\to 0.\]
Proof.: According to Theorem 3 in Zwald and Blanchard (2005), for \(A\) and \(B\) two Hilbert-Schmidt operators on a separable Hilbert space, and an integer \(p\) such that the ordered eigenvalues of \(A\) satisfy \(\lambda^{p}(A)>\lambda^{p+1}(A)\), if \(\|B\|_{HS(\mathbb{H})}<\gamma^{p}:=\frac{\lambda^{p}(A)-\lambda^{p+1}(A)}{2}\) and \(A+B\) is still a positive operator, then the following inequality holds
\[\rho(V^{p},W^{p})\leq\frac{\|B\|_{HS(\mathbb{H})}}{\gamma^{p}},\]
where \(V^{p}\) and \(W^{p}\) are respectively the eigenspaces spanned by the first \(p\) eigenvectors of \(A\) and \(A+B\). From Theorem 4.1, the operators \(A=C_{\infty}\) and \(B=C_{t}-C_{\infty}\) satisfy the required assumptions stated above for \(t\) sufficiently large, and \(\|B\|_{HS(\mathbb{H})}\) may be made arbitrarily small, which concludes the proof.
_Remark 4.2_ (Convergence of eigenvalues and choice of \(p\)).: Even though the eigenvalues of \(C_{\infty}\) are not the main focus of our work, they are involved in the conditions of Corollary 4.1 through the requirement of a positive spectral gap. Of course these eigenvalues are unknown; however, Weyl's inequality (see Hsing and Eubank (2015), Theorem 4.2.8) ensures that \(\sup_{j\geq 1}|\lambda_{t}^{j}-\lambda_{\infty}^{j}|\leq\|C_{t}-C_{\infty}\|_{HS(\mathbb{H})}\). Identification of an integer \(p\) for which the eigen gap is positive may thus be achieved using consistent estimates of the \(\lambda_{t}^{j}\)'s for \(t\) large enough.
### Empirical Estimation: Consistency and Concentration Results
We now turn to statistical properties of empirical estimates of \(C_{t}\) and its eigen decomposition based on an independent sample \(X_{1},...,X_{n}\) distributed as \(X\). Following standard practice in Peaks-Over-Threshold analysis, we consider a fixed number of excesses \(k\) above a random radial threshold chosen as the empirical \(1-k/n\) quantile of the norm, with \(k\ll n\). Even though our main results are of nonasymptotic nature, letting \(k,n\to\infty\) with \(k/n\to 0\) yields consistency guarantees such as Corollary 4.3 below. Denote by \(X_{(1)},\ldots,X_{(n)}\) the permutation of the sample such that \(\|X_{(1)}\|\geq\|X_{(2)}\|\geq...\geq\|X_{(n)}\|\), and accordingly, let \(\Theta_{(i)},R_{(i)}\) denote the angular and radial components of \(X_{(i)}\). Then \(\|X_{(k)}\|=R_{(k)}\) is an empirical version of the \((1-k/n)\) quantile of the norm \(R\), which we shall sometimes denote by \(\widehat{t}_{n,k}\).
With these notations an empirical version of \(C_{t_{n,k}}\) is
\[\widehat{C}_{k}:=\frac{1}{k}\sum_{i=1}^{k}\Theta_{(i)}\otimes\Theta_{(i)}. \tag{4.2}\]
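A minimal sketch of the estimator (4.2) for discretized curves (illustrative code with variable and function names of our own choosing):

```python
import numpy as np

# Sketch of (4.2): keep the k sample points with largest norm, normalize them to the
# unit sphere, and average the outer products of the resulting angles.
def extreme_angular_covariance(X, k):
    """X: (n, d) array of discretized curves; returns the (d, d) matrix estimating C_hat_k."""
    norms = np.linalg.norm(X, axis=1)
    top = np.argsort(norms)[::-1][:k]          # indices of the k largest norms
    Theta = X[top] / norms[top, None]          # extreme angles Theta_(1), ..., Theta_(k)
    return Theta.T @ Theta / k                 # (1/k) sum_i Theta_(i) (x) Theta_(i)
```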
_Remark 4.3_.: (Choice of \(k\)) Choosing the number \(k\) of observations considered as extreme is an important but difficult topic in EVT. A wide variety of methods have been proposed in univariate problems (Caeiro and Gomes (2016); Scarrott and MacDonald (2012)); some rules of thumb exist in multivariate settings, based on visual inspection of angular histograms (Coles and Tawn (1994)) or on stability under rescaling of the radial distribution (Starica (1999)), with little theoretical foundation. We leave this question outside our scope, although visual diagnostics based on Hill plots, together with convergence checks based on the finite-dimensional characterizations of RV stated in Theorem 3.1, are proposed in our numerical study.
Our analysis of the statistical error \(\|\widehat{C}_{k}-C_{t_{n,k}}\|_{HS(\mathbb{H})}\) involves the intermediate pseudo empirical covariance
\[\overline{C}_{t}:=\frac{1}{\mathbb{P}\left(\|X_{1}\|\geq t\right)}\frac{1}{n}\sum_{i=1}^{n}\Theta_{i}\otimes\Theta_{i}\mathbb{1}\{R_{i}\geq t\},\]
evaluated at \(t=t_{n,k}\). Since \(t_{n,k}\) is unknown, \(\overline{C}_{t_{n,k}}=k^{-1}\sum_{i=1}^{n}\Theta_{i}\otimes\Theta_{i} \mathbb{1}\{R_{i}\geq t_{n,k}\}\) is not observable, although its deviation from \(\widehat{C}_{k}\) may be controlled by the classical Bernstein inequality (Proposition 4.2). Our point of departure is the following decomposition of the statistical error,
\[\|\widehat{C}_{k}-C_{t_{n,k}}\|_{HS(\mathbb{H})}\leq\|\overline{C}_{t_{n,k}}- C_{t_{n,k}}\|_{HS(\mathbb{H})}+\|\widehat{C}_{k}-\overline{C}_{t_{n,k}}\|_{ HS(\mathbb{H})}. \tag{4.3}\]
We analyze separately the two terms in the right-hand side of (4.3) in the next two propositions.
**Proposition 4.1**.: _Let \(\delta\in(0,1)\). With probability larger than \(1-\delta/2\), we have_
\[\|\overline{C}_{t_{n,k}}-C_{t_{n,k}}\|_{HS(\mathbb{H})}\leq\frac{1+4\sqrt{ \log(2/\delta)}}{\sqrt{k}}+\frac{8\log(2/\delta)}{3k}\]
Sketch of proof.: A Bernstein-type concentration inequality from McDiarmid (1998), which is applicable to arbitrary functions of \(n\) variables with controlled conditional variance and conditional range (Theorem 3.8 of the reference, recalled in Lemma B.1 from the Appendix), ensures that
\[\mathbb{P}\big{(}\|\overline{C}_{t_{n,k}}-C_{t_{n,k}}\|_{HS(\mathbb{H})}- \mathbb{E}[\ \|\overline{C}_{t_{n,k}}-C_{t_{n,k}}\|_{HS(\mathbb{H})}\ ]\geq\varepsilon\big{)}\leq\exp\Big{(}\frac{-k \varepsilon^{2}}{4(1+\varepsilon/3)}\Big{)}.\]
In order to control the expected deviation \(\mathbb{E}\left[\|\overline{C}_{t_{n,k}}-C_{t_{n,k}}\|_{HS(\mathbb{H})}\right]\) in the left-hand side, we use the fact that, if \(A_{1},...,A_{n}\) are independent centered \(\mathbb{H}\)-valued random elements, \(\mathbb{E}[\big{\|}\sum_{i=1}^{n}A_{i}\big{\|}^{2}]=\sum_{i=1}^{n}\mathbb{E} \left[\|A_{i}\|^{2}\right]\) (Lemma B.3 in the Appendix). We apply this result to \(A_{i}\) chosen as the deviation of the operator \(\Theta_{i}\otimes\Theta_{i}\mathbb{1}\{R_{i}\geq t_{n,k}\}\) from its expectation, which yields
\[\mathbb{E}[\|\overline{C}_{t_{n,k}}-C_{t_{n,k}}\|_{HS(\mathbb{H})}]\leq 1/ \sqrt{k}\]
(Lemma B.4) and finishes the proof, as detailed in Appendix B.
We now turn to the second term \(\|\widehat{C}_{k}-\overline{C}_{t_{n,k}}\|_{HS(\mathbb{H})}\) in the error decomposition (4.3).
**Proposition 4.2**.: _Let \(\delta\in(0,1)\). With probability larger than \(1-\delta/2\), we have_
\[\|\widehat{C}_{k}-\overline{C}_{t_{n,k}}\|_{HS(\mathbb{H})}\leq\sqrt{\frac{8 \log(4/\delta)}{k}}+\frac{4\log(4/\delta)}{3k}\;.\]
Proof.: First, the triangle inequality yields
\[\|\widehat{C}_{k}-\overline{C}_{t_{n,k}}\|_{HS(\mathbb{H})} =\frac{1}{k}\ \Big{\|}\ \sum_{i=1}^{n}\Theta_{i}\otimes\Theta_{i}( \mathbb{1}\{R_{i}\geq t_{n,k}\}-\mathbb{1}\{R_{i}\geq\widehat{t}_{n,k}\})\ \Big{\|}_{HS(\mathbb{H})}\] \[\leq\frac{1}{k}\ \sum_{i=1}^{n}|\ \mathbb{1}\{R_{i}\geq t_{n,k}\}- \mathbb{1}\{R_{i}\geq R_{(k)}\}\ |.\]
The number of non-zero terms inside the sum in the above display is the number of indices \(i\) such that '\(R_{i}<R_{(k)}\) and \(R_{i}\geq t_{n,k}\)', or the other way around, thus
\[\|\widehat{C}_{k}-\overline{C}_{t_{n,k}}\|_{HS(\mathbb{H})}\leq\frac{1}{k}\ \Big{|}\ \sum_{i=1}^{n}\mathbb{1}\{R_{i}\geq t_{n,k}\}-k\ \Big{|}.\]
Notice that \(\sum_{i=1}^{n}\mathbb{1}\{R_{i}\geq t_{n,k}\}\) follows a Binomial distribution with parameters \((n,k/n)\). The (classical) Bernstein's inequality as stated _e.g._ in McDiarmid (1998), Theorem 2.7, yields
\[\mathbb{P}\left(\|\widehat{C}_{k}-\overline{C}_{t_{n,k}}\|_{HS(\mathbb{H})} \geq\varepsilon\right)\leq\mathbb{P}\Big{(}\ \Big{|}\sum_{i=1}^{n}\mathbb{1}\{R_{i}\geq t_{n,k}\}-k\Big{|}\geq k \varepsilon\ \Big{)}\leq 2\exp\Big{(}\frac{-k\varepsilon^{2}}{2(1+\varepsilon/3)}\Big{)}.\]
Solving for \(\varepsilon\) and using the fact that \(\sqrt{a+b}\leq\sqrt{a}+\sqrt{b}\) for any nonnegative numbers \(a,b\), we obtain the upper bound in the statement.
We are now ready to state a non-asymptotic guarantee regarding the deviations (in the HS-norm) of the empirical covariance operator.
**Theorem 4.2**.: _Let \(\delta\in(0,1)\). With probability larger than \(1-\delta\), we have_
\[\|\widehat{C}_{k}-C_{t_{n,k}}\|_{HS(\mathbb{H})}\leq\frac{1+4\sqrt{\log(2/ \delta)}+\sqrt{8\log(4/\delta)}}{\sqrt{k}}+\frac{8\log(2/\delta)+4\log(4/ \delta)}{3k}\]
Proof.: Observe that the following inclusion between adverse events holds true because of (4.3),
\[\big{\{}\|\widehat{C}_{k}-C_{t_{n,k}}\|_{HS(\mathbb{H})}\geq\varepsilon_{1}+ \varepsilon_{2}\big{\}}\ \subset\big{\{}\|\widehat{C}_{k}-\overline{C}_{t_{n,k}}\|_{HS(\mathbb{H})} \geq\varepsilon_{1}\big{\}}\ \cup\ \big{\{}\|\overline{C}_{t_{n,k}}-C_{t_{n,k}}\|_{HS(\mathbb{H})}\geq \varepsilon_{2}\big{\}},\]
for all \(\varepsilon_{1},\varepsilon_{2}>0\). A union bound and Propositions 4.1 and 4.2 conclude the proof.
_Remark 4.4_ (Tightness of the upper bound, asymptotics).: The bound obtained in Theorem 4.2 constitutes a minimal guarantee regarding covariance estimation of the extremes. By no means do we claim optimality regarding the multiplicative constants, which we have not tried to optimize, as revealed by an inspection of the proof, where the decomposition of the adverse event into two events of the same probability may be sub-optimal. However the leading term of the error as \(k\to\infty\) is an explicit, moderate constant and the rate of convergence is \(1/\sqrt{k}\), which matches known asymptotic rates in the literature on tail empirical processes in the univariate or multivariate case (see _e.g._ Einmahl and Mason (1988) or Aghbalou et al. (2023), Theorem 3). We leave to further research the question of the asymptotic behaviour of \(\widehat{C}_{k}-C_{t_{n,k}}\) as \(k,n\to\infty\), \(k/n\to 0\), a problem which could be attacked by means of Lindeberg central limit theorems in Hilbert spaces (Kundu et al. (2000)).
Combining Theorems 4.1 and 4.2, the following consistency result is immediate.
**Corollary 4.2** (Consistency).: _The empirical covariance of extreme angles \(\widehat{C}_{k}\) is consistent, i.e. as \(n,k\to\infty\) with \(k/n\to 0\),_
\[\|\widehat{C}_{k}-C_{\infty}\|_{HS(\mathbb{H})}\to 0\text{ in probability.}\]
Theorem 4.2 also provides a control of the deviations of the empirical eigenspaces, with a proof paralleling the one of Corollary 4.1. In the following statement we denote by \(\widehat{V}_{k}^{p}\) such an eigenspace, that is, the linear space generated by the first \(p\) eigen functions of \(\widehat{C}_{k}\).
**Corollary 4.3** (Deviations of empirical eigenspaces).: _Let \(p\in\mathbb{N}^{*}\) satisfy the same positive eigen gap assumption as in Corollary 4.1, that is, \(\gamma_{\infty}^{p}:=(\lambda_{\infty}^{p}-\lambda_{\infty}^{p+1})/2>0\). Denote the pre-asymptotic eigen gap by_
\[\gamma_{t}^{p}=\frac{\lambda_{t}^{p}-\lambda_{t}^{p+1}}{2}.\]
_Let \(n,k\) be large enough so that \(\gamma_{t_{n,k}}^{p}>0\) (see Remark 4.2 for the fact that \(\gamma_{t_{n,k}}^{p}\to\gamma_{\infty}^{p}>0\)). For \(\delta\in(0,1)\), with probability larger than \(1-\delta\), we have_
\[\rho(\widehat{V}_{k}^{p},V_{t_{n,k}}^{p})\leq\frac{B(n,k,\delta)}{\gamma_{t_{ n,k}}^{p}},\]
_where \(B(n,k,\delta)\) is the upper bound on the deviations of \(\widehat{C}_{k}\) stated in Theorem 4.2. In particular, we have the following consistency result as \(n,k\to\infty\) while \(k/n\to 0\),_
\[\rho(\widehat{V}_{k}^{p},V_{\infty}^{p})\to\;0\text{ in probability.}\]
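To make the objects in Corollary 4.3 concrete, the following hedged sketch extracts the empirical eigenspace \(\widehat{V}_{k}^{p}\) from a discretized estimate of \(\widehat{C}_{k}\) and measures its deviation from a reference subspace with the distance \(\rho\); the names `C_hat` and `V_ref` in the usage comment are placeholders for an estimated covariance matrix and a reference orthonormal basis.

```python
import numpy as np

# Sketch: empirical eigenspace (top-p eigenvectors of a discretized C_hat_k) and its
# deviation from a reference subspace, measured in the Hilbert-Schmidt (Frobenius) norm.
def top_eigenspace(C, p):
    eigvals, eigvecs = np.linalg.eigh(C)       # ascending eigenvalues
    return eigvecs[:, ::-1][:, :p]             # columns: first p (discretized) eigenfunctions

def rho(BV, BW):
    return np.linalg.norm(BV @ BV.T - BW @ BW.T, "fro")

# usage sketch (placeholders):
# V_hat = top_eigenspace(C_hat, p)
# print(rho(V_hat, V_ref))
```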
## 5 Illustrative Numerical Experiments
Two possible applications of PCA for functional extremes are considered here. In both contexts, our goal is to assess the usefulness of the proposed functional PCA method for extremes by comparing it with the closest alternative, namely functional PCA of the full sample (not only extremes). On the one hand, a typical objective is to identify likely profiles of extreme events, by which we mean a finite-dimensional subspace of \(\mathbb{H}\) with basis given by the eigenfunctions of \(C_{\infty}\) associated with the largest eigenvalues. In this context, extreme functional PCA serves as a pattern identification tool for a qualitative interpretation. This line of thought is illustrated in Section 5.1 on a toy simulated dataset in the multiplicative model of Example 3.1.
On the other hand, functional PCA of extremes may be viewed as a data compression tool allowing to represent functional extremes in a finite dimensional manner, with optimal reconstruction properties which would not be achieved by standard functional PCA. The relevance of this approach is demonstrated in Section 5.2 with an electricity demand dataset which is publicly available on the CRAN network. On this occasion we also propose visual diagnostics for functional regular variation according to finite-dimensional characterizations proposed in Section 3.
The electricity demand dataset thursdaydemand considered in Section 5.2 is available in the R package fds. It contains half-hourly electricity demands on Thursdays in Adelaide between 6/7/1997 and 31/3/2007. It is made of \(n=508\) observations \(X_{i}\), each of which is represented as a vector of size 48 containing the recorded half-hourly demand on day \(i\). Here an 'angle' is in practice the profile of the half-hourly records over one day, _i.e._ the original curve rescaled by its \(L^{2}\)-norm.
In our toy example (Section 5.1) we generate a functional regularly varying dataset of the same dimension \(d=48\) with larger sample size \(n=10^{4}\), according to Example 3.1. With the notations of the latter example, we choose \(Z\in\mathbb{R}^{6}\) with independent components, with \(Z_{1}\sim\text{Pareto}(0.5)\), \(Z_{2}\sim 0.8*\text{Pareto}(0.5)\), \(Z_{3}\sim\mathcal{N}(m=0,\sqrt{\sigma^{2}}=20)\), \(Z_{4}\sim\mathcal{N}(m=0,\sqrt{\sigma^{2}}=0.8*20)\), \(Z_{5}\sim\mathcal{N}(m=0,\sqrt{\sigma^{2}}=0.6*20)\), \(Z_{6}\sim\mathcal{N}(m=0,\sqrt{\sigma^{2}}=0.4*20)\), where \(\mathcal{N}(m,\sqrt{\sigma^{2}})\) is the normal distribution with mean \(m\) and variance \(\sigma^{2}\). The first two components have a heavier tail than the last four, which may be considered as noise above sufficiently high levels. The angular measure on the sphere of \(\mathbb{R}^{6}\) is concentrated on the canonical basis vectors \((e_{1},e_{2})\).
The \(L^{2}[0,1]\) functions \(A_{j}\) are chosen deterministically for simplicity, namely \(A_{j}(x)=\sin(2\pi\omega_{j}x)\), \(j\in\{1,3,5\}\) and \(A_{j}(x)=\cos(2\pi\omega_{j}x),j\in\{2,4,6\}\), with \((\omega_{1},\ldots,\omega_{6})=(2,3,1,4,5,6)\). In this setting the angular measure of extremes in \(L^{2}[0,1]\) is concentrated on a two-dimensional subspace, namely the one generated by \((A_{1},A_{2})\).
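A minimal sketch of this synthetic data generation, following the description above (the seed, grid and variable names are arbitrary illustrative choices):

```python
import numpy as np

# Sketch of the simulated dataset: heavy-tailed factors Z_1, Z_2 and Gaussian "noise"
# factors Z_3, ..., Z_6, combined with fixed periodic functions A_j on a 48-point grid.
rng = np.random.default_rng(0)
n, d = 10_000, 48
grid = (np.arange(d) + 0.5) / d

omega = np.array([2, 3, 1, 4, 5, 6])
A = np.stack([np.sin(2 * np.pi * w * grid) if j % 2 == 0 else np.cos(2 * np.pi * w * grid)
              for j, w in enumerate(omega)])               # (6, 48): A_1, A_3, A_5 are sines

Z = np.empty((n, 6))
Z[:, 0] = rng.pareto(0.5, n) + 1.0                          # Z_1 ~ Pareto(0.5)
Z[:, 1] = 0.8 * (rng.pareto(0.5, n) + 1.0)                  # Z_2 ~ 0.8 * Pareto(0.5)
Z[:, 2:] = rng.normal(scale=20.0 * np.array([1.0, 0.8, 0.6, 0.4]), size=(n, 4))

X = Z @ A                                                   # X_i = sum_j Z_{ij} A_j, shape (n, 48)
```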
From a numerical perspective, all scalar products in \(L^{2}[0,1]\) are approximated in this work by the Euclidean scalar product in \(\mathbb{R}^{48}\), which corresponds to a Riemann midpoint rule. For simplicity, and because the choice of the unit scale is also arbitrary, we dispense with standardizing by the half-hour width between records. Several numerical solutions exist to perform the eigendecomposition of the empirical covariance operator. However the considered datasets are moderately high dimensional and, because all observations are regularly sampled in time, we may use the simplest strategy, which is to perform the eigendecomposition of the second-moment matrix \(\mathbf{X}^{\top}\mathbf{X}\in\mathbb{R}^{48\times 48}\), where \(\mathbf{X}_{i,j}\) is the \(j^{th}\) time record on the \(i^{th}\) day. In practice we rely on the svd function in R, which computes the singular value decomposition of \(\mathbf{X}\) using a LAPACK routine. This boils down to choosing as a basis for \(L^{2}[0,1]\) a family of indicator functions centered at the observation times. Alternative orthonormal families in \(L^{2}[0,1]\) (typically, the Fourier basis or wavelet basis) may be preferred in higher dimensional contexts or with irregularly sampled observations.
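For readers preferring Python, here is a hedged NumPy equivalent of the SVD-based strategy just described (the experiments above use R's svd; this translation and the function name are our own illustrative choices):

```python
import numpy as np

# Sketch: with regularly sampled curves stored in the rows of X, the eigendecomposition
# of the second-moment matrix X^T X / n can be read off the SVD of X.
def empirical_eigendecomposition(X):
    n = X.shape[0]
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    eigvals = s**2 / n                    # eigenvalues of X^T X / n, in decreasing order
    eigfuns = Vt.T                        # columns: discretized eigenfunctions
    return eigvals, eigfuns
```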
### Pattern Identification of functional extremes
With the synthetic dataset described above, we compare the output of functional PCA applied to the extreme angular data with the one obtained using all angles, _i.e._ we compare the eigen decomposition of \(\widehat{C}_{k}\) with that of \(\widehat{C}_{n}\). The scree-plot (_i.e._ the graph of ordered eigenvalues, normalized by their sum) for both operators is displayed in Figure 1. The gap between the first two eigenvalues and the remaining ones is more pronounced with \(\widehat{C}_{k}\) than with \(\widehat{C}_{n}\), indicating that the method we promote is able to uncover a sparsity pattern at extreme levels. The limit measure of extremes is indeed concentrated on a two-dimensional subspace, as opposed to the distribution of the full dataset, whose support is of higher dimension. In addition the 'true' extreme angular pattern, which is a superposition of two periodic signals with frequencies \((1,7)\), is easily recognized on the first two eigenfunctions of the extreme covariance \(\widehat{C}_{k}\) (solid lines, first two panels of the second row in Figure 1), while these frequencies are perturbed by shorter-tailed 'noise' with the full covariance \(\widehat{C}_{n}\) (dotted lines). The discrepancy between extreme and non-extreme eigenfunctions vanishes for the third eigenfunction, which may be considered as 'noise' as far as extremes are concerned.
### Optimal reconstruction of functional extremes on the electricity demand dataset
Here we investigate the \(L^{2}\) reconstruction error when projecting new (test) angular observations onto the eigenspaces obtained from the spectral decomposition of the empirical covariance operator
\(\widehat{C}_{k}\). Another important goal of this section is to provide guidelines and graphical diagnostic tools allowing to check whether functional regular variation in \(L^{2}\) may reasonably be assumed for a given functional dataset. We choose to consider the component-wise square root of the records so that the (squared) \(L^{2}\) norm of each vector \(X_{i}\) is an approximation of the integrated demand over a full day, which seems meaningful from an industrial perspective. For simplicity we ignore in this illustrative study any temporal dependence from week to week.
As a first step, regular variation must be checked and an appropriate number \(k\) of extreme observations should be selected for estimating \(C_{\infty}\) with \(\widehat{C}_{k}\). A Gaussian QQ-plot (not shown) suggests that the radial variable is potentially heavy-tailed. In view of Theorem 3.1, 2., one should check regular variation of the radial variable and weak convergence of univariate projections \(\langle\Theta_{t},h\rangle\). Regarding the radial variable \(R=\|X\|\), we propose to inspect a Hill plot and a Pareto quantile plot (Beirlant et al. (2006), Chapter 2). Visual inspection (Figure 2) suggests a stability region for the Hill estimator of \(\gamma=1/\alpha\) (left panel) between \(k=50\) and \(k=200\). Choosing \(k=100\) corresponds to an empirical quantile level \(1-k/n\approx 0.7\), for which the Pareto quantile plot (right panel) is reasonably linear. For \(k=100\) the estimated regular variation index based on the Hill estimator \(\hat{\gamma}\) is \(\hat{\alpha}=1/\hat{\gamma}=22.5\) (\(0.95\) CI: \([18.8,27.9]\)).
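A hedged sketch of the Hill estimates underlying such a plot (variable names are our own illustrative choices):

```python
import numpy as np

# Sketch of the Hill estimator: for each k, the average of log-spacings of the k largest
# radii estimates gamma = 1/alpha. Requires k_max < len(R).
def hill_estimates(R, k_max):
    logs = np.log(np.sort(R)[::-1])                       # log of decreasing order statistics
    ks = np.arange(1, k_max + 1)
    return ks, np.array([logs[:k].mean() - logs[k] for k in ks])   # gamma_hat(k)

# usage sketch: R = np.linalg.norm(X, axis=1); ks, gammas = hill_estimates(R, 300)
```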
The condition of weak convergence of the projections \(\langle\Theta_{t},h\rangle\) is obviously difficult to check in practice, in particular because it must hold for any \(h\). As a default strategy we propose to check convergence of the (absolute value of the) first moment, namely convergence of \(\mathbb{E}|\langle\Theta_{t},h\rangle|\) as \(t\to\infty\), for a finite number of 'appropriate' functions \(h_{j},j\in\{1,\ldots,J\}\). The context of daily records suggests a periodic family, namely we choose \(h_{j}(x)=\sin(2\pi jx)\), for \(j\in\{1,2,3,4,6,8\}\). Figure 3 displays the six plots of the empirical conditional moment \(\frac{1}{k}\sum_{i=1}^{k}|\langle\Theta_{(i)},h_{j}\rangle|\). The plots confirm the existence of a relative stability region around \(k=100\).
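A minimal sketch of this diagnostic (variable names and the test function in the usage comment are our own illustrative choices):

```python
import numpy as np

# Sketch: empirical conditional first moment of |<Theta, h>| over the k largest radii,
# evaluated as a function of k, for a fixed discretized test function h.
def conditional_first_moment(X, h, ks):
    norms = np.linalg.norm(X, axis=1)
    order = np.argsort(norms)[::-1]
    Theta = X[order] / norms[order, None]
    proj = np.abs(Theta @ h)                       # |<Theta_(i), h>| in decreasing-radius order
    return np.array([proj[:k].mean() for k in ks])

# usage sketch: grid = (np.arange(48) + 0.5) / 48; h1 = np.sin(2 * np.pi * grid)
# values = conditional_first_moment(X, h1, ks=np.arange(20, 300))
```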
Turning to performance assessment, we consider the reconstruction squared error of a validation subsample of extreme angles \(\mathcal{V}\subset\{\Theta_{(1)}\ldots,\Theta_{(k)}\}\) (\(k=100\)), after projection on the principal eigenspaces of dimension \(p\) corresponding to three variants of the empirical uncentered angular covariance operator. In this experiment we choose \(p=2\). Namely we consider the uncentered covariances \((i)\)\(\widetilde{C}_{k}\), built from an extreme training set \(\mathcal{T}=\{\Theta_{(1)}\ldots,\Theta_{(k)}\}\setminus\mathcal{V}\) ; \((ii)\)\(\widetilde{C}_{n}\), incorporating all angles (including non-extreme ones) except for the validation set, \(\{\Theta_{1},\ldots,\Theta_{n}\}\setminus\mathcal{V}\); \((iii)\)\(\widetilde{C}_{n,k}\), built from a subsample of \(\{\Theta_{1},\ldots,\Theta_{n}\}\setminus\mathcal{V}\) of the same size as \(\mathcal{T}\). The left panel of Figure 4 displays the boxplots of the cross-validation error obtained over \(300\) independent experiments where a validation set \(\mathcal{V}\) of size \(30\) is randomly chosen among \(\{\Theta_{(1)}\ldots,\Theta_{(k)}\}\). The right panel displays the out-of-sample error over a tail region, namely the validation set \(\mathcal{V}\) is composed of the most extreme data \(\{\Theta_{(1)}\ldots,\Theta_{(30)}\}\), and the boxplots represent the variability of the reconstruction error over the validation set. The conclusion is the same for both panels: performing functional PCA on the fraction of the angular data corresponding to the most extreme angles significantly reduces the reconstruction error, despite the reduced size of the training set. Comparison between the second and the third boxplot of each panel illustrates the negative impact of reducing the training sample size, while comparing the first and the third boxplots shows the bias reduction achieved by localizing on the tail region. On this particular example the bias-variance trade-off favors our approach.
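For completeness, a hedged sketch of the reconstruction error used in this comparison (our own illustrative code, with placeholder argument names):

```python
import numpy as np

# Sketch of the validation scheme: project held-out angles onto the p leading
# eigenfunctions of a training covariance matrix and record the mean squared L^2 error.
def reconstruction_error(Theta_val, C_train, p):
    eigvals, eigvecs = np.linalg.eigh(C_train)
    B = eigvecs[:, ::-1][:, :p]                     # top-p (discretized) eigenfunctions
    residual = Theta_val - Theta_val @ B @ B.T      # Theta - Pi_{V^p} Theta, row-wise
    return np.mean(np.sum(residual**2, axis=1))
```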
Figure 1: Simulated data: Scree plots and first three eigenfunctions. Diamond shaped dots and dashed lines: angular functional PCA of extremes (\(\widehat{C}_{k}\)). Round dots and solid lines: angular functional PCA of the full dataset (\(\widehat{C}_{n}\)). Dotted lines on the first two plots, bottom left: (normalized) functions \(A_{1}\), \(A_{2}\), _i.e._ support of the angular measure for extremes.
Figure 3: Electricity demand data: first moment of \(|\langle\Theta,h_{j}\rangle|\) conditioned upon \(R\geq R_{(k)}\), as a function of \(k\), for \(h_{j}(x)=\sin(2\pi jx)\), \(x\in[0,1]\)
Figure 2: Hill plot (left) and Pareto quantile plot (right) for the radial variable of the electricity demand dataset. Dotted vertical lines on the Hill plot: stability region.
Figure 4: Cross-validation and Extrapolation error of extreme and non-extreme angular functional PCA
|
2304.02827
|
DITTO-NeRF: Diffusion-based Iterative Text To Omni-directional 3D Model
|
The increasing demand for high-quality 3D content creation has motivated the
development of automated methods for creating 3D object models from a single
image and/or from a text prompt. However, the reconstructed 3D objects using
state-of-the-art image-to-3D methods still exhibit low correspondence to the
given image and low multi-view consistency. Recent state-of-the-art text-to-3D
methods are also limited, yielding 3D samples with low diversity per prompt
with long synthesis time. To address these challenges, we propose DITTO-NeRF, a
novel pipeline to generate a high-quality 3D NeRF model from a text prompt or a
single image. Our DITTO-NeRF consists of constructing high-quality partial 3D
object for limited in-boundary (IB) angles using the given or text-generated 2D
image from the frontal view and then iteratively reconstructing the remaining
3D NeRF using inpainting latent diffusion model. We propose progressive 3D
object reconstruction schemes in terms of scales (low to high resolution),
angles (IB angles initially to outer-boundary (OB) later), and masks (object to
background boundary) in our DITTO-NeRF so that high-quality information on IB
can be propagated into OB. Our DITTO-NeRF outperforms state-of-the-art methods
in terms of fidelity and diversity qualitatively and quantitatively with much
faster training times than prior arts on image/text-to-3D such as DreamFusion,
and NeuralLift-360.
|
Hoigi Seo, Hayeon Kim, Gwanghyun Kim, Se Young Chun
|
2023-04-06T02:27:22Z
|
http://arxiv.org/abs/2304.02827v1
|
# DITTO-NeRF: Diffusion-based Iterative Text To Omni-directional 3D Model
###### Abstract
The increasing demand for high-quality 3D content creation has motivated the development of automated methods for creating 3D object models from a single image and/or from a text prompt. However, the reconstructed 3D objects using state-of-the-art image-to-3D methods still exhibit low correspondence to the given image and low multi-view consistency. Recent state-of-the-art text-to-3D methods are also limited, yielding 3D samples with low diversity per prompt with long synthesis time. To address these challenges, we propose DITTO-NeRF, a novel pipeline to generate a high-quality 3D NeRF model from a text prompt or a single image. Our DITTO-NeRF consists of constructing high-quality partial 3D object for limited in-boundary (IB) angles using the given or text-generated 2D image from the frontal view and then iteratively reconstructing the remaining 3D NeRF using inpainting latent diffusion model. We propose progressive 3D object reconstruction schemes in terms of scales (low to high resolution), angles (IB angles initially to outer-boundary (OB) later), and masks (object to background boundary) in our DITTO-NeRF so that high-quality information on IB can be propagated into OB. Our DITTO-NeRF outperforms state-of-the-art methods in terms of fidelity and diversity qualitatively and quantitatively with much faster training times than prior arts on image/text-to-3D such as DreamFusion, and NeuralLift-360.
## 1 Introduction
Recent advancements in virtual reality and augmented reality have led to a rapid increase in demand for 3D content. Nevertheless, the creation of high-quality 3D objects has traditionally been a time-consuming and costly process that requires human experts. The high cost of making 3D objects has motivated the development of methods for synthesizing diverse 3D objects from simplified source inputs. Methods for generating 3D objects from a single image (image-to-3D) [10, 11, 12, 26, 30, 50, 57, 61, 66, 4, 73, 68] have been developed. The creation of a 3D object from a single image is a challenging task that is hindered by the insufficiency of available information. Recent image-to-3D studies such as DietNeRF [25] and NeuralLift-360 [68] have shown impressive results by leveraging the prior knowledge of pre-trained CLIP (Contrastive Language-Image Pre-training) [40] models or text-to-image diffusion models. Other approaches are methods that generate 3D objects from a text prompt (text-to-3D) [39, 34, 32, 55] by leveraging a text-to-image model to create a 3D representation.
Recent image-to-3D methods still suffer from low correspondence with the 2D input image, yielding unsatisfactory 3D outputs, and modern text-to-3D methods suffer from low diversity in the 3D objects generated from the same input text as well as high computational complexity. To mitigate these limitations in 3D object generation, we propose DITTO-NeRF, a diffusion-based iterative text to omni-directional 3D model, which utilizes the high-diversity and high-fidelity images generated by the latent diffusion model in response to a given text prompt. Specifically, our DITTO-NeRF incorporates a monocular depth estimation model [42] to predict the depth corresponding to the image and subsequently builds a high-quality partial 3D object for limited angles. NeRF is then trained using an inpainting-SDS (Score Distillation Sampling) [47] loss for the diffusion model to create images corresponding to the text prompt and fill in the remaining part of the 3D representation. For better reconstruction of the 3D object in the early stage of training, we propose progressive global view sampling. Lastly, with the refinement stage, we minimize discrepancies among the generated parts. By utilizing these novel techniques, our method is able to construct 3D objects from text-generated images or given images.
We demonstrated the effectiveness of our method in both image-to-3D and text-to-3D tasks. In the image-to-3D task, our method outperformed the current state-of-the-art (SOTA) baselines in terms of both multi-view consistency and source image correspondence, visually and in user studies. As such, DITTO-NeRF's effectiveness extends beyond that of previous models. In the text-to-3D task, our method surpassed the current SOTA baselines in terms of both output fidelity and diversity. Importantly, these improvements were achieved while requiring reasonable training time and computation resources. Here is the summary of our contributions:
* Proposing a novel pipeline for generating a 3D object model from a single image or text prompt, called DITTO-NeRF, that iteratively propagates high-quality partial 3D model on in-boundary (IB) angles to the remaining 3D model on outer-boundary (OB) angles.
* Proposing progressive global view sampling (PGVS) from IB to OB, reliability-guidance masking for IB, and multi-scale consistency refinement for all IB and OB.
* Outperforming prior arts on text/image-to-3D [67, 25, 68], achieving remarkable results in terms of diversity/quality and speed/fidelity, respectively.
## 2 Related Works
### Text-to-image guidance
In recent years, generative models have made significant strides in the field of image generation, an area that was once thought to be the exclusive domain of humans. Numerous studies have explored the use of generative models, including Variational AutoEncoders (VAEs) [7, 46, 44], flow-based models [8, 9], and GANs [2, 18, 27]. However, the SOTA models are based on the denoising diffusion probabilistic model [6, 53, 20, 21, 19] (hereinafter referred to as the diffusion model). The diffusion model comprises forward diffusion steps that add pre-defined noise according to the time step and reverse generative steps that denoise noisy images. These diffusion models have achieved exceptional image quality, but their applicability has been limited due to their high computational resource requirements. The latent diffusion model (LDM) [48] has emerged as a promising approach for addressing the computational resource requirements associated with the forward/reverse process in image generation. This model leverages the encoding process of the VAE to run the forward and reverse processes in the latent space, rather than the RGB space, thereby significantly reducing the amount of computation required. By passing the final denoised latent vector through the VAE decoder, high-quality images can be obtained with reduced computational effort. The development of LDM has led to the emergence of several novel applications in image generation, such as inpainting models that incorporate additional inputs, such as masks, to manipulate the generated images based on the surrounding image context.
In this study, we leveraged an inpainting model based on LDM to achieve the goal of creating a 3D object that matches the user-input or generated image.
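For concreteness, the sketch below shows how an inpainting LDM can be queried for a masked region, assuming the Hugging Face diffusers StableDiffusionInpaintPipeline interface; the prompt and file names are illustrative placeholders, and our pipeline uses the model's noise residual for gradients rather than fully sampled images.

```
# Minimal sketch of querying an inpainting LDM (assumed diffusers interface);
# the prompt and file names are illustrative placeholders.
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
).to("cuda")

prompt = "a hamburger"                                       # example prompt
image = Image.open("rendered_view.png").convert("RGB")       # current rendered view
mask = Image.open("outer_boundary_mask.png").convert("L")    # white = region to inpaint

inpainted = pipe(prompt=prompt, image=image, mask_image=mask).images[0]
inpainted.save("inpainted_view.png")
```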
### Image-to-3D
Various methods exist for representing 3D objects, including point clouds [1, 33, 36, 70, 75], meshes [15, 74], and voxels [14, 52, 62]. However, these methods require significant storage capacity to represent high-quality 3D objects. Neural Radiance Fields (NeRF) [35] offers a novel solution by learning a neural network from images taken at certain viewpoints. NeRF has been criticized for its long training times, rendering times, and dependency on a large number of camera positions and images. Consequently, much research has focused on optimizing training and rendering times [69, 13, 72, 37, 22, 5, 58, 45, 16, 3]. Instant-NGP [37], which uses a multi-resolution hash grid encoding, shows promising results. Additionally, several studies have explored the potential for achieving high-quality 3D representations with a limited number of images [5, 3, 73, 66, 25, 68] by leveraging the prior knowledge of pre-trained models.
By combining the LDM approach with Instant-NGP, we were able to efficiently create a high-quality, multi-view consistent 3D representation from a single image.
### Text-to-3D
In recent years, the diffusion model has shown outstanding performance in text-to-image tasks, and several attempts have been made to extend it to text-to-3D object tasks. DreamFusion has achieved remarkable results by leveraging the fact that the noise residual obtained from one step of the denoising generative process of the Denoising Diffusion Probabilistic Model [6] alters the image to match the prompt. The model adds Gaussian noise to the image rendered by NeRF in a forward process and backpropagates the noise residual obtained through the reverse process as a gradient to align the rendered image with the input text prompt. By performing this process iteratively for random views, the 3D object corresponding to the input text prompt can be obtained.
In this paper, we propose DITTO-NeRF by adopting the main idea of DreamFusion. However, since DreamFusion uses Imagen [49], a proprietary diffusion model, we utilize Latent-NeRF [34], which is based on Stable diffusion [48], as the fundamental model.
## 3 Methods
This section describes the pipeline of our DITTO-NeRF, which comprises four distinct components: 1) Partial 3D objects are generated from an image, obtained either from an input text via the LDM or given by the user, and the resulting objects are referred to as in-boundary objects (IB 3D objects). 2) NeRF employs progressive global view sampling (PGVS) and a reliability-guided loss (\(\mathcal{L}_{R}\)) to effectively learn the frontal view, using the IB 3D object created through the aforementioned process. 3) \(\mathcal{L}_{R}\) and an inpainting SDS loss (\(\mathcal{L}_{iSDS}\)) are utilized to enhance the fidelity of the remaining views. 4) A refinement stage is introduced to enhance the overall quality of the representation. The complete pipeline of DITTO-NeRF is presented in Fig. 2.
Figure 2: Our DITTO-NeRF pipeline for training from a 2D single image (given/generated). (Step 1) 2D latent \(z_{r}\)’s are sampled from latent NeRF at angles \(\theta\) and \(\phi\) using PGVS. (Step 2-1) Inpainting LDM with the given text and additional direction text yields the residual \(\hat{\epsilon}-\epsilon\) from latent \(z_{r}\) on OB or latent \(z_{p}\) on IB. (Step 2-2) Latent \(z_{r}\) on IB will be suppressed for outer-mask initially (so that latent \(z_{p}\) will be relatively well-preserved for in-mask initially) and then progressively preserved for all area later with more reliable estimates.
### IB partial 3D object and pre-rendering
The diffusion model described previously can generate diverse images that correspond to a given text prompt. To further enhance the capabilities of the model for learning 3D representations that match the quality and diversity of the image generation model, we introduce a new process of creating a 3D object from a single image, and we call the resulting object the IB 3D object. This process involves the creation of a partial 3D object that is utilized to aid NeRF in learning a 3D representation that aligns with the generated or user-given image.
IB 3D object.A relative depth map was first extracted from the image using a monocular depth estimation model called MiDaS [42]. Using this depth map, a 3D object can be constructed in the form of a point cloud. The point cloud constructed in this step may inadvertently include extraneous points such as backgrounds or floors. This can result in unwanted points being incorporated into the generated mesh during the conversion of the point cloud to a mesh. To circumvent this, the method identifies outlier vertices based on their proximity to other vertices and subsequently removes them using a predetermined standard deviation. The values of these thresholds were determined heuristically via experimentation. Following this, Poisson surface reconstruction [28] is employed to generate the mesh from the completed point cloud. After the Poisson surface reconstruction process, the 3D mesh of the desired object is acquired by removing the parts with lower density than a certain quantile.
Setting IB and pre-rendering.Rendering a 3D mesh in real-time for various viewpoints and using it for learning is computationally demanding. To mitigate this, the proposed method pre-renders the IB 3D object by sampling N views with uniform distribution within a limited angle range, which we call _in-boundary_ (IB). Specifically, RGB images, depth, and latent vectors are rendered using the VAE encoder of the LDM.
### Training NeRF using IB 3D object
Progressive global view sampling (PGVS).To let NeRF acquire the previously established IB 3D object and to ensure coherence between the partial representation and the subsequent portions generated through the diffusion model, it is imperative to initially sample numerous views of the in-boundary part. As the training progresses, views of the _outer-boundary_ (OB) part need to be sampled to facilitate the generation of the overall 3D object. To accomplish this, we employ the beta distribution as the probability density function to sample the viewing position.
\[f(x;t)=\left\{\begin{array}{ll}\frac{x^{\alpha(t)-1}(1-x)^{\beta(t)-1}}{B( \alpha(t),\beta(t))}&\text{if }t<t_{u}\\ U(0,1)&\text{if }t\geq t_{u}\end{array}\right. \tag{1}\]
with \(\alpha(t)=\alpha_{0}+1-\frac{\alpha_{0}}{t_{u}}t\) and \(\beta(t)=\beta_{0}+1-\frac{\beta_{0}}{t_{u}}t\).
The point at which \(\alpha\) and \(\beta\) decay to 1 is named the _uniform point_ \(t_{u}\); after it, uniform sampling is performed over all directions in the subsequent training steps, as illustrated in Fig. 3. Afterward, the process of obtaining the desired angle from the corresponding pdf is expressed by the following formula:
\[\theta,\phi=(\int f(x;t)dx)^{-1}(U_{\theta,\phi}(0,1))\cdot range(\theta,\phi) \tag{2}\]
If the sampled camera position lies within the in-boundary region, the nearest pre-rendered position is selected instead of rendering a random view each time to avoid the unnecessary computational burden. Since the camera view is sampled progressively, we call this sampling Progressive global view sampling (PGVS).
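A minimal NumPy sketch of PGVS following Eqs. (1)-(2) is given below; \(\alpha_{0}\), \(\beta_{0}\), and \(t_{u}\) follow the values reported in the supplementary material, while the mapping of the unit sample to a concrete angle range is an illustrative assumption. Drawing directly from a Beta distribution is equivalent to the inverse-CDF expression in Eq. (2).

```
# Sketch of PGVS (Eqs. (1)-(2)): Beta-distributed view sampling before the
# uniform point t_u, and uniform sampling afterwards.
import numpy as np

def pgvs_angle(t, t_u=1500, alpha_0=2.0, beta_0=8.0,
               angle_min=0.0, angle_max=360.0, rng=np.random):
    if t < t_u:
        a = alpha_0 + 1.0 - alpha_0 / t_u * t   # alpha(t), decays linearly to 1
        b = beta_0 + 1.0 - beta_0 / t_u * t     # beta(t), decays linearly to 1
        u = rng.beta(a, b)                      # equivalent to inverse-CDF sampling
    else:
        u = rng.uniform()                       # uniform sampling after t_u
    return angle_min + u * (angle_max - angle_min)

early_views = [pgvs_angle(t=100) for _ in range(5)]    # concentrated near IB
late_views = [pgvs_angle(t=4000) for _ in range(5)]    # spread over all angles
```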
### Matching IB and OB 3D object
Inpainting SDS loss.Simultaneously applying the previously rendered prior images and the SDS gradient from the diffusion model can result in a conflict that degrades the consistency between the IB 3D object and the parts generated by \(\mathcal{L}_{iSDS}\), since the diffusion model may prioritize creating an object that corresponds to its built-in prior. This can impede the goal of generating a 3D object that closely resembles the desired image. To address this, we employ a fine-tuned
Figure 3: Examples of our PGVS over iterations based on varying Beta distribution. Initial camera view samples are heavily concentrated on IB (a) and then they are gradually spread to OB over iterations (b). Finally, camera view samples are uniformly distributed after uniform point.
diffusion model \(\epsilon_{\phi}\)[48] for the inpainting task, rather than using a general diffusion model.
\[\mathcal{L}_{iSDS}(\phi,g(\theta))=\mathbf{E}_{t,\epsilon}[||\epsilon_{\phi}(x_ {t};y,t,\mathcal{M})-\epsilon_{t}||_{2}^{2}] \tag{3}\]
where \(y\) is the text embedding, \(\mathcal{M}\) is the binary mask, \(\epsilon_{t}\) is the actual noise for time step \(t\), and \(x_{t}\) is the noisy image at time step \(t\). When the sampled viewpoint is in-boundary, the rendered latent vectors and pre-rendered latent images are optimized with \(\mathcal{L}_{R}\). In addition, a sparsity loss (\(\mathcal{L}_{sp}\)) is introduced to obtain a cleaner representation by suppressing floating lumps that arise while training NeRF. For further details on \(\mathcal{L}_{sp}\), see the supplementary material. The total loss is formulated as follows:
\[\mathcal{L}_{total}=\lambda_{iSDS}\mathcal{L}_{iSDS}+\delta_{R}\left(\theta, \phi\right)\mathcal{L}_{R}+\lambda_{sp}\mathcal{L}_{sp} \tag{4}\]
\(\delta_{R}(\theta,\phi)\) here has a value of 1 within the IB area and a value of 0 otherwise.
Reliability-guided loss.When the camera position is sampled in the IB area, we optimize the loss between the rendered latent image in NeRF and the pre-rendered latent image. However, due to the difference in consistency and size between the IB 3D object and the generated object, we introduce a novel loss, which takes into account the differences between the part of the image corresponding to the desired object (foreground) and the remaining region (background), called the reliability-guided loss.
\[\mathcal{L}_{R}=[\zeta\cdot\mathcal{M}+\eta(t)\cdot(1-\mathcal{M})]\odot||z_{ r}-z_{p}||_{1} \tag{5}\]
with \(\eta(t)=e^{-t/\lambda_{\eta}}\). \(\lambda_{\eta}\) and \(\zeta\) are constants determined empirically. \(z_{r}\) is the latent vector rendered from NeRF and \(z_{p}\) is the pre-rendered latent vector.
The background of pre-rendered images is initialized as white. However, learning this latent ground truth as-is leads to suboptimal inpainting results in the IB region, as \(\mathcal{L}_{iSDS}\) is smaller than \(\mathcal{L}_{R}\). On the other hand, if the diffusion model generates the background and the camera position is sampled from the back of the object, the model may not recognize the IB 3D object, resulting in inconsistent objects with varying sizes. Therefore, to suppress the unreliable background area at the beginning of training, we also train the white part of the background at the beginning. After that, learning proceeds, and since the IB 3D object is sufficiently represented in NeRF, the part generated by \(\mathcal{L}_{iSDS}\) becomes reliable, and the weight on the background is exponentially reduced to generate the remaining parts.
Specifically, during the calculation of the \(\mathcal{L}_{R}\), the loss is divided into foreground and background parts, with the foreground part given a higher weight and the background part given a relatively lower weight to initially create a white background. Subsequently, during training, the background weight is gradually decayed, allowing the \(\mathcal{L}_{iSDS}\) for inpainting to have a more significant impact if the in-boundary region is learned after a certain amount of training. Through this process, we were able to train a NeRF model that created a 3D object that resembles an image prior with a constant size. This whole process is illustrated in Fig. 4 along with rendered training images for each step.
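A minimal PyTorch sketch of \(\mathcal{L}_{R}\) in Eq. (5) is shown below; \(\lambda_{\eta}=8\) and \(\zeta=2\) follow the supplementary material, while the tensor shapes and the reduction to a scalar mean are illustrative assumptions rather than the exact implementation.

```
# Sketch of the reliability-guided loss (Eq. (5)); shapes and the mean reduction
# are assumptions.
import torch

def reliability_guided_loss(z_r, z_p, mask, t, zeta=2.0, lambda_eta=8.0):
    # z_r, z_p: rendered / pre-rendered latents (B, 4, H, W); mask: foreground (B, 1, H, W)
    eta_t = torch.exp(torch.tensor(-t / lambda_eta))   # background weight, decays with t
    weight = zeta * mask + eta_t * (1.0 - mask)
    return (weight * (z_r - z_p).abs()).mean()

z_r = torch.randn(1, 4, 64, 64, requires_grad=True)
z_p = torch.randn(1, 4, 64, 64)
mask = (torch.rand(1, 1, 64, 64) > 0.5).float()
loss = reliability_guided_loss(z_r, z_p, mask, t=10)
loss.backward()
```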
### Refining details
Random patch refinement.Although the part generated by \(\mathcal{L}_{iSDS}\) and the IB 3D object part are semantically seamless, there is a noticeable discontinuity in the overall texture and color. This is attributed to the prior inside the diffusion model. To address this challenge and enable the created 3D object to exhibit continuity of texture and color while retaining the image prior of the front part as much as possible, a _refinement step_ was introduced. This refinement step constitutes 10% of the entire training process, and if in-boundary is sampled during this step, the \(\mathcal{L}_{R}\) is excluded, and a random patch is given to the foreground to refine the corresponding part with \(\mathcal{L}_{iSDS}\). This process results in the color and texture of the generated 3D object becoming more similar to the part generated by \(\mathcal{L}_{iSDS}\), while maintaining the overall shape and content of the image prior. Through the refinement step, the desired objective of achieving continuity of texture and color while maintaining the image prior is attained. The corresponding method is shown in Fig. 4.
Figure 4: This figure illustrates how does the \(\mathcal{L}_{R}\) and refinement works. At the beginning of the training process, \(\mathcal{L}_{R}\) suppresses the unreliable part so that IB 3D object could be represented. As the training proceeds reliability of \(\mathcal{L}_{iSDS}\) increases creating outer-boundary parts. In refinement procedure, random patches are applied to mask for enhancing overall quality.
Figure 5: Qualitative comparison with other image-to-3D models [25; 68]. The outputs of the Neurallift-360 [68] were obtained and cropped from the Neurallift-360 website for fair comparison. Our model used the reference images and the corresponding texts on the left.
Figure 6: Qualitative comparison with other text-to-3D models [54; 34]. The last column is our model’s zoomed-in results to show our excellent details. The number of iterations for the text-to-3D model to generate outputs was set to the default value in the baseline.
Dimension refinement.In addition, it was observed that the pipeline resulted in the appearance of jagged edges in the generated object, known as the _jaggies_ phenomenon. This issue arises due to the training process, which involves feeding a relatively high-resolution image into a relatively low-resolution latent space. To address this, we introduced a refinement step that linearly doubles the NeRF rendering resolution, resolving the issue of jagged edges. Through this process, we were able to generate an object with clear edges while maintaining the overall shape and content of the IB 3D object. The color and texture of the generated object were also found to be similar to those produced by the \(\mathcal{L}_{iSDS}\). Check the supplementary for the details.
## 4 Experiments
In this section, we aim to evaluate the effectiveness of our proposed method and compare it with other existing models in the domains of image-to-3D and text-to-3D. For image-to-3D, we perform a comparative analysis between our method and existing models with respect to source-generated 3D object correspondence and fidelity. Similarly, for text-to-3D, we compare the performance of our method with other open-sourced models in terms of diversity, computational efficiency, and fidelity. Additionally, we investigate the impact of the various factors that we introduced in the Methods section. The user study respondents were asked to rate each item on a 5-point scale; 3,150 questionnaires were collected from a total of 210 people. All subsequent experiments were done on a single NVIDIA A100 GPU.
### Generating 3D objects from a single image
In the present section, a comparative analysis is carried out between our proposed method and existing single-image NeRF models. To evaluate the effectiveness of our approach, a questionnaire survey is conducted to obtain N responses on source-generated object correspondence and fidelity. The survey responses provide insight into the performance of our method relative to the SOTA models in terms of its ability to accurately reconstruct objects and maintain correspondence with the source.
User study.A survey was carried out to evaluate the effectiveness of our proposed method in comparison to existing image-to-3D generative models, namely SinNeRF [67], DietNeRF [25] and Neurallift-360 [68]. The survey was designed to gather user feedback on the fidelity of the generated object and source image-object correspondence. The results in Table 1 indicate that our method outperforms the existing models in both categories, establishing a new SOTA in image-to-3D generative models.
\begin{table}
\begin{tabular}{c|c c} \hline Method & Corr. \(\uparrow\) & Fidelity \(\uparrow\) \\ \hline \hline DietNeRF & 2.29 & 2.194 \\ SinNeRF & 2.084 & 2.031 \\ NeuralLift-360 & 3.221 & 3.24 \\
**DITTO-NeRF** (**Ours**) & **3.928** & **4.019** \\ \hline \end{tabular}
\end{table}
Table 1: Mean opinion score for evaluating image-to-3D models.
\begin{table}
\begin{tabular}{c|c c c c} \hline Method & Diversity \(\uparrow\) & Fidelity \(\uparrow\) & Corr. \(\uparrow\) & Time (m) \(\downarrow\) \\ \hline \hline Stable-DreamFusion & 3.704 & 2.6 & 2.959 & 60 \\ Latent-NeRF & 2.995 & 3.094 & 3.362 & **10** \\
**DITTO-NeRF** (**Ours**) & **3.966** & **4.158** & **4.242** & 25 \\ \hline \end{tabular}
\end{table}
Table 2: Mean opinion score and training time for evaluating text-to-3D models.
Figure 7: Qualitative comparison with other text-to-3D models [54, 34] in terms of diversity. All seeds were taken randomly.
### Generating 3D objects from a text prompt
This section presents a comparative analysis of our proposed method with existing Stable diffusion-based text-to-3D generative models [54, 34]. The evaluation is conducted through a questionnaire survey that captures respondents' feedback on the diversity and fidelity of the generated 3D objects. The survey responses are scored on a 5-point scale for each item. Furthermore, to investigate the computational efficiency of the models, we report the time spent on the baseline's default setting.
Diversity with a single prompt.Table 2 presents the results of our comparative analysis, indicating that our proposed method exhibits higher diversity in generated 3D objects as compared to the Stable diffusion-based text-to-3D generative models. An example of this diversity is illustrated in Fig. 7, where multiple outputs are generated for some given prompts.
Fidelity and text correspondence.As observed in previous studies, the Stable diffusion-based text-to-3D generative models produce unrealistic, blurry, or unclear 3D objects. In contrast, our proposed method generates relatively realistic 3D objects, as evidenced by the user ratings presented in Table 2. Similarly, the correspondence between the input text and the generated object is observed to be more accurate and consistent in our method, owing to its superior performance in accurately capturing the semantic meaning of the input text.
Computational efficiency.Stable-DreamFusion which is based on DreamFusion, suffered from the issue of requiring a long training time to achieve results with satisfactory quality. Latent-NeRF, on the other hand, addressed this issue by training the model on the latent space; however, it exhibited limitations in terms of diversity and fidelity. In contrast, our proposed method is capable of learning a 3D representation of satisfactory quality within a reasonable training time, as illustrated in Table 2. While our method may have some limitations in fidelity when compared to models such as Magic3D or DreamFusion, it should be noted that these models utilize large and undisclosed models, limiting their applicability. In contrast, our proposed method offers an advantage in terms of computation.
### Ablation study
In this section, we conduct an ablation study on the components of our method. We used the inpainting SDS loss instead of the conventional SDS loss so that the IB 3D object and the diffusion-generated part continue smoothly. Otherwise, as shown in Fig. 8, it was obvious that the shape and color were lost. Without \(\mathcal{L}_{R}\), the generated object was larger than the IB 3D object or additional elements were added to it. The 3D prior could not be reconstructed when viewpoints were sampled uniformly instead of with the decaying beta distribution. Finally, if refinement was not performed, the semantic parts continued, but discontinuities occurred in color and texture. A further detailed explanation is in the supplementary material.
## 5 Limitation
Depth prediction is based on the shadows present in the image and the prior of the model. Therefore, if an image with minimal shadows or an image that is generated from outside of the distribution of the diffusion model is input, accurate depth estimation may not be possible. This can result in an inadequate IB 3D object being generated, leading to lower-quality 3D object outputs compared to those obtained in general. Additionally, if the quality of the image generated by the diffusion model is poor, learning can become more challenging as illustrated in Fig. 9. Also, in cases where a low-probability image is selected from the diffusion model, it may not follow the image prior, leading to a convergence of the 3D object within the diffusion model. This can also cause a discontinuity issue between the image prior and diffusion-generated parts.
Figure 8: Ablation studies by generating 3D object with a text prompt _“a rabbit, animated film character, 3D rendered”_ using full model (a), without i-SDS loss (b), without reliability-guided loss (c), with uniform sampling (not PGVS) (d), and without refinement stage (e).
## 6 Conclusion
We introduce DITTO-NeRF, a novel approach for obtaining a 3D representation from a single image or text input. DITTO-NeRF leverages an IB 3D object built from images obtained from user input or text, which is then used to train a NeRF model that generates 3D objects using a diffusion model. Based on user studies and qualitative comparisons, we draw the following conclusions: our proposed model achieves higher fidelity and quality in terms of image-to-3D correspondence than existing works. Moreover, in the context of text-to-3D generation, our method offers both higher fidelity and more computation-efficient diversity than existing Stable diffusion-based studies.
## Acknowledgements
This work was supported by the National Research Foundation of Korea(NRF) grants funded by the Korea government(MSIT) (NRF-2022R1A4A1030579) and Basic Science Research Program through the NRF funded by the Ministry of Education(NRF-2017R1D1A1B05035810).
Figure 9: As the diffusion model failed to generate proper images, the generated 3D objects’ quality are also limited.
DITTO-NeRF: Diffusion-based Iterative Text To Omni-directional 3D Model (Supplementary Material)
## Appendix A Additional Results
### Videos
We present a video showcasing a comparison of the results obtained using our proposed DITTO-NeRF model, which is a novel method for learning 3D representations from either image or text inputs, against other baselines in the same task. For the video, please check the following page: janeyeon.github.io/ditto-nerf.
## Appendix B Details on Methods
### Algorithms
```
Input: y, I        Output: V_out

// Training & Refinement
if I is not given then
    y_image = CLIP((y, y_bg))
    z_image <- inpainting-LDM(y_image)
    I <- D(z_image)
Z_p, M <- Generate-IB-3D(I)
for i = 1, 2, ..., t_total do
    theta, phi <- PGVS(i)
    if IB then
        z_p, M <- Find-closest(Z_p, M, theta, phi)
    o, d <- Render-ray(theta, phi)
    z_r, P_ws <- N(o, d)
    if i <= i_refine then
        Training(N, z_p, M, z_r, P_ws, y)
    else
        Refinement(N, z_p, M, z_r, P_ws, y, i)

// Rendering output
V_out <- {}
for theta, phi in rendering angles do
    o, d <- Render-ray(theta, phi)
    z_r <- N(o, d)
    I_out <- D(z_r)
    Append I_out to V_out
```
**Algorithm 1** Overall pipeline
Overall pipeline.The overall algorithm of DITTO-NeRF is described in Algorithm 1. First, it requires text input \(y\) and additional image input \(I\) from the user. In the case of image-to-3D synthesis, the algorithm receives a reference image \(I\) and corresponding text value \(y\) as input. Whereas in the case of text-to-3D synthesis, only text input is required. If no reference image is available, two text values \(y,y_{bg}\) are concatenated to create a text embedding \(y_{image}\) that is used to generate a latent vector \(z_{image}\) in inpainting-LDM. \(z_{image}\) is then passed through a decoder \(\mathcal{D}\) to create a reference image \(I\). \(I\) is fed into the Generate-IB-3D function to obtain latent vector \(z_{p}\) and mask \(\mathcal{M}\) for randomly selected \(N\) angles through uniform distribution within the in-boundary (IB) area.
For \(t_{total}\) iterations, the algorithm performs the following procedures: the camera view position, \(\theta\) and \(\phi\), is determined using the PGVS function at each iteration. The IB flag is then used to determine whether the angle values are used directly or switched to pre-selected angles within the in-boundary area. Using the Find-closest function, \(z_{p}\) and \(\mathcal{M}\) values are obtained for each pre-selected angle. The Render-ray function is used to obtain the \(\mathbf{o}\) and \(\mathbf{d}\) values for each ray based on \(\theta\) and \(\phi\). These values are used by the NeRF model (\(\mathcal{N}\)) to render the latent \(z_{r}\) and the weight values \(\mathcal{P}_{ws}\) for each point of the ray. If the current iteration \(i\) is less than \(i_{refine}\), Training is performed, and if it is larger, Refinement is executed.
After \(\mathcal{N}\) is trained, the latent vector \(z_{r}\) is extracted from \(\mathcal{N}\) along the angles corresponding to the rendering angles. At each rendering angle, the rendered image \(\mathcal{D}(z_{r})\) is obtained from the latent vector \(z_{r}\) using the decoder \(\mathcal{D}\). By collecting the rendered images \(I_{out}\), we obtain the final result, \(\mathbf{V}_{out}\), a video of the rendered 3D object. It should be noted that the output of \(\mathcal{N}\) is a 4-channel latent vector, unlike the RGB output in standard NeRF. Therefore, \(z_{r}\) must be passed through the decoder \(\mathcal{D}\) used by the inpainting-LDM in the last rendering step to obtain an RGB image.
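As a concrete illustration of this final decoding step, the sketch below decodes a 4-channel latent into RGB, assuming the diffusers AutoencoderKL interface and the standard Stable Diffusion latent scaling factor of 0.18215; whether the NeRF latents require this exact scaling is an assumption.

```
# Sketch of decoding a latent render z_r into RGB with the LDM's VAE decoder.
import torch
from diffusers import AutoencoderKL

vae = AutoencoderKL.from_pretrained(
    "runwayml/stable-diffusion-inpainting", subfolder="vae").to("cuda")

z_r = torch.randn(1, 4, 64, 64, device="cuda")      # dummy latent rendered by NeRF
with torch.no_grad():
    rgb = vae.decode(z_r / 0.18215).sample           # (1, 3, 512, 512), roughly in [-1, 1]
rgb_uint8 = ((rgb.clamp(-1, 1) + 1) / 2 * 255).to(torch.uint8)
```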
Training procedure.As described above, when the current iteration \(i\) is less than \(i_{refine}\), Training is performed. The entire algorithm of the Training stage is described in Algorithm 2. During this stage, the input text \(y\) is concatenated with direction information corresponding to the angle and put into a CLIP to get \(y_{train}\). The subsequent process differs depending on whether it is IB or not. In the case of IB, the \(\mathcal{L}_{iSDS}\) is calculated by inputting the \(z_{p}\) and \(\mathcal{M}\) values into the inpainting-LDM. The difference between \(z_{r}\) and \(z_{p}\) is then corrected using the Reliability-Guide
function, which adjusts the background and foreground values using a \(\mathcal{M}\) and expands the size of the reliable region as learning proceeds. When it is not IB, network training is performed using the inpainting-LDM with \(z_{r}\) and \(\mathcal{M}_{0}\) values. After obtaining the \(\mathcal{L}_{sp}\) to eliminate the lump, back-propagation is performed by adding all of the losses into \(\mathcal{L}_{total}\).
Refinement procedure.When the current iteration \(i\) is greater than \(i_{refine}\), a refinement process is performed as outlined in Algorithm 3. Unlike the previous training step, the Reliability-Guide process is omitted, and the Dimension-Refine and Add-Patch processes are added. The process of obtaining \(y_{refine}\) is the same as the previous step, and the Dimension-Refine function calculates \(H\) and \(W\) based on the current iteration \(i\). This function linearly increases the size of \(H\) and \(W\) from 64 to 128 as the refinement process progresses, to improve the resolution and quality of rendered views. To match the corresponding sizes of \(H\) and \(W\), the pre-calculated \(\mathcal{M}\) and \(z_{p}\) are resized before being used. The Add-Patch process is applied only to the mask \(\mathcal{M}\) in the IB area and maintains consistency between the front and back sides by randomly placing \(N_{patch}\) patches of size \(S_{patch}\) in a mask of size \(S_{mask}\). Subsequently, \(\mathcal{L}_{iSDS}\) and \(\mathcal{L}_{sp}\) are added to \(\mathcal{L}_{total}\), and \(\mathcal{N}\) is trained in the same way as in the training process.
```
Input: N, z_p, M, z_r, P_ws, y        Output: N

Function Reliability-Guide(z_p, z_r, M):
    return [zeta * M + eta(t) * (1 - M)] ⊙ ||z_r - z_p||_1

y_train <- CLIP((y, y_dir))
if IB then
    L_iSDS <- inpainting-LDM(z_p, y_train, M)
    L_R <- Reliability-Guide(z_p, z_r, M)
else
    M_0 <- J_{H x f_scale}          // all-ones mask
    L_iSDS <- inpainting-LDM(z_r, y_train, M_0)
L_sp <- Sparsity(P_ws)
L_total <- lambda_iSDS * L_iSDS + delta_R(theta, phi) * L_R + lambda_sp * L_sp
Update N with L_total
```
**Algorithm 2** Training
### IB 3D object pre-rendering
To pre-render an IB 3D object, the angle of the in-boundary (IB) must be established first. In instances where this angle is too narrow, the matching between the IB and outer-boundary (OB) is insufficient. Whereas if the angle is too wide, additional views must be sampled, which can cause the bottleneck problem of Find-closest process in Algorithm 1. Through several experiments, we discovered the setting \(\phi\in[-30,30]\) and \(\theta\in[60,120]\) provides optimal conditions. Moreover, if the number of pre-rendered images is too small, IB training cannot be performed sufficiently, while a large number of images increases the computational burden. Thus, based on our experimental findings, we determined that setting N=64 provides optimal conditions.
### Camera intrinsic parameters for point cloud
In order to generate a point cloud from an RGB-D image, it is essential to know the camera intrinsic parameters. For real-world cameras, these values are readily available; however, as our images are synthesized using a diffusion model, the actual parameters remain unknown. The required intrinsic parameters include skew rate, \(c_{x}\), \(c_{y}\), \(f_{x}\), and \(f_{y}\). Given that contemporary cameras exhibit minimal skew, we approximated the skew rate to be nearly zero. Additionally, considering the generated image's dimensions of \(512^{2}\), we assigned \(c_{x}=c_{y}=256\). The most important value was the focal length, that is, \(f_{x}\) and \(f_{y}\). As these values are typically quite similar, we assumed equivalence and conducted multiple investigations to verify this assumption.
Fig. S1 illustrates the incorporation of an additional prompt for lens focal length alongside the three existing prompts: object, portrait, and landscape. The focal length
prompts employed in this instance include "fish eye," "35mm lens," "50mm lens," and "135mm lens". Upon examining the outcomes, we observed no substantial differences in the results, except the landscape category, particularly the "fish eye" prompt. This phenomenon is attributed to the majority of training data falling within the range of 35mm to 50mm focal lengths. Consequently, we generated point clouds assuming a focal length of approximately 45mm, yielding satisfactory results for the majority of tasks necessitating the generation of object-related images.
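The sketch below assembles the assumed pinhole intrinsics for the 512 x 512 generated images: zero skew, a centered principal point, and \(f_{x}=f_{y}\) converted from the assumed ~45 mm focal length; the 36 mm full-frame sensor width used for the conversion is an additional assumption.

```
# Sketch of the assumed pinhole intrinsics for the generated images.
import open3d as o3d

width, height = 512, 512
cx, cy = 256.0, 256.0
focal_mm, sensor_width_mm = 45.0, 36.0           # sensor width is an assumption
fx = fy = focal_mm / sensor_width_mm * width     # = 640 pixels

intrinsic = o3d.camera.PinholeCameraIntrinsic(width, height, fx, fy, cx, cy)
```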
### Progressive global view sampling (PGVS)
The beta distribution for PGVS is initialized with \(\alpha_{0}=2\) and \(\beta_{0}=8\). Thereafter, both constants decrease linearly until the \(t_{u}\) iteration, which is defined as \(0.3\) times the total number of iterations \(t_{total}\). After \(t_{u}\), both constants are fixed at a value of 1 for the remaining iterations. For the experiments reported in this paper, we set \(t_{total}\) to 5000, resulting in \(t_{u}\) being equal to 1500.
### Sparsity loss
The sparsity loss used in this study was introduced in the Dreamfields [24] and was also employed in Latent-NeRF. It has a similar form to the binary cross-entropy loss and is designed to encourage the density values to converge to either 0 or 1. This helps improve the quality of the generated 3D representation by ensuring that the rays either pass through completely or are blocked. The mathematical expression for the sparsity loss is shown below.
\[\mathcal{L}_{sp}=-\mathbb{E}[\alpha log(\alpha)+(1-\alpha)log(1-\alpha)]\] (S1)
with \(\alpha\) the weighted sum of a ray, clipped to \([\epsilon,1-\epsilon]\), where \(\epsilon=10^{-5}\) and \(\lambda_{sp}=5\cdot 10^{-5}\).
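A minimal PyTorch sketch of Eq. (S1) is given below; the per-ray weight tensor shape is an illustrative assumption.

```
# Sketch of the sparsity loss (Eq. (S1)): minimizing the entropy of the
# accumulated ray weights alpha pushes them toward 0 or 1.
import torch

def sparsity_loss(alpha, eps=1e-5):
    alpha = alpha.clamp(eps, 1.0 - eps)
    return -(alpha * alpha.log() + (1.0 - alpha) * (1.0 - alpha).log()).mean()

alpha = torch.rand(4096, requires_grad=True)   # accumulated weight per ray (assumed shape)
loss = 5e-5 * sparsity_loss(alpha)             # lambda_sp = 5e-5
loss.backward()
```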
### Reliability-guided loss
The equation for reliability-guided loss is on the following:
\[\mathcal{L}_{R}=[\zeta\cdot\mathcal{M}+\eta(t)\cdot(1-\mathcal{M})]\odot||z_{ r}-z_{p}||_{1}\] (S2)
with \(\eta(t)=e^{-t/\lambda_{\eta}}\). And all the experiments proceeded with \(\lambda_{\eta}=8\), \(\zeta=2\).
### Patch refinement
Patch refinement is a crucial method to ensure that the final 3D representation has uniformity in terms of overall tone and texture. This refinement applies \(N_{patch}\) patches of size \(k\times k\) to a mask of size \(H\times W\). The choice of patch size \(k\) is important. If \(k\) is too large, unintended results appear, such as changes to semantic parts of the existing IB.
When \(k\) is too small, the effect of patch refinement disappears when the mask is downsized. Similarly, the number of patches \(N_{patch}\) is also important, as an insufficient number of patches may not produce the appropriate refinement, while an excessive number of patches may not maintain the existing IB. Therefore, we selected the values of \(k=16\) and \(N_{patch}=256\) for this experiment.
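The sketch below illustrates how such a patch mask can be generated; \(k=16\) and \(N_{patch}=256\) follow the values above, while the mask convention (1 marks the region to be inpainted) and the mask size are assumptions.

```
# Sketch of building the random patch mask used in the refinement stage.
import torch

def add_random_patches(mask, k=16, n_patch=256):
    # mask: (H, W) binary inpainting mask; returns a copy with n_patch k x k patches set to 1
    patched = mask.clone()
    h, w = mask.shape
    ys = torch.randint(0, h - k, (n_patch,))
    xs = torch.randint(0, w - k, (n_patch,))
    for y, x in zip(ys.tolist(), xs.tolist()):
        patched[y:y + k, x:x + k] = 1.0
    return patched

foreground_mask = torch.zeros(512, 512)        # IB foreground initially kept (assumed size)
refine_mask = add_random_patches(foreground_mask)
```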
## Appendix C Implementation Details
### Monocular depth estimation
The image/text-to-3D task at hand necessitates the ability to extract accurate relative depth maps from a set of images that consists of diverse objects. Hence, it was crucial to employ a robust and accurate depth estimation model that had been trained on a variety of datasets, and for this purpose, we adopted MiDaS [43]. MiDaS was trained using pre-existing models from 12 different datasets, including ReDWeb [63], DIML [29], Movies, MegaDepth [31], WSVD [56], TartanAir [60], HRWSI [64], ApolloScape [23], BlendedMVS [71], IRS [59], KITTI [17], and NYU Depth V2 [38]. The largest model provided by MiDaS, 'DPT BEiT Large' [41], which offers the highest-quality depth estimation among the available models, was employed. This model utilizes a transformer architecture, which enables more precise and detailed depth estimation as compared to convolutional structures.
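For reference, the snippet below follows the standard MiDaS torch.hub usage to obtain a relative (inverse) depth map; the model name "DPT_Large" and the input file name are assumptions, whereas the paper uses the larger DPT BEiT variant.

```
# Sketch of monocular depth estimation with MiDaS via torch.hub.
import cv2
import torch

midas = torch.hub.load("intel-isl/MiDaS", "DPT_Large")
midas_transforms = torch.hub.load("intel-isl/MiDaS", "transforms")
midas.eval()

img = cv2.cvtColor(cv2.imread("reference.png"), cv2.COLOR_BGR2RGB)
batch = midas_transforms.dpt_transform(img)
with torch.no_grad():
    prediction = midas(batch)
    depth = torch.nn.functional.interpolate(
        prediction.unsqueeze(1), size=img.shape[:2],
        mode="bicubic", align_corners=False,
    ).squeeze()                                 # relative (inverse) depth map
```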
### IB 3D construction based on depth map
An RGB-D image can be generated by combining a depth map obtained from monocular depth estimation with an image, followed by the creation of a point cloud using the Open3D [76] library. To eliminate outliers, points are removed if their distance from the five surrounding points exceeds one standard deviation. The normal vector of each vertex is then estimated from the resulting point cloud, and a mesh is created through Poisson surface reconstruction with a depth value of 10. Vertices with a density below the 0.1 quantile are subsequently removed, based on the assumption that the object of interest is expected to exhibit a relatively high density. An example of the process's output is shown in Fig. S2.
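A minimal Open3D sketch of this pipeline is given below; file names and intrinsics are placeholders, and treating the relative depth map as a metric depth image is a simplification of the actual procedure.

```
# Sketch: RGB-D -> point cloud -> outlier removal -> Poisson mesh -> density filtering.
import numpy as np
import open3d as o3d

color = o3d.io.read_image("reference_rgb.png")
depth = o3d.io.read_image("estimated_depth.png")
rgbd = o3d.geometry.RGBDImage.create_from_color_and_depth(
    color, depth, convert_rgb_to_intensity=False)
intrinsic = o3d.camera.PinholeCameraIntrinsic(512, 512, 640.0, 640.0, 256.0, 256.0)

pcd = o3d.geometry.PointCloud.create_from_rgbd_image(rgbd, intrinsic)
pcd, _ = pcd.remove_statistical_outlier(nb_neighbors=5, std_ratio=1.0)
pcd.estimate_normals()

mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(pcd, depth=10)
densities = np.asarray(densities)
mesh.remove_vertices_by_mask(densities < np.quantile(densities, 0.1))
o3d.io.write_triangle_mesh("ib_object.ply", mesh)
```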
### Stable diffusion inpainting
We utilized Stable diffusion inpainting [48] as the inpainting-LDM to train NeRF in OB. The model was fine-tuned from the original Stable diffusion as follows: the Stable diffusion inpainting model was initialized with the weights of the Stable diffusion-v-1-2 model. The training process consisted of two phases: regular training for 595k steps followed by 440k steps of training on the inpainting task at a resolution of 512 \(\times\) 512 using the 'LAION-Aesthetics v2 5+' dataset [51]. To improve classifier-free guidance sampling, the text-conditioning was dropped 10% of the time. For inpainting, the UNet was augmented with five additional input channels - four for the encoded masked-image and one for the mask itself. These additional channels were zero-initialized after restoring the non-inpainting checkpoint. During the training process, synthetic masks were generated and 25% of the images were masked in each iteration.
### Prompt conditioning
Monocular depth estimation is typically carried out by leveraging the shadow information present in an image, along with prior knowledge incorporated into the model. As a consequence, the depth map generated by such methods may be inadequate for images that lack substantial shading. Furthermore, it has been observed that the depth estimation process encounters difficulties when the background is either complex or positioned in close proximity to the subject. To overcome these issues, we propose a preprocessing step in which the initial image used for generating an IB 3D object is accompanied by the phrase "A whole photo of \(\sim\)" at the beginning of the prompt for non-cropped image and "\(\sim\) in the white background taken with 50mm lens" at the end of the prompt. This ensures that the depth estimation process proceeds smoothly, resulting in an acceptable IB 3D object.
## Appendix D Experimental Details
### Evaluation details
Baselines.In this study, we conducted a comparative analysis of existing image-to-3D models and text-to-3D models against our proposed DITTO-NeRF model. For the image-to-3D model, we evaluated our results against the current state-of-the-art model, NeuralLift-360 [68], for which the code has not been made publicly available. Therefore, we compared our results with the reference images provided on the NeuralLift-360 webpage [65]. Since the size of the NeuralLift-360 results was small, we cropped them to ensure a fair quality comparison. Regarding the text-to-3D model, we compared our results with two existing models based on Stable diffusion: Stable-Dreamfusion [54] and Latent-NeRF [34]. Stable-Dreamfusion replaces the generative diffusion model with Stable diffusion instead of Imagen, which was used for the original Dreamfusion [39]. In contrast, Latent-NeRF learns the 3D representation directly in the latent space by learning the gradient through Stable diffusion in latent space. During the evaluation, the rendered latent vector is passed through the decoder of a Variational AutoEncoder (VAE) to extract the final RGB image, thereby achieving a fast learning time. All evaluations of Stable-Dreamfusion were conducted at \(20000\) iterations because the output quality at \(10000\) iterations was not sufficient for a proper comparison; for Latent-NeRF, \(5000\) iterations were used. Our work was conducted based on the code provided by Latent-NeRF.
User study.In this study, a total of 3,150 responses were collected through administering a survey of 15 questions to 210 participants. The Image-to-3D evaluation was performed using four different models, namely, SinNeRF [67], DietNeRF [25], NeuralLift-360 [68], and our DITTO-NeRF on three different images, namely, "baseball", "apple", and "hydrant", all of which were sourced from the NeuralLift-360 webpage. The evaluation of the image-to-3D object correspondence and fidelity was carried out with respect to the models' outcome and scored on a five-point scale. Additionally, the text-to-3D task was evaluated for diversity using three prompts, namely, "a loaf of bread", "a small saguaro cactus planted on a clay pot" and "a single candle burning on an ornate silver candlestick". Outputs were generated by Stable-Dreamfusion, Latent-NeRF, and our DITTO-NeRF with three different seeds for each of the three prompts. The evaluation procedure was similar to that used in the image-to-3D evaluation and the diversity of the outputs was scored on a five-point scale. Lastly, the fidelity of the outputs in the text-to-3D task was evaluated using three different prompts, namely, "a hamburger", "an astronaut", and "a suitcase", with the same three models. The results of each evaluation item were averaged to create mean opinion scores.
## Appendix E Additional Ablation Studies
### Effectiveness of Dimension refinement
Without the dimension refinement stage, we observe the 'jaggies' phenomenon, where the boundary looks like a saw, as shown in Fig. S3. We identified the cause as the projection of high-frequency information, such as pre-rendered images, into a low-dimensional space such as the latent space. We devised a method to increase the rendering dimension of NeRF to alleviate these artifacts. However, the training time increased rapidly when a large rendering dimension was used throughout. Therefore, focusing on the fact that coarse shapes are generated even at low rendering dimensions, we linearly increase the rendering dimension during the refinement stage, finally doubling it. In the actual implementation, the rendering dimension starts at \(64\), increases linearly during the refinement stage, and finally reaches \(128\) when learning ends.
### Effectiveness of PGVS
We tried other types of distributions for view sampling, such as simple uniform sampling, a 'moving beta distribution', and a 'discrete accumulation distribution'. In this section, we explain the latter two distributions and their results.
Moving beta distribution.In our full model, the progressive global view sampling (PGVS) method is employed to optimize the camera placement. In this method, the values of \(\alpha\) and \(\beta\) of the beta distribution decrease simultaneously and eventually converge to a value of 1. However, in the modified distribution used in this study, the two values gradually change to each other's first value. During the initial phase of training, many camera positions are sampled in the frontal part, and as the training progresses, a larger number of samples are taken in the back part. Similar to PGVS, the modified distribution also has a uniform point, after which the camera view is sampled uniformly. The results of this
approach are presented in Fig. S4, where it can be observed that the frontal reconstruction did not work as effectively as in the full model.
Discrete accumulation distribution.The sampler used in this study is similar to the moving beta distribution described earlier, but it differs in that it has a discrete distribution. This sampler operates within a pre-defined interval, where the probability within a specific interval is set to \(r\), and the probability in the remaining intervals is set to \(1-r\). At each specific iteration, the interval with the probability of \(r\) is passed on to the next iteration. Although the IB part was reconstructed effectively using this sampler, the back part did not converge to a normal shape. To address this issue, an experiment was conducted where \(r\) was set to 0.65, as shown in Fig. S4.
|
2310.18738
|
TLM: Token-Level Masking for Transformers
|
Structured dropout approaches, such as attention dropout and DropHead, have
been investigated to regularize the multi-head attention mechanism in
Transformers. In this paper, we propose a new regularization scheme based on
token-level rather than structure-level to reduce overfitting. Specifically, we
devise a novel Token-Level Masking (TLM) training strategy for Transformers to
regularize the connections of self-attention, which consists of two masking
techniques that are effective and easy to implement. The underlying idea is to
manipulate the connections between tokens in the multi-head attention via
masking, where the networks are forced to exploit partial neighbors'
information to produce a meaningful representation. The generality and
effectiveness of TLM are thoroughly evaluated via extensive experiments on 4
diversified NLP tasks across 18 datasets, including natural language
understanding benchmark GLUE, ChineseGLUE, Chinese Grammatical Error
Correction, and data-to-text generation. The results indicate that TLM can
consistently outperform attention dropout and DropHead, e.g., it increases by
0.5 points relative to DropHead with BERT-large on GLUE. Moreover, TLM can
establish a new record on the data-to-text benchmark Rotowire (18.93 BLEU). Our
code will be publicly available at https://github.com/Young1993/tlm.
|
Yangjun Wu, Kebin Fang, Dongxiang Zhang, Han Wang, Hao Zhang, Gang Chen
|
2023-10-28T15:42:47Z
|
http://arxiv.org/abs/2310.18738v1
|
# TLM: Token-Level Masking for Transformers
###### Abstract
Structured dropout approaches, such as attention dropout and DropHead, have been investigated to regularize the multi-head attention mechanism in Transformers. In this paper, we propose a new regularization scheme based on token-level rather than structure-level to reduce overfitting. Specifically, we devise a novel **T**oken-**L**evel **M**asking (TLM) training strategy for Transformers to regularize the connections of self-attention, which consists of two masking techniques that are effective and easy to implement. The underlying idea is to manipulate the connections between tokens in the multi-head attention via masking, where the networks are forced to exploit partial neighbors' information to produce a meaningful representation. The generality and effectiveness of TLM are thoroughly evaluated via extensive experiments on 4 diversified NLP tasks across 18 datasets, including natural language understanding benchmark GLUE, ChineseGLUE, Chinese Grammatical Error Correction, and data-to-text generation. The results indicate that TLM can consistently outperform attention dropout and DropHead, e.g., it increases by \(0.5\) points relative to DropHead with BERT-large on GLUE. Moreover, TLM can establish a new record on the data-to-text benchmark Rotowire (\(18.93\) BLEU). Our code will be publicly available at [https://github.com/Young1993/tlm](https://github.com/Young1993/tlm).
## 1 Introduction
In recent years, a variety of pre-trained language models based on the Transformer (Vaswani et al., 2017) architecture have been presented, such as BERT, GPT (Brown et al., 2020), and T5 (Raffel et al., 2022). These models push the state of the art forward in numerous NLP tasks.
With the rapid growth of model parameters, deep neural networks are highly likely to encounter overfitting challenges because supervised data is usually expensive and insufficient for large language models. This problem could cause the degradation of model generalization. To address this issue, regularization methods, such as dropout (Srivastava et al., 2014) and subsequent research (Wan et al., 2013; Fan et al., 2020; Liang et al., 2021), have been developed from a structural perspective, which train with "thinned" subnetworks. The key feature of dropout is that it randomly drops units from the neural network during training, preventing units from co-adapting too much.
To further mitigate overfitting for Transformers, structured methods such as DropHead (Zhou et al., 2020) are proposed to drop entire attention heads in the attention mechanism with the purpose of preventing a small subset of heads from dominating the whole model.
Figure 1: Illustrations of the attention score with Attention dropout, DropHead, and TLM. The row denotes _Query_, and the column represents _Key_. Attention dropout randomly drops some attention weights. DropHead directly drops entire attention heads. Regarding TLM, Self-masking (left) denotes that the scores for the masked column are invalid. Siblings-masking (right) means the scores for the row and column are useless except for the masked token itself.
Dropping entire attention heads may result in losing a significant amount of feature information. Attention dropout is the application of the dropout technique to the attention mechanism. It arbitrarily drops some attention weights in the matrix calculation of self-attention. However, experiments in DropHead and our preliminary trials (shown in Table 1) demonstrate that the difference is not obvious with or without attention dropout.
In this work, we introduce a novel regularization scheme based on the token level instead of a structural perspective. This method, **T**oken-**L**evel **M**asking (TLM), is a training technique to regularize the connections among tokens during the attention calculation in each layer of Transformer blocks. Specifically, considering the example shown in Fig.1 and 2, TLM contains two techniques: 1) Siblings-masking. The first step of this method is that we use a random function1 to select a percentage of the tokens in the _k-th_ layer, e.g., the masked token 'I' (the gray block in Fig.2) is excluded from the calculation of attention weights among the sibling tokens but copies itself, and its neighboring tokens consider other siblings in addition to 'I'. Then, we feed the Feed-Forward Network with the attention output to obtain the new hidden state as the input of the next layer. 2) For Self-masking, we borrow the idea from CBOW [10], where the attention score of a token is entirely contributed by the others. The difference from Siblings-masking is that the masked token 'I' is forbidden from attending to the attention computation. In the training phase, we randomly invoke one of the two masking strategies for each batch with a \(50\)-\(50\) chance2. In this manner, the networks are forced to utilize partial neighbors' attention information, not the whole (i.e. the connections between the masked tokens and their neighboring tokens are invalid, which is implemented by assigning a large negative number in the matrix of attention weights). This scheme introduces a bottleneck: the networks must work hard to become robust and produce a meaningful representation.
Footnote 1: Bernoulli function: [https://pytorch.org/docs/stable/generated/torch.bernoulli.html?highlight=bernoulli](https://pytorch.org/docs/stable/generated/torch.bernoulli.html?highlight=bernoulli)
Footnote 2: For simplicity, we conduct most of the experiments with 50-50 chance and ablate the proportion in Appendix D.
To confirm the effectiveness of our approach, we conducted extensive experiments on 18 popular datasets. The tasks range from English natural language understanding benchmark GLUE, ChineseGLUE, and Chinese Grammatical Error Correction, to data-to-text generation. The experimental results demonstrate that our TLM with the backbones can substantially improve performance. Particularly, our method with BERT-base/BERT-large boosts the score of DropHead from \(79.2\) to \(79.9\) and \(81.7\) to \(82.2\) on GLUE, and it achieves a new state-of-the-art performance (\(18.93\) BLEU) on data-to-text generation. Further experimental analyses demonstrate that our TLM is more effective in alleviating overfitting than the baselines.
Our main contributions are summarized as follows:
* To reduce overfitting, we present TLM, a novel, simple yet effective training technique to refine the self-attention computation flow in the multi-head attention mechanism without modifying the structure of Transformer models.
* TLM can seamlessly integrate with pre-trained Transformer models without extra cost. The experiments on 18 popular datasets indicate that TLM can lead to consistency improvements compared to the strong baselines.
* Further analyses demonstrate that TLM can reduce overfitting and enhance the robustness of the networks.
## 2 Related Work
Mask language modeling.In BERT[14], 15% of input tokens are selected, and 80% of those selected tokens are replaced with the special token [MASK], while 10% remain unchanged and the remaining 10% are randomly replaced. However, this random masking and replacement are only done once during data pre-processing, resulting in a mismatch between pre-training and fine-tuning. RoBERTa [10] duplicates training data 10 times to address this issue, but this requires more training steps.
\begin{table}
\begin{tabular}{l c c} \hline \hline Model & QQP & RTE \\ BERT -w Attention-dropout & 87.0 & 61.8 \\ BERT -w/o Attention-dropout & 86.9 & 62.0 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Results of BERT w and w/o attention-dropout on QQP and RTE. The descriptions of datasets are available in section 4.1.
In contrast, our proposed method modifies the attention computation flow in each layer without requiring additional data or a special token.
Attention Mask.The attention mask in Transformers is originally used to ignore uninformative tokens, called padding tokens, in order to ensure that all the sequences in a batch have equal length, or to prevent the networks from seeing tokens at subsequent positions in the auto-regressive decoder, such as in UniLM Dong et al. (2019), OPT Zhang et al. (2022), and LLaMA Touvron et al. (2023). In this work, we also employ the attention mask with the purpose of mitigating overfitting, i.e., we remove some attention links among tokens during training and keep the computation the same as in the standard Transformer at inference.
Regularization.Common regularization methods include L1 and L2 regularization, dropout, early stopping, and data augmentation. Dropout Srivastava et al. (2014) is a very popular training technique that aims to avoid overfitting by arbitrarily dropping some units. LayerDrop Fan et al. (2020) randomly drops some entire substructures or components. Data dropout Iyyer et al. (2015) is performed on the input data level as data augmentation. Compared to the previous methods, our approach aims to carefully control the connections between tokens in the multi-head attention mechanism.
## 3 Approach
In this section, we first review the self-attention computing workflow of the Transformer and then describe our TLM in more detail.
### Multi-head Attention in Transformer
In the research of the Transformer Vaswani et al. (2017), the calculation for the vanilla attention is formulated as follows:
\[Attn(Q,K,V)=softmax(\frac{S(QK)}{\sqrt{d_{emb}/H}})V \tag{1}\]
\[S(QK)=MatMul(QK^{T}) \tag{2}\]
where the queries \(Q\), keys \(K\), and values \(V\) are all matrices of shape \(Batch\ size\times sequence\ length\times d_{emb}\); \(d_{emb}\) and \(H\) denote the embedding dimension and the number of heads, respectively.
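For concreteness, the following is a minimal NumPy sketch of the scaled dot-product attention of Eqs. (1)-(2) for a single head; the function names and toy shapes are illustrative only and are not taken from any released implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax along the chosen axis
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def vanilla_attention(Q, K, V, d_emb, H):
    """Scaled dot-product attention of Eqs. (1)-(2) for one head.

    Q, K, V: arrays of shape (batch, seq_len, d_emb // H).
    """
    S = Q @ K.transpose(0, 2, 1)            # Eq. (2): S(QK) = Q K^T
    A = softmax(S / np.sqrt(d_emb / H))     # Eq. (1): scale by sqrt(d_emb / H)
    return A @ V

# toy usage: 2 sentences, 4 tokens, d_emb = 8, H = 2 heads
B, N, d_emb, H = 2, 4, 8, 2
Q = np.random.randn(B, N, d_emb // H)
K = np.random.randn(B, N, d_emb // H)
V = np.random.randn(B, N, d_emb // H)
out = vanilla_attention(Q, K, V, d_emb, H)  # shape (2, 4, 4)
```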
There are three types of multi-head attention: 1) **Self-Attention** usually denotes the self-attention in the encoder, which means that all of the keys (\(K\)), values (\(V\)), and queries (\(Q\)) come from the same place in a self-attention layer. Thus, every position in the encoder can attend to all positions. 2) **Cross-Attention** is applied in the encoder-decoder layer; its feature is that the queries (\(Q\)) come from the decoder layer, while the memory keys/values come from the output of the encoder. In this manner, each position in the decoder can attend to
Figure 2: The computing flow of TLM for the sentence ’_I ate some pizza.’_. Our TLM can be employed on the encoder, encoder-decoder, and decoder architecture in the Transformer.
all positions in the input sequence. 3) **Masked-Attention** is the auto-regressive form of attention in the decoder: the self-attention layers allow each position in the decoder to attend to all positions in the decoder up to and including the current position.
### Tlm
The core computing procedure of TLM is shown in Fig.2 and Algorithm (1). In a nutshell, we modify the computation of self-attention by adding a novel step that controls the connections between tokens, which in turn influences the attention scores and the contextual semantic representation. We utilize TLM in the training phase while keeping the attention calculation identical to the vanilla Transformer during testing.
In the training phase, we first compute attention weights \(\text{S}(QK)\) by performing Eq.2, and \(\text{S}(QK)\) denotes the similarity between the queries \(Q\) and keys \(K\) for the input tokens. Then, the random function _Bernoulli_ is executed to select a fixed rate \(R\) of masked tokens in the _k-th_ layer of Transformer blocks at each batch.
\[\hat{Attn}\_M=Bernoulli(Attn\_M,R) \tag{3}\]
where \(Attn\_M\) refers to the attention-mask vector 3. The tokens selected as masked tokens are stored in memory with an attention-mask value of 0. When the rate \(R\) is set to \(0.1\), 10% of the tokens are masked.
Footnote 3: Attention-mask is abbreviated as \(Attn\_M\), e.g., \(Attn\_M=[1,1,...0]\). A value of 1 denotes an input token, while 0 denotes a padding token or a masked token.
To match the dimension of the attention weight \(\text{S}(QK)\), we expand the attention-mask vector \(\hat{Attn}\_M\) into the matrix \(M\):
\[M=Extend(\hat{Attn}\_M) \tag{4}\]
Here, \(M\in\mathbb{R}^{B\times H\times N\times N}\). \(B\), \(H\), and \(N\) refer to batch size, the number of self-attention heads, and the max input sequence length, respectively. The weights of masked tokens in \(M\) are set to the minimum value of the tensor. Then, we can modify the Eq.1 and 2 as follows:
\[\hat{Attn}(Q,K,V)=softmax(\frac{\hat{S}(QK)}{\sqrt{d_{emb}/H}})V \tag{5}\]
\[\hat{S}(QK)=\text{S}(QK)+M \tag{6}\]
By performing Eq.6, the attention scores of masked connections among tokens become very large negative numbers, so their weights equal 0 after executing _softmax_. This renders the connections to the masked tokens ineffective in the attention computation.
Next, we feed the Feed-Forward Network with \(\hat{Attn}(Q,K,V)\) to obtain the hidden state \(h_{t}\). We recursively invoke the identical operation until all the layers have been traversed and yield the final output tensors.
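The training-time computation flow above can be summarized in a short PyTorch-style sketch. This is our own simplified rendering of Eqs. (3)-(6), not the authors' released code; in particular it applies one shared mask over the key dimension and does not distinguish between Siblings-masking and Self-masking, which differ in exactly which entries of the \(N\times N\) score matrix are masked.

```python
import torch

def tlm_attention(Q, K, V, attn_mask, rate=0.1, d_emb=768, H=12, training=True):
    """Self-attention with token-level masking (a sketch of Eqs. (3)-(6)).

    Q, K, V:   (B, H, N, d_emb // H) tensors for all heads.
    attn_mask: (B, N) tensor with 1 for real tokens and 0 for padding.
    rate:      fraction R of tokens masked per batch during training.
    """
    S = Q @ K.transpose(-2, -1)                          # raw scores, Eq. (2)
    if training:
        # Eq. (3): Bernoulli selection of masked tokens among the real ones
        keep = torch.bernoulli(torch.full_like(attn_mask.float(), 1.0 - rate))
        masked_attn = attn_mask.float() * keep           # masked tokens -> 0
    else:
        masked_attn = attn_mask.float()                  # vanilla attention at test time
    # Eq. (4): extend the (B, N) vector to an additive matrix M of shape (B, 1, 1, N);
    # masked/padding positions receive the minimum representable value
    M = (1.0 - masked_attn)[:, None, None, :] * torch.finfo(S.dtype).min
    S_hat = S + M                                        # Eq. (6)
    A = torch.softmax(S_hat / (d_emb / H) ** 0.5, dim=-1)  # Eq. (5)
    return A @ V

# toy usage
B, H, N, d = 2, 12, 6, 64
Q, K, V = (torch.randn(B, H, N, d) for _ in range(3))
out = tlm_attention(Q, K, V, torch.ones(B, N), rate=0.1, d_emb=H * d, H=H)
```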
## 4 Experiments
To verify the generality and effectiveness of our proposed TLM, we perform extensive experiments on a wide variety of tasks with \(18\) benchmark datasets, including the English natural language understanding benchmark GLUE (\(10\) datasets), ChineseGLUE (\(6\) datasets), Chinese Grammatical Error Correction (\(1\) dataset), and data-to-text generation (\(1\) dataset). Note that our method can be utilized both in pre-training and in fine-tuning, but we only evaluate TLM during fine-tuning in this work due to limited computational resources. In the following, we present the key findings; more details can be found in the Appendix.
### English Language Understanding
Dataset.GLUE benchmark is a collection of diverse natural language understanding tasks introduced by Wang et al. (2018). GLUE consists of three types of tasks: single-sentence, similarity and paraphrase, and inference tasks. Single-sentence tasks require models to predict the grammaticality or sentiment of a given sentence. Similarity and paraphrase tasks involve determining the degree of semantic equivalence between sentence pairs. Inference tasks aim to capture the entailment relationship between sentences.
Model and Training.For a fair comparison, we choose BERT as the backbone and train it
using BERT-small, BERT-base, and BERT-large to explore the effect of model size. The experiments include BERT without attention-dropout (Att-dropout) and with Att-dropout/DropHead/TLM at rates of \(10\%/20\%/5\%\) for each task. We then submit the prediction files obtained with 3 different random seeds to the official GLUE website4 and obtain the test-set scores. As for the hyper-parameters, the learning rate, dropout, and batch size are uniformly set to 2e-5, 0.1, and 32 for all the tasks. All the models are trained and evaluated on a 24G Nvidia RTX3090.
Footnote 4: [https://gluebenchmark.com/](https://gluebenchmark.com/)
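For reference, the per-task fine-tuning setup just described can be written down as a small configuration sketch. The dictionary layout and the seed values below are our own illustrative choices (the actual seeds are not reported); only the hyper-parameter values restate the text above.

```python
# Illustrative GLUE fine-tuning configuration (values from the text above;
# the seed list is hypothetical).
glue_config = {
    "learning_rate": 2e-5,
    "dropout": 0.1,
    "batch_size": 32,
    "regularization_rates": {"att_dropout": 0.10, "drophead": 0.20, "tlm": 0.05},
    "seeds": [1, 2, 3],
}

for seed in glue_config["seeds"]:
    # one fine-tuning run per seed; prediction files are then submitted to the GLUE site
    print("run", seed, "lr", glue_config["learning_rate"], "batch", glue_config["batch_size"])
```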
Analysis.We present the specific results in Table 2. Specifically, we can find that there is only a slight increase in the _AVG_ (\(0.2\) points) with Att-dropout at small/base/large sizes, and both TLM and DropHead are well ahead of Att-dropout by a large margin. Compared to DropHead, our method shows a more significant improvement (\(0.9/0.7/0.5\) points) while scaling model size, which provides evidence that our method is effective in improving the understanding of natural language.
Regarding sentence-level classification on CoLA, which only contains 8,511 sentences in the training set, our method demonstrates a significant improvement from \(27.5\) to \(35.3\), \(51.0\) to \(53.7\), and \(59.7\) to \(61.7\) as we scale the size from small and base to large. This finding is consistent with the design principle of TLM for mitigating overfitting. Thus, our method achieves much larger performance gains than Att-dropout and DropHead when applied to small-scale supervised data. When scaling the size of the dataset, such as MNLI-m, which contains 392k/9k sentence pairs in the training/test sets, our approach still outperforms DropHead by \(0.7\) points at both the BERT-small and BERT-base sizes.
On similarity and paraphrase tasks, such as
\begin{table}
\begin{tabular}{l c c c c c c c c c c c c} \hline \hline Model & CoLA & SST-2 & MRPC & STS-B & QQP & MNLI-m & MNLI-mm & QNLI & RTE & WNLI & AVG & STD \\ BERT-small & & & & & & & & & & \\ w/o Att-dropout & 27.5 & 89.3 & 83.2 & 78.9 & 86.9 & 77.5 & 76.8 & 86.2 & 62.0 & 62.1 & 73.0 & 2.8e-4 \\ +Att-dropout & 27.8 & 89.7 & 83.4 & 79.2 & 87.0 & 77.6 & 77.0 & 86.4 & 61.8 & 62.3 & 73.2 & 4.3e-2 \\ +DropHead & 31.7 & 89.6 & 83.2 & 80.3 & 87.2 & 77.7 & 77.2 & 87.3 & 62.5 & 63.0 & 74.1 & 1.1e-3 \\ +TLM & **35.3** & **90.8** & **83.5** & **81.0** & **87.8** & **78.4** & **77.8** & **87.5** & **63.4** & **64.4** & **75.0** & 2.8e-3 \\ \hline BERT-base & & & & & & & & & & & \\ w/o Att-dropout & 51.0 & 92.3 & **88.2** & 84.2 & 87.7 & 83.5 & 83.2 & 90.3 & 63.0 & 63.0 & 78.6 & 3.1e-3 \\ +Att-dropout & 51.9 & 92.8 & 87.3 & 84.4 & 88.0 & 84.0 & 83.4 & 90.4 & 62.4 & 63.0 & 78.8 & 1.1e-3 \\ +DropHead & 52.0 & **93.4** & 87.8 & **84.5** & 87.5 & 83.6 & 83.1 & 90.4 & 65.2 & 64.4 & 79.2 & 1.8e-3 \\ +TLM & **53.7** & 93.3 & 87.9 & **84.5** & **88.6** & **84.3** & **83.6** & **90.5** & **67.5** & **65.1** & **79.9** & 6.8e-3 \\ \hline BERT-large & & & & & & & & & & & \\ w/o Att-dropout & 59.7 & 93.9 & 88.0 & 86.1 & 88.7 & 86.5 & 85.6 & 92.5 & 69.7 & 63.7 & 81.4 & 2.6e-3 \\ +Att-dropout & 59.8 & **94.3** & 87.9 & **86.5** & 88.9 & 86.6 & 85.7 & 92.7 & 69.6 & 63.7 & 81.6 & 7.9e-4 \\ +DropHead & 60.1 & 94.1 & 88.1 & 85.9 & 89.2 & **86.7** & 85.8 & 92.6 & 70.1 & 64.4 & 81.7 & 6.5e-3 \\ +TLM & **61.0** & 94.2 & **88.6** & **86.5** & **89.3** & **86.7** & **86.1** & **92.8** & **70.8** & **66.4** & **82.2** & 4.7e-4 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Fine-tuned BERT-small, BERT-base, and BERT-large performances on the English natural language understanding benchmark GLUE. Each method is tuned with 3 different random seeds. AVG denotes the average result and STD the standard deviation of the 3 results. The highest numbers are in bold.
\begin{table}
\begin{tabular}{l c c c c c c c c} \hline \hline Model & AFQMC & TNEWS1.1 & IFLYTEK & CMNLI & CLUEWSC & CSL & AVG & STD \\ BERT -w/o Att-dropout & 73.6 & 56.7 & 60.2 & 79.4 & 62.2 & 80.2 & 68.6 & 2.2e-2 \\ BERT+Att-dropout & 73.7 & 56.6 & 60.3 & **79.7** & 62.1 & 80.4 & 68.8 & 1.1e-3 \\ BERT+DropHead & 73.6 & 57.0 & 60.6 & 79.0 & 71.4 & 80.5 & 70.4 & 5.2e-4 \\ BERT+TLM & **73.8** & **58.2** & **61.5** & 79.3 & **73.4** & **81.4** & **71.3** & 1.1e-3 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Fine-tuned BERT-base performances on Chinese language understanding benchmark CLUE. The AVG denotes the average results and STD is the standard deviation of 3 results.
QQP, our method improves the performance by \(0.6/1.1\) points (\(87.2\to 87.8\) and \(87.5\to 88.6\)) compared to DropHead at the small and base sizes, which provides evidence that randomly dropping entire heads benefits sentence-pair representations less than careful token-level masking. When scaling to BERT-large, the improvement is less pronounced than for BERT-base; we speculate that \(89.3\) is already approaching \(91.1\) (the SOTA performance on the GLUE leaderboard), where further gains from regularization methods alone are limited.
As to WNLI, this inference task requires the model to fully understand the contextual information provided by words or phrases in a sentence. The experimental results indicate that our TLM, by carefully controlling masking, can bring more benefits than DropHead and attention dropout.
### Chinese Language Understanding
Dataset.CLUE, introduced by Xu et al. (2020), is a widely used benchmark for Chinese Language Understanding Evaluation. Specifically, the _TNEWS1.1_ task classifies short news titles into 15 categories, and the IFLYTEK task involves assigning a label from a total of 119 categories to app descriptions. Other tasks include _CLUEWSC2020_ (CLUEWSC), which determines co-reference between pronouns and nouns in a sentence. AFQMC aims to judge the semantic similarity of sentence pairs. CSL is a keyword recognition task. CMNLI determines the entailment relationship between sentence pairs.
Model and Training.We choose the Chinese BERT-base from huggingface5 as our backbone and train it with/without attention dropout (Att-dropout), and with TLM or DropHead. We submit the results with 3 different random seeds to the CLUE leaderboard6 and obtain the scores. We evaluate our model with a masking rate of \(5\%\) and DropHead at a rate of \(10\%\). Other hyper-parameters are the same as in the original BERT.
Footnote 5: [https://huggingface.co/bert-base-chinese](https://huggingface.co/bert-base-chinese)
Footnote 6: [https://www.cluebenchmarks.com/](https://www.cluebenchmarks.com/)
Analysis.The overall results with TLM, DropHead, Att-dropout, and without Att-dropout are reported in table 3. First, both DropHead and our TLM notably boost the _AVG_ compared to training with or without attention dropout, which verifies the effectiveness of these two regularization methods. Concerning the classification tasks, our approach significantly outperforms DropHead on the short-text _TNEWS1.1_ by \(1.2\%\) and on the long-text _IFLYTEK_ by \(0.9\%\). A possible explanation for these improvements is that Siblings-masking and Self-masking introduce a bottleneck, where the networks can only utilize part of their neighbors' attention information. Thus, the networks have to work hard to become robust, and this scheme benefits more than DropHead. We also notice that adding regularization methods results in performance degradation on CMNLI; we leave the investigation of the reasons for future work.
### Chinese Grammatical Error Correction
Dataset.Chinese Grammatical Error Correction (CGEC) is a task that automatically detects and corrects the grammatical errors in the input sentence without revising the meaning of the original sentence as much as possible. For this task, we evaluate our proposed method on the benchmark dataset CGED7 introduced by Rao et al. (2020). The set contains 28,031/3,115 utterances for training/validation with the average length of sentences 46.24/45.55. For the test sets, CGED2021 and CGED2020 comprise 2,294 and 1,457 utterances.
Footnote 7: [https://github.com/blcuicall/cged_datasets](https://github.com/blcuicall/cged_datasets)
Model and Training.To demonstrate the effectiveness of our method, we choose the strong pre-trained model Chinese Bart8 as our backbone and fine-tune it with TLM, DropHead, and Att-dropout at the rate of \(10\%\). We also compare results with the top-performing GECToR baselines Omelianchuk et al. (2020) built on BERT, RoBERTa, and ELECTRA. The learning rate, batch size, and number of epochs are set to 3e-5, 32, and 150, respectively.
Footnote 8: [https://huggingface.co/fnlp/](https://huggingface.co/fnlp/)
\begin{table}
\begin{tabular}{l c} \hline \hline Model & Score \\ GECToR-BERT & 32.8 \\ GECToR-RoBERTa & 33.5 \\ GECToR-ELECTRA & 32.7 \\ MAN Fan et al. (2021) & 41.3 \\ Bart-base -w/o Att-dropout & 42.0 \\ Bart-base + Att-dropout & 42.3 \\ Bart-base + DropHead & 42.7 \\ Bart-base + TLM & **43.7** \\ \hline \hline \end{tabular}
\end{table}
Table 4: The overall results on CGED2021.
Analysis.The definition of metrics is from the Chinese Learner Text Correction9, and we present the results in Table 4 obtained by the official scripts. First, we can observe that our TLM outperforms GECToR-BERT/RoBERTa/ELECTRA by \(10.9\)/\(10.2\)/\(11.0\) points. In contrast with MAN (A more detailed comparison can be found in Appendix C), our approach leads to an improvement of \(2.4\) points without requiring extra model parameters like MAN. The scale of the set (\(28k\) sentences) is relatively small, while the model size of Bart (over \(138M\)) is quite large, which may easily cause overfitting. Under this circumstance, our method improves by \(1.0\) and \(1.4\) points compared to DropHead and attention-dropout. We speculate that DropHead drops entire heads of attention and may lose some syntactic information. Our token-level masking has the advantage of detecting syntax errors and correcting the errors because the absence of some tokens will strengthen the model's sensitivity to syntactic information.
Footnote 9: [https://github.com/blcuicall/CCL2022-CLTC/tree/main/metrics/track2](https://github.com/blcuicall/CCL2022-CLTC/tree/main/metrics/track2)
### Data-to-Text Generation
Dataset.Data-to-text generation is a significant challenge that aims to automatically produce a description representing the valuable key information in structured data. The benchmark dataset is ROTOWIRE, introduced by Wiseman et al. (2017), whose outputs are summaries of NBA basketball games and whose inputs are the corresponding records representing the performance of teams and players. The summaries are professionally written and relatively well structured, with an average generation length of 337.1 words per example. Following prior work10, the set has been split into training, validation, and test sets consisting of 3,398/727/728 summaries, respectively.
Footnote 10: [https://github.com/harvardnlp/boxscore-data](https://github.com/harvardnlp/boxscore-data)
Model and Training.In this task, we take the encoder-decoder model T5-small11 as our backbone and compare it with SOTA models. We train T5 with TLM, DropHead, and Att-dropout at the rate of \(10\%\). As to the hyper-parameters, we set the beam size to 8, the learning rate to 5e-4, and the max text length to 512. To penalize repetition, the repetition penalty is set to 2.
Footnote 11: [https://huggingface.co/t5-small](https://huggingface.co/t5-small)
Analysis.We report the BLEU results in table 5. The current SOTA model HierarchicalEncoder proposes Number Ranking and Importance Ranking as two auxiliary tasks to capture the individual relations between the records. Our approach improves by \(0.97\) points over HierarchicalEncoder and achieves a new SOTA. Meanwhile, TLM is extremely easy to train with T5 in an end-to-end manner, and no extra modules or tasks are required. It also improves by \(0.92\) points over DropHead; the advantage of our TLM is that the masking scheme encourages the model to capture complex relations among the records and to select the salient information in the table, whereas dropping entire heads in DropHead may leave the network less sensitive to complicated feature relationships.
### Ablation study
Although the experimental results are superior, the effectiveness of our TLM has not yet been thoroughly investigated. Thus, we conduct further studies to gain a better understanding of our approach. As to the selection of datasets, STS-B and CoLA are sentence-level datasets, IFLYTEK and CSL are long-text datasets, and CGED2020/2021 are grammatical error correction datasets. We choose these tasks so that different text lengths are covered while
Figure 3: The comparison among Attention Dropout, DropHead, and TLM on STS-B.
\begin{table}
\begin{tabular}{l c} \hline \hline Model & BLEU \\ ENT (Puduppully et al., 2019) & 16.12 \\ DUV (Gong et al., 2020) & 15.92 \\ HierarchicalEncoder (Li et al., 2021) & 17.96 \\ T5 -w/o Att-dropout & 18.00 \\ T5 + Att-dropout & 17.98 \\ T5 + DropHead & 18.01 \\ T5 + TLM & **18.93** \\ \hline \hline \end{tabular}
\end{table}
Table 5: BLEU results on Rotowire.
diversity is still guaranteed.
**TLM vs Attention Dropout/DropHead.** To analyze the relationship among TLM, attention dropout (Att-dropout), and DropHead applied to self-attention layers in Transformers, we first train BERT-small only with TLM/Att-dropout/DropHead in the [\(0.05\), \(0.10\), \(0.15\), \(0.20\)] range to explore their influences on performance. The results are presented in Fig.3. We keep all the parameters the same except for the rates. The finding is that both TLM and DropHead are well ahead of Att-dropout by a large margin on STS-B, and our method is more stable than DropHead.
Second, we test the effect of different combinations on the self-attention models. As shown in Table 6, we observe that adding any type of regularization method can improve the performance of the vanilla model, and our TLM outperforms attention dropout and DropHead under the same conditions by a large margin. When combined together, we find that the performance is not optimal, especially when all regularization methods are used together. This is mainly due to the fact that excessive use of regularization may cause training instability.
Effect on training data ratio.We further investigated the impact of different training data ratios by training BERT-base+TLM using 50%/65%/80% of supervised data on CSL, and the results are presented in Fig.4. In contrast to DropHead, TLM can achieve comparable performances even when trained with only 50% of data. Moreover, our method outperforms DropHead with 65% of data on CSL. The improvements may be attributed to the token-level masking, as this strategy encourages the model to capture meaningful context representation with long input sentences. Therefore, our TLM can benefit the robustness of the networks and reduce the dependence on supervised data.
Effect of TLM rate.We conduct further testing on the impact of varying the masking rate in the encoder and decoder of Bart-base, ranging from 10% to 20%. As outlined in Table 7, the _SCORE_ results are better than the baseline, except at the rate of \(20\%\). A possible explanation for why our TLM underperforms the baseline at the rate of \(20\%\) is that an excessive amount of masking may confuse the networks and decrease their ability to comprehend syntactic information; an overly large rate should therefore be used with caution. Overall, our optimal option is the pair (10%, 15%) for the encoder and decoder, which outperforms the strong baseline by 1.4 points (\(37.8\to 39.2\)). This improvement demonstrates that our TLM can enhance the understanding of grammatical information and guide the model toward correcting grammatical errors.
Effect of Siblings-masking/Self-masking.We also analyze the influence of our masking techniques, and the results at the size of BERT-small are reported in Table 8. It can be observed that the
\begin{table}
\begin{tabular}{l r} \hline \hline Method & STS-B \\ BERT w/o regularization & 78.7 \\ + dropout & 78.9 \\ + dropout + Att-dropout & 79.2 \\ + dropout + DropHead & 80.3 \\ + dropout + TLM & **81.0** \\ + dropout + DropHead + Att-dropout & 80.4 \\ + dropout + DropHead + TLM & 80.0 \\ + dropout + TLM + Att-dropout & 80.7 \\ + All & 79.9 \\ \hline \hline \end{tabular}
\end{table}
Table 6: The effect of different regularization combinations. All the rates equal \(0.1\).
Figure 4: Results with different training data ratios on the test set of CSL.
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline \multirow{2}{*}{**Model**} & \multicolumn{5}{c}{**CGED2020**} \\ \cline{2-6} & \(E_{R}\) & \(D_{R}\) & DET-F1 & COR-F1 & SCORE \\ \hline Bart & - & - & 80.5 & 19.3 & 37.8 \\ TLM & 20 & 20 & 78.9 & 19.3 & 37.1 \\ TLM & 15 & 15 & 81.4 & 19.5 & 38.3 \\ TLM & 15 & 10 & 82.1 & 20.1 & 39.1 \\ TLM & 10 & 15 & **82.4** & 20.2 & **39.2** \\ TLM & 10 & 10 & 82.1 & **20.3** & 38.9 \\ \hline \hline \end{tabular}
\end{table}
Table 7: The results on CGEC test sets of CGED2020. The \(E_{R}\) and \(D_{R}\) refer to the rate of masking in the encoder and decoder. DET-F1 means the F1 score for detection, and COR-F1 denotes the F1 score for correction. The complete results can be found in Appendix B.
results decrease by \(2.6\%\) and \(0.6\%\) on CoLA and STS-B when removing the Siblings-masking technique. Similarly, the results without Self-masking decrease by \(3.0\%\) and \(1.2\%\), and there is a drop of \(7.6\%\) and \(2.2\%\) without both techniques. These findings highlight the importance of both Siblings-masking and Self-masking methods.
## 5 Conclusion
In this paper, we propose a simple training strategy, Token-Level Masking (TLM), to reformulate the computation flow of multi-head self-attention for reducing overfitting. During training, we randomly invoke one of two masking techniques: 1) Siblings-masking, where the masked token is forbidden to interact with its siblings when computing attention weights, and 2) Self-masking, where the attention weights for the masked token rely solely on other tokens. This regularization scheme forces the networks to work hard to become robust and to acquire meaningful information.
To verify the effectiveness of our proposed method, we conducted various experiments with 18 benchmark datasets. The results demonstrate that TLM can consistently improve the performances compared to the strong baselines and even achieve SOTA on data-to-text generation. Through further analysis, we observe that our TLM is more stable than DropHead or attention dropout. Meanwhile, it can seamlessly integrate with pre-trained models.
## Limitations
Here, we list several of what we consider to be limitations:
1. The rate of our masking is a hyper-parameter that needs to be tuned, as the experiments shown in Table 7, the performance may underperform when the rate is set too large (e.g., over 20%).
2. We argue that TLM can also be applied to vision or speech Transformer-based networks, such as VIT (Dosovitskiy et al., 2020) and UniSpeech (Wang et al., 2021), we leave it as the future work for further validation. Meanwhile, we haven't yet estimated the performance by combining TLM with extremely large language models, such as T5-11B and LLaMA.
3. Due to the limitation of computational resources, we merely fine-tuned the pre-trained models with TLM in this work. The effectiveness of TLM applied to the pre-training phase needs to be further validated.
4. In contrast with dropout, TLM can only apply to Transformer-based networks, not all the neural networks, such as CNN or LSTM.
5. Despite numerous ablation studies being performed, the explanation of TLM's optimization on the self-attention mechanism remains insufficient, especially in terms of the effect on attention distribution. Further exploration is needed.
## Acknowledgements
The authors gratefully acknowledge Yao Zhao, Min Liang, Mengqi Zhang, Xiangyu Jin, Fengli Shi, Shanhoo Luo, and Fang Shu for giving valuable suggestions on this study. Our thanks also go to all the anonymous reviewers for their positive feedback. The work is supported by the National Key Research and Development Project of China (2022YFF0902000). In addition, we thank HANGZHOU YIYOULIAO TECHNOLOGY CO LTD [https://www.yiyouliao.com/](https://www.yiyouliao.com/) for providing computing resources.
|
2301.01528
|
Dielectric relaxation induced by oxygen vacancies in
Na$_{0.5}$Bi$_{0.5}$TiO$_{3}$ ceramics
|
Dielectric permittivity was studied in ceramics of relaxor ferroelectric
bismuth-sodium titanate Na$_{0.5}$Bi$_{0.5}$TiO$_{3}$. The measurements were
performed on as sintered and heat treated in vacuum samples. The diffuse
dielectric anomalies associated with the structural phase transitions were
observed in as sintered samples. The intense peak of permittivity ($
\varepsilon_{\text{max}} \sim 10^{4}$) appeared after heat treating in vacuum.
The anomaly of $\varepsilon(T)$ was contributed by slow polarization processes
($f<10$ kHz) and was non-stable, vanishing on heating in air up to $\sim 800$
K. Temperature and frequency dependencies of $\varepsilon$ were described by
using Cole-Cole model with accounting thermally stimulated decay of the
non-stable polarization. It is supposed that the dielectric anomaly is
determined by space charge polarization mechanism. Oxygen vacancies
V$_{\rm{O}}^{\bullet \bullet}$ and electrons localized on titanium ions
Ti$'_{\rm{Ti}}$
are assumed to be responsible for the phenomenon observed.
|
V. M. Sidak, M. P. Trubitsyn, T. V. Panchenko
|
2023-01-04T10:47:20Z
|
http://arxiv.org/abs/2301.01528v1
|
###### Abstract
Dielectric permittivity was studied in ceramics of relaxor ferroelectric bismuth-sodium titanate Na\({}_{0.5}\)Bi\({}_{0.5}\)TiO\({}_{3}\). The measurements were performed on as sintered and heat treated in vacuum samples. The diffuse dielectric anomalies associated with the structural phase transitions were observed in as sintered samples. The intense peak of permittivity (\(\varepsilon_{\rm max}\sim 10^{4}\)) appeared after heat treating in vacuum. The anomaly of \(\varepsilon(T)\) was contributed by slow polarization processes (\(f<10\) kHz) and was non-stable, vanishing on heating in air up to \(\sim 800\) K. Temperature and frequency dependencies of \(\varepsilon\) were described by using Cole-Cole model with accounting thermally stimulated decay of the non-stable polarization. It is supposed that the dielectric anomaly is determined by space charge polarization mechanism. Oxygen vacancies \(\mathrm{V}_{\mathrm{O}}^{\bullet\bullet}\) and electrons localized on titanium ions \(\mathrm{Ti}^{\prime}_{\mathrm{Ti}}\) are assumed to be responsible for the phenomenon observed.
Keywords: dielectric properties, permittivity, perovskites, defects
## 1 Introduction
High sensitivity for external fields is the most valuable requirement for functional materials used to transform energy from certain kind to another one. An increased susceptibility often results from lattice instability in the range of structural phase transition. That is why crystalline compounds undergoing structural transformations are intensively investigated by researchers and technologists involved in creation of new functional materials for piezoelectric, thermoelectric, photovoltaic and other converters. The crystals with perovskite ABO\({}_{3}\) structure have found wide range of applications in modern electronics. Consequently, the compounds of the perovskite family are among the most popular objects for studies in materials sciences. Variations of chemical composition, formation of the structure on nano- and micrometer levels, control on the lattice defects make it possible to create the materials with a broad variety of physical properties. Thus, ceramics based on Pb-ZrTiO\({}_{3}\) show extremely high electro-mechanical parameters and are used in piezoelectric devices [1]. Introducing the transition groups ions into the structural ABO\({}_{3}\) unit leads to the appearance of magneto-electrical coupling in multiferroic materials (BiMnO\({}_{3}\), BiFeO\({}_{3}\), TbMnO\({}_{3}\)) [2]. Some crystals with complex perovskite structure like ACu\({}_{3}\)Ti\({}_{4}\)O\({}_{12}\) (A = Ca, Ba, Sr) possess extremely high dielectric constants (\(\sim 10^{4}-10^{5}\)) which opens new prospects to be used as the materials with high permittivity in memory and microwave devices [3, 4].
It is well known that structural imperfections can strongly affect the properties of crystals and even are capable of inducing new phenomena which are not observed in a perfect lattice. That is why comprehensive information on typical intrinsic and extrinsic lattice defects becomes of high importance. At the present time, numerous works are aimed at studying the mechanisms of the influence of defects on the properties of crystals. Based on the knowledge gained, the technological approaches are developed that allow to control qualitatively and quantitatively the defectiveness of the crystal structure. Doping with iso- or heterovalent impurities, heat treatment in various atmospheres, applying external fields make it
possible to stimulate the appearance of the defects that improve the targeted characteristics or, conversely, to reduce the content of undesirable defects that degrade the useful parameters. Intensive experimental and technological studies aimed at controlling the subsystem of defects, have led to the appearance of the "defect engineering" concept [5].
Modern requirements in the field of environmental protection considerably changed the situation in the production of functional materials and urge the search for new compositions free from health harmful chemical elements. Lead-free bismuth-sodium titanate Na\({}_{0.5}\)Bi\({}_{0.5}\)TiO\({}_{3}\) (NBT) meets these requirements and shows a number of attractive physical properties. NBT crystal belongs to a group of complex perovskites with the structure of A\({}^{\prime}\)A\({}^{\prime}\)BO\({}_{3}\) type, where sodium and bismuth atoms are randomly distributed through the A-site (figure 1) [6]. The extremely high electro-mechanical coupling is the most prominent physical property of NBT crystal and solid solutions based on it [7]. The specific properties are directly related to the structural phase states observed in NBT. On cooling from high-temperature side NBT undergoes the following sequence of phase transitions: from cubic to tetragonal ferroelastic phase at \(T_{C}\approx 810\) K, and further to rhombohedral ferroelectric phase at \(T_{R}\approx 490\) K [6]. In the range of T\({}_{R}\), NBT demonstrates high permittivity and specific dielectric dispersion peculiar to relaxor ferroelectrics [8]. Besides, the properties of NBT can be substantially modified by doping and technological treatments [9; 10; 11; 12].
Recently, the strong dielectric anomaly (\(\sim 10^{4}\)) was observed near 670-690 K in NBT single crystal [9; 13; 14; 15] and Na\({}_{0.5}\)Bi\({}_{0.5}\)TiO\({}_{3}\) - BaTiO\({}_{3}\) (NBT-BT) solid solutions [16]. The \(\varepsilon(T)\) dependence showed an anomalous temperature behaviour and unusual frequency dispersion (here and below symbol \(\varepsilon\) without prime means real part of permittivity). In addition, permittivity peak disappeared after heat treatment in air (\(\sim 800\) K) and could be restored by heat treating in vacuum (\(\sim 1070\) K). The authors of [15] supposed that dielectric anomaly was contributed by the dipole defects formed by oxygen vacancies (\(\mathbf{V}_{\mathbf{O}}^{\bullet\bullet}\)) and electrons localized on the nearest titanium ions Ti\({}_{\mathrm{T}}^{\prime}\). The associated dipole defects (Ti\({}_{\mathrm{T}}^{\prime}\)-\(\mathbf{V}_{\mathbf{O}}^{\bullet\bullet}\))\({}^{\bullet}\) were considered as unstable and decomposing upon heating.
These results were obtained for NBT single crystals. Of course, for practical applications, NBT ceramics can be expected as more commercially and technologically acceptable. In this paper anomalous dielectric relaxation mentioned above is studied in NBT ceramics. By accounting the permittivity value in maximum (\(\sim 10^{4}\)), the previous interpretation based on the dipole defects [15; 16] is considered critically. It is supposed that a strong dielectric peak can be associated with space charge polarization phenomenon. The possible microscopic mechanisms of the dielectric relaxation are briefly discussed.
Figure 1: The crystal structure of NBT in tetragonal phase [6].
## 2 Experimental results
The NBT ceramics were prepared by the usual sintering technique. The samples for electrical property measurements were cut as plane-parallel plates with edges of about \(5\times 5\times 0.8\) mm\({}^{3}\). Pt electrodes were deposited on the main planes of the samples by the cathode sputtering method. Electrical properties were measured using an AC bridge P 5083 in the temperature interval 300-800 K for the frequency range 0.5-100 kHz. Two types of samples were used: i) prepared from as sintered ceramics and ii) heat treated in vacuum. The regimes of heat treating were the same as those previously used for single crystals (\(T\) = 1070 K, \(t\) = 2 h, \(p\approx 1\) Pa) [16].
The temperature dependencies of dielectric permittivity \(\varepsilon\) and electrical conductivity \(\sigma\) measured on heating for as sintered NBT ceramics are shown in figure 2. In contrast to the data obtained for NBT and NBT-BT single crystals [15, 16], \(\varepsilon(T)\) dependence does not show intense relaxation anomaly and reflects the structural transformations in the range of T\({}_{C}\) and T\({}_{R}\) only (figure 2). The \(\varepsilon(T)\) dependencies measured on the next cooling run and on the subsequent heating-cooling cycles coincide with each other. The inset to figure 2 shows the temperature dependencies of conductivity \(\sigma\) plotted in Arrhenius scale. One can see that \(\sigma\) increases with AC field frequency \(f\) and weakly depends on temperature. This behaviour is typical of dielectrics at relatively low temperatures. Only at frequencies \(f<2\) kHz and for \(T\geq 500\) K conductivity starts to grow exponentially on heating that gives nearly linear regions in the Arrhenius plot. Such a behaviour reflects a growing contribution of thermally activated charge transfer.
Next, the sample of NBT ceramics was heat treated in vacuum, cooled to room temperature and after that its electrical properties were measured. The data obtained are shown in figure 3. One can see that after heat treating, in the range 700 - 780 K, \(\varepsilon(T)\) demonstrates intense maximum (\(\varepsilon_{\max}\sim 5\cdot 10^{4}\), \(f=0.5\) kHz) which is strongly dependent on frequency \(f\). As \(f\) increases, the peak of \(\varepsilon(T)\) sharply decreases in magnitude and shifts to higher temperatures. Similarly to the data observed for NBT and NBT-BT single crystals [15, 16], intense \(\varepsilon(T)\) maximum (figure 3) could be detected for the first heating run only and disappeared for the next cooling and heating runs. Corresponding dependencies of conductivity \(\sigma(1/T)\) are shown in the inset to figure 3. In the low-temperature interval (\(T<500\) K), \(\sigma\) demonstrates nearly the same behaviour as in the untreated sample (the inset to figure 2), but for higher temperatures conductivity shows an intense peak corresponding to the relaxation maximum of \(\varepsilon(T)\). In subsequent temperature runs, the \(\sigma(1/T)\) dependencies did not show contribution from dielectric relaxation and were the same as shown in the inset to figure 2.
Figure 2: The permittivity dependencies \(\varepsilon(T)\) in as sintered NBT ceramics. The AC field frequency was f = 0.5 (1); 0.8 (2); 1 (3); 2 (4); 5 (5); 10 (6); 50 (7); 100 (8) kHz. The inset shows Arrhenius plot of conductivity \(\sigma(1/T)\) dependencies.
## 3 The model
As mentioned in section 2, the dielectric anomaly \(\varepsilon(T)\) similar to the one shown in figure 3 was earlier detected in single crystals of NBT and NBT-BT [15; 16]. Special attention was paid to the nearly symmetrical shape of permittivity peak, that was quite different from the asymmetrical \(\varepsilon(T)\) anomaly of Debye relaxator. It was proposed that dipoles or associated complexes responsible for the dielectric anomaly were thermally destroyed on heating. Such decomposition was noticeable in the temperature range where the dielectric relaxation was detected. Consequently, the high-temperature wing of the \(\varepsilon(T)\) anomaly decreased more sharply.
The first attempt to explain the specific character of the dielectric anomaly (figure 3) was made in [15], where the decrease in the concentration of dipoles or mobile defects was described as a simple exponential temperature decay. The possible role of configurational and vibrational entropy of the dipole defects was considered somewhat later in [14]. Nevertheless, these approaches allowed the data to be interpreted only at the qualitative level and did not provide a correct quantitative description of the experimental results. More accurately, the \(\varepsilon(T,f)\) behavior in NBT-BT single crystal was described in [16], where the Debye relaxator model was combined with the kinetic equation that determines the decay of the polarizing entities with temperature.
The dielectric response of real structures in an external AC field can be described by the Cole-Cole, Davidson-Cole and other models [17]. These models predict different types of dielectric spectra, i.e., symmetrical or non-symmetrical diagrams in the complex (\(\varepsilon^{\prime}\)-\(\varepsilon^{\prime\prime}\)) plane, where \(\varepsilon^{\prime}\) and \(\varepsilon^{\prime\prime}\) represent the real and imaginary parts of permittivity. The dielectric anomaly shown in figure 3 is observed practically in the same temperature-frequency range (\(T>500\) K, \(f<10\) kHz) where charge transfer processes notably contribute to conductivity (the nearly linear regions in the \(\sigma\left(1/T\right)\) dependencies, the inset to figure 2). That is why the experimental diagrams plotted in the (\(\varepsilon^{\prime}\)-\(\varepsilon^{\prime\prime}\)) plane for the frequency region used do not permit a reliable choice between the models mentioned. Hence, the experimental data are described by the Cole-Cole model [17]
\[\varepsilon^{*}(T,\omega)=\varepsilon_{\infty}+\frac{C/T}{1+(\mathrm{i}\omega \tau_{R})^{1-\alpha}}, \tag{3.1}\]
where \(\omega=2\pi f\) is the AC field frequency and \(k\) is the Boltzmann constant. Expression (3.1) includes a minimum number of fitting parameters: \(\varepsilon_{\infty}\) is the high-frequency permittivity; the Curie constant \(C\sim n\) is directly proportional to the concentration \(n\) of the dipoles; \(\tau_{R}(T)=\tau_{R}^{0}\exp(E/kT)\) is the time of the
Figure 3: The dependencies \(\varepsilon(T)\) measured in the first heating run of NBT ceramics previously heat treated in vacuum (1070 K, 2 h). The AC field frequencies are indicated in the caption to figure 2. The solid lines were calculated by using (1), (4). The inset shows corresponding \(\sigma(1/T)\) dependencies.
relaxation of dipole moments in an external field; energy parameter \(E\) estimates the height of the potential barrier which is overcome at the reorientation dipole moments; phenomenological parameter \(0\leqslant\alpha<1\) describes the distribution of relaxation times \(\tau_{R}\) in disordered structures.
It should be noted that for the samples heat treated in vacuum (figure 3), the anomalies of permittivity imaginary part \(\varepsilon^{\prime\prime}(T,\ f)\) contain contributions from dielectric relaxation and charge transfer in the same temperature-frequency range (see the comments above). Hence, the analysis of \(\varepsilon^{\prime\prime}(T,\ f)\) dependences needs to separate these contributions with apriori unknown parameters. The anomalies of permittivity real part \(\varepsilon^{\prime}(T,\ f)\) (figure 3) are mainly contributed by the polarization processes. Thus, the parameters of the discussed dielectric relaxation can be determined more directly from \(\varepsilon^{\prime}(T,\ f)\) dependencies. In addition, non-stable nature of polarization causes a specific type of anomalous behaviour which is more evident just for dependencies \(\varepsilon^{\prime}(T)\). That is why the following analysis is focused on permittivity real part dependencies.
Further, it should be considered that polarizing entities (dipoles, associated complexes) are non-equilibrium and undergo thermal decomposition. One can assume that a decrease of their concentration \(n\) can be described by the simple kinetic equation [18]
\[\frac{\mathrm{d}n}{\mathrm{d}t}=-\frac{n}{\tau_{D}}\,. \tag{3.2}\]
Here, \(\tau_{D}(T)=\tau_{D}^{0}\exp(U/kT)\) and \(U\) are the time and energy parameters determining the thermal decay of non-stable polarization. In (3.2), one can go from differentiation in time to a derivative in temperature by considering that during the experiments, the samples were heated and cooled with a constant rate. Thus, the temperature of the samples can be written as \(T(t)=T_{0}+\gamma t\), where T\({}_{0}\) is an initial temperature; \(\gamma\) is the rate of temperature changes; \(t\) is the current time. Thus, remembering that \(C\sim n\), from equation 3.2 one can rewrite Curie constant as
\[C(T)=C_{0}\cdot\exp\left[-\frac{1}{\gamma\tau_{D}^{0}}\cdot\int\limits_{T_{0} }^{T}\exp\left(-\frac{U}{kT}\right)\mathrm{d}T\right]. \tag{3.3}\]
Direct fitting of expression (3.1), accounting for (3.3), to the experimental data was complicated and did not give reliable results. Nevertheless, an approximate integration of (3.3) performed in [19] yielded the following expression
\[C(T)=C_{0}\cdot\exp\left[-\frac{kT^{2}}{\gamma\tau_{D}^{0}(U+kT)}\exp\left(- \frac{U}{kT}\right)\right]. \tag{3.4}\]
Thus, the experimental data shown in figure 3 can be described using the Cole-Cole formula (3.1) combined with the approximate solution (3.4) of the kinetic equation (3.2).
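To make the fitting model explicit, the sketch below evaluates the real part of (3.1) with the Curie constant decaying according to (3.4). The parameter values are those of table 1 together with the heating rate of figure 5; the conversion of \(\gamma\) from K/min to K/s and the background value \(\varepsilon_{\infty}\) are our assumptions, so this is an illustration of the model rather than a reproduction of the authors' fit.

```python
import numpy as np

k_B = 8.617e-5  # Boltzmann constant, eV/K

def curie_constant(T, C0, U, tau_D0, gamma):
    """Eq. (3.4): Curie constant decaying on heating at a constant rate gamma (K/s)."""
    return C0 * np.exp(-(k_B * T**2) / (gamma * tau_D0 * (U + k_B * T))
                       * np.exp(-U / (k_B * T)))

def permittivity_real(T, f, eps_inf, C0, E, tau_R0, U, tau_D0, gamma, alpha):
    """Real part of Eq. (3.1) with C replaced by the decaying C(T) of Eq. (3.4)."""
    omega = 2.0 * np.pi * f
    tau_R = tau_R0 * np.exp(E / (k_B * T))
    C = curie_constant(T, C0, U, tau_D0, gamma)
    eps = eps_inf + (C / T) / (1.0 + (1j * omega * tau_R) ** (1.0 - alpha))
    return eps.real

# heating run at 1 kHz with the table 1 parameters and gamma = 1.7 K/min
T = np.linspace(500.0, 800.0, 301)
eps = permittivity_real(T, f=1e3, eps_inf=3e3, C0=3.4e7, E=1.21, tau_R0=1.2e-13,
                        U=0.67, tau_D0=1.1e-1, gamma=1.7 / 60.0, alpha=0.09)
```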
## 4 Discussion
It is well known that oxygen vacancies V\({}_{\mathrm{O}}^{\bullet\bullet}\) are the typical defect for the crystals of complex oxides. In tightly packed structures like perovskites ABO\({}_{3}\), the excess positive charge (\(+2e\)) associated with V\({}_{\mathrm{O}}^{\bullet\bullet}\) more probably can be compensated by the necessary number of cationic vacancies, which gives rise to the appearance of Schottky-type defects. If the concentration of V\({}_{\mathrm{O}}^{\bullet\bullet}\) is too high with respect to the number of cation vacancies, the additional electronic defects appear and the valence of the cations neighboring the V\({}_{\mathrm{O}}^{\bullet\bullet}\) can decrease. In ABO\({}_{3}\) structures, the weakly bound electrons can be localized on titanium ions and as a result, Ti\({}_{\mathrm{T_{T_{T}}}}^{\prime}\) centers can arise [20, 21, 22]. The energy levels of Ti\({}_{\mathrm{T_{T_{T}}}}^{\prime}\) centers are shallow enough, and the electrons that hop via regular titanium ions can participate in the charge transfer. The presence of the nearest neighboring V\({}_{\mathrm{O}}^{\bullet\bullet}\) stabilizes the localized electrons and as a result, associated pairs \((\)Ti\({}_{\mathrm{T_{T}}}^{\prime}\)-V\({}_{\mathrm{O}}^{\bullet\bullet})^{\bullet}\) can arise [20].
One can expect that thermal treatment of NBT ceramics in vacuum (\(T\) = 1070 K) should increase mainly the concentration of V\({}_{\mathrm{O}}^{\bullet\bullet}\). Each V\({}_{\mathrm{O}}^{\bullet\bullet}\) that arose in the treated ceramics could cause the emergence of
two \(\mathrm{Ti}^{\prime}_{\mathrm{T_{1}}}\) centers. Correspondingly, the appearance of intense \(\varepsilon(T)\) anomaly after heat treatment (figure 3) can be just associated with the defects formed by \(\mathrm{V}^{\bullet\bullet}_{\mathrm{O}}\). That is why in the previous works [9; 14; 15; 16] a slow dielectric relaxation in NBT single crystals was attributed to re-orientations of \((\mathrm{Ti}^{\prime}_{\mathrm{T_{1}}}\)-\(\mathrm{V}^{\bullet\bullet}_{\mathrm{O}})^{\bullet}\) dipoles resulting from hopping of \(\mathrm{V}^{\bullet\bullet}_{\mathrm{O}}\) through oxygen octahedra vertices. Thermal decay of polarization (3.2) was interpreted as a result of disassociation of \((\mathrm{Ti}^{\prime}_{\mathrm{T_{1}}}\)-\(\mathrm{V}^{\bullet\bullet}_{\mathrm{O}})^{\bullet}\) centers occurring on heating. Nevertheless, a great value of permittivity in the peak (\(\sim 5\cdot 10^{4}\) at \(f=0.5\) kHz, see figure 3) can be hardly attributed to the dipole defects, the concentration of which is assumed to be low enough. More probably, such a high value of \(\varepsilon\) can be the result of space charge polarization which is often observed in inhomogeneous media. Usually, permittivity of such substances is defined as an effective one.
Let us consider more in detail the assumption that intense \(\varepsilon(T)\) peak in figure 3 is contributed by mobile charge defects which in an external electric field can accumulate near certain inhomogeneities.
Earlier in [9; 14; 15; 16] we supposed that reorientation of the dipole complexes \((\mathrm{Ti}^{\prime}_{\mathrm{T_{1}}}\)-\(\mathrm{V}^{\bullet\bullet}_{\mathrm{O}})^{\bullet}\) in an external field and their thermal dissociation on heating occurred through \(\mathrm{V}^{\bullet\bullet}_{\mathrm{O}}\) hopping. That is why calculating the \(\varepsilon(T,f)\) dependencies by using (3.1), (3.4), we associated the pre-exponential factors \(\tau^{0}_{R}\), \(\tau^{0}_{D}\) for the relaxation times with inverse Debye frequency. Correspondingly, the values of \(\tau^{0}_{R}\), \(\tau^{0}_{D}\) were fixed (\(\approx 2\cdot 10^{-13}\) s) and estimated from Debye temperatures (\(\theta\approx 260\)-\(350\) K) typical of perovskites [23].
Assuming the space charge polarization to be the main effect, we have no apriori information on the values of \(\tau^{0}_{R}\), \(\tau^{0}_{D}\) and that is why we set them free. We should add that considering \(\tau^{0}_{R}\), \(\tau^{0}_{D}\) as the fitting parameters allowed to reduce by about an order of magnitude the mean square deviation of the calculated data from the experimental ones. The curves calculated with the help of the model discussed in section 3, are drawn in figure 3 with the solid lines. It should be noted that the background contribution to \(\varepsilon(T)\) due to the structural phase transitions was taken into account as it was described earlier in [9; 14; 15; 16]. In the scale chosen, this contribution shows only a weak dependency on temperature and in pure form can be seen in figure 2 where intense \(\varepsilon(T)\) peak is absent. The values of the parameters, used in (3.1), (3.4) and averaged for all studied frequencies, are presented in table 1. One can see that fitting the calculated data to the experimental ones, in contrast to the previous assumption in [16], gives strongly different values for the pre-exponential factors \(\tau^{0}_{R}\) and \(\tau^{0}_{D}\). The value of \(\tau^{0}_{R}\) corresponds to the order of typical lattice frequencies. By contrast, the factor \(\tau^{0}_{D}\) for polarization decay is found to be twelve orders longer which corresponds to the infra-low frequency range. Seemingly, such extremely high value of \(\tau^{0}_{D}\) indirectly evidence in favor of space charge polarization mechanism.
The dependencies of dielectric relaxation time \(\tau_{R}(1/T)\) and polarization decay time \(\tau_{D}(1/T)\), calcu
Figure 4: The dependencies of the dielectric relaxation time \(\tau_{R}(1/T)\) and polarization decay time \(\tau_{D}(1/T)\) calculated from the data given in table 1.
lated with the help of the data in table 1, are shown in figure 4. One can see that for the whole studied interval, \(\tau_{D}\) values considerably exceed the typical time (\(\sim 1\) s) of a single measurement at certain \(T\) and \(f\). On the other hand, \(\tau_{D}\) values are comparable with the time (\(\sim 3-4\) h) of a single measuring run.
One can show that the model (section 3) combining Cole-Cole formulae (3.1) with kinetic equation (3.2) allows one to describe the specific features of the dielectric relaxation discussed. Thus, (3.4) predicts that the form of the dielectric anomaly \(\varepsilon(T,f)\) should depend on time and on heating rate \(\gamma\). Obviously, such effects can be tested in experiment. Besides, one can expect that the behaviour of \(\varepsilon(T,f)\) should depend on the ratio between the rates of dielectric relaxation \(\tau_{R}^{-1}\) and polarization decay \(\tau_{D}^{-1}\). Really, the anomaly \(\varepsilon(T,f)\) takes a specific form (figure 3) since the decay of non-equilibrium polarization becomes notable in the same temperature range where dielectric peak is detected. Thus, one can expect that the type of dielectric anomaly for certain \(\tau_{R}^{0}\), \(\tau_{D}^{0}\) values should depend on the ratio between activation energies \(U/E\). Let us consider the effects mentioned.
Figure 5 shows the \(\varepsilon(T)\) anomaly calculated for different \(\gamma\) values. One can see how considerably the variations of \(\gamma\) can change the \(\varepsilon(T)\) behaviour. For high values of \(\gamma\) during the whole measuring cycle, the decay of non-equilibrium polarization remains practically negligible. Correspondingly, in the limit \(\gamma\rightarrow\infty\), the behaviour of \(\varepsilon(T)\) approaches a classic behavior of Debye relaxator (figure 5). For the intermediate values of \(\gamma\), the anomaly of \(\varepsilon(T)\) takes the form of nearly symmetrical peak. At infinitely low heating rate (\(\gamma\to 0\)), non-equilibrium polarization has enough time to decay totally before the permittivity peak can be detected. As a result, on lowering \(\gamma\), the permittivity peak decreases in amplitude and finally disappears (figure 5). The inset to figure 5 shows the calculated temperature dependence of Curie constant. On heating, \(C(T)\) shows a step-like decrease manifesting a decay of non-equilibrium polarization. For high rates of \(\gamma\), Curie constant possesses a maximum possible value and is practically temperature independent in the whole interval studied. For low values of \(\gamma\), Curie constant on heating decreases to zero before the dielectric relaxation can be detected.
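Under the same assumptions, the heating-rate dependence discussed above can be reproduced by sweeping \(\gamma\), reusing \(T\) and the permittivity_real function from the sketch given after (3.4):

```python
# Sweep the heating rate gamma (in K/min, converted to K/s) as in figure 5,
# keeping all other parameters at the table 1 values.
for gamma_K_per_min in (0.1, 1.0, 10.0, 1e2, 1e6):
    eps = permittivity_real(T, f=1e3, eps_inf=3e3, C0=3.4e7, E=1.21, tau_R0=1.2e-13,
                            U=0.67, tau_D0=1.1e-1, gamma=gamma_K_per_min / 60.0, alpha=0.09)
    print(gamma_K_per_min, eps.max())  # the anomaly grows toward the Debye limit as gamma increases
```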
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline C\({}_{0}\), K & \(\alpha\) & \(\tau_{R}^{0}\), s & \(E\), eV & \(\tau_{D}^{0}\), s & \(U\), eV \\ \hline \hline
3.4(8) \(\cdot\) 10\({}^{7}\) & 0.09(1) & 1.2(2) \(\cdot\) 10\({}^{-13}\) & 1.21(1) & 1.1(2) \(\cdot\) 10\({}^{-1}\) & 0.67(2) \\ \hline \end{tabular}
\end{table}
Table 1: The values of the parameters in (3.1), (3.4) obtained from the \(\varepsilon(T,\ f)\) dependencies (figure 3).
Figure 5: The contribution from non-stable polarization to dielectric anomaly \(\Delta\varepsilon(T)\). The dashed curves are calculated by using the following heating rates \(\gamma\) = 0.1 (1); 1 (2); 10 (3); 10\({}^{2}\) (4); 10\({}^{6}\) (5) K/min. The parameters in (3.1), (3.4) are taken from processing the experimental data measured at \(f=1\) kHz. The solid line is calculated for the experimental data in figure 3 (\(\gamma\) = 1.7 K/min, \(f\ =\ 1\) kHz). The inset shows the corresponding dependencies of Curie constant \(C(T)\).
Figure 6 illustrates how \(\varepsilon(T)\) anomaly changes its form when the ratio between activation energies \(U/E\) varies. For higher values of \(U/E\), one has a Debye-type behavior of \(\varepsilon\). On lowering the ratio \(U/E\), permittivity \(\varepsilon(T)\) takes the intermediate peak-like form and finally it disappears when the ratio \(U/E\) decreases. The inset to figure 6 shows the corresponding dependencies of Curie constant \(C(T)\).
Assuming that an intense \(\varepsilon(T)\) peak (figure 3) is determined by space charge polarization effects, and for NBT ceramics one can consider the same typical defects such as oxygen vacancies \(\mathbf{V_{O}^{\bullet\bullet}}\), electrons localized on titanium \(\mathrm{T_{T_{T}}^{\prime}}\) and probably associated complexes based on them. \(\mathbf{V_{O}^{\bullet\bullet}}\) can be assumed to be more heavy defects, whereas localized electrons \(\mathrm{T_{T_{T_{T}}^{\prime}}}\) can be supposed to be more light ones. The mobile charge defects can accumulate near the following inhomogeneities in NBT ceramics: i) intergrain boundaries; ii) ferroelectric or ferroelastic domains boundaries; iii) near-electrode regions. Presumably, one can expect that during long enough period the oxygen vacancies can accumulate near certain inhomogeneities and form the regions with higher \(\mathbf{V_{O}^{\bullet\bullet}}\) concentration. In the applied electric field, electrons \(\mathrm{T_{T_{T_{T}}^{\prime}}}\) move between the regions with increased \(\mathbf{V_{O}^{\bullet\bullet}}\) concentration. On heating, due to diffusion, the regions with high \(\mathbf{V_{O}^{\bullet\bullet}}\) concentration dissolve. More information on the nature of the inhomogeneities which can cause space charge polarization in NBT can be obtained from comparison of the experimental data measured for ceramic and single crystalline NBT. This work is in progress at the moment.
## 5 Summary
An intense low frequency anomaly of dielectric permittivity appeared in NBT ceramics after heat treating in vacuum (\(T=1070\) K, \(t=2\) h). The corresponding polarization was found to be non-stable and disappeared after heating in air up to \(\sim 800\) K. The results of the thermal treatment evidenced that the observed dielectric relaxation was contributed by the defects including oxygen vacancies. Dielectric maxima were detected in the same temperature-frequency range where charge transfer processes gave notable contribution to conductivity in AC field. That is why we could not examine reliably the type of the experimental diagrams plotted in the complex plane of permittivity. The temperature and frequency dependencies of \(\varepsilon\) were described on the basis of Cole-Cole model which could be used to describe dielectric relaxation in partially disordered structures. Thermal decay of the non-equilibrium polarization was described using the simple kinetic equation. The analysis was focused on the behaviour of permittivity real part which was mainly contributed by the polarization processes. Combination of Cole-Cole model with kinetic equation allowed us to describe the experimental data with a good accuracy and to predict
Figure 6: The dielectric anomaly \(\Delta\varepsilon(T)\) (dashed lines) calculated for the following ratios \(U/E\) = 0.4 (1); 0.5 (2); 0.6 (3); 0.7 (4); 0.8 (5); 1.2 (6). The value of \(E=1.21\) eV is fixed, and the value of \(U\) is varied. The solid line corresponds to the experimental data in figure 3 (\(U/E\) = 0.55, \(f=1\) kHz). The \(C(T)\) dependencies are shown in the inset.
the evolution of the dielectric anomaly under variations of the experimental conditions and of the characteristics of the phenomena observed. The large value of the permittivity at the maximum (\(\varepsilon_{\rm max}\sim 5\cdot 10^{4}\), \(f=0.5\) kHz), observed for NBT ceramics, cast doubt on the assumption that the dielectric anomaly could be due to dipole defects, whose concentration was assumed not to be extremely high. That is why it was supposed that the observed dielectric relaxation was determined by a space charge polarization mechanism. Oxygen vacancies \(\mathbf{V_{O}^{\bullet\bullet}}\) and electrons localized on titanium ions \(\mathrm{Ti}^{\prime}_{\mathrm{Ti}}\) were assumed to be responsible for the phenomena studied. One can hope that more details on the microscopic mechanism of the thermally non-stable dielectric relaxation can be derived from comparative studies of the electrical properties of NBT single crystals and ceramics treated in atmospheres enriched and depleted in oxygen.
## Acknowledgements
The study was funded by Ministry of Education and Science of Ukraine according to the research projects No. 0119U100694, No. 0120U102239 and No. 0122U001228.
|
2301.06247
|
Circle action of the punctured mapping class group and cross
homomorphism
|
In the following short note, we give a new geometric interpretation of the
generator of the infinite cyclic group
$H^1(\text{Mod}(S_{g,1});H^1(S_g;\mathbb{Z}))$ (this computation is proved by
Morita). There are several construction of this class given by Earle, Morita,
Trapp and Furuta. The construction we give here uses the action of
$\text{Mod}(S_{g,1})$ on the circle and its rotation numbers. We suspect that
our construction is the same as the construction by Furuta and Trapp using
winding numbers and provide half of the proof.
|
Lei Chen
|
2023-01-16T03:37:05Z
|
http://arxiv.org/abs/2301.06247v1
|
# Circle action of the punctured mapping class group and cross homomorphism
###### Abstract.
In the following short note, we give a new geometric interpretation of the generator of the infinite cyclic group \(H^{1}(\operatorname{Mod}(S_{g,1});H^{1}(S_{g};\mathbb{Z}))\) (this computation is due to Morita). There are several constructions of this class, given by Earle, Morita, Trapp and Furuta. The construction we give here uses the action of \(\operatorname{Mod}(S_{g,1})\) on the circle and its rotation numbers. We suspect that our construction is the same as the construction by Furuta and Trapp using winding numbers, and we provide half of the proof.
## 1. The construction and result
Let \(S_{g}\) be a genus \(g\) surface and let \(p\) be a point on \(S_{g}\). When \(g>1\), the universal cover of \(S_{g}\) is the hyperbolic plane \(\mathbb{H}^{2}\). Picking a lift \(\tilde{p}\) of \(p\) in \(\mathbb{H}^{2}\), any diffeomorphism \(\phi\) of \(S_{g}\) fixing \(p\) can be lifted to a diffeomorphism \(\tilde{\phi}\) of \(\mathbb{H}^{2}\) fixing \(\tilde{p}\). Furthermore, the lift \(\tilde{\phi}\) extends to the boundary \(\partial\mathbb{H}^{2}\cong S^{1}\). The boundary action \(\partial(\tilde{\phi})\) depends only on the isotopy class of \(\phi\). The above construction describes the following Gromov boundary action
\[G:\operatorname{Mod}(S_{g,1})\to\operatorname{Homeo}^{+}(S^{1}).\]
For a proof of the above description, see [1, Chapter 5.5]. Note that Mann-Wolff [13] prove that any action of \(\operatorname{Mod}(S_{g,1})\) on the circle is either trivial or semiconjugate to \(G\).
Let \(\widetilde{\operatorname{Homeo}^{+}(S^{1})}\) be the group of homeomorphisms of \(\mathbb{R}\) that commute with the translation by one on \(\mathbb{R}\). Then we have the following short exact sequence.
\[1\to\mathbb{Z}\to\widetilde{\operatorname{Homeo}^{+}(S^{1})}\xrightarrow{q} \operatorname{Homeo}^{+}(S^{1})\to 1 \tag{1}\]
Let \(\operatorname{trans}:\widetilde{\operatorname{Homeo}^{+}(S^{1})}\to\mathbb{R}\) be the translation number, defined by the following limit (which always exists):
\[\operatorname{trans}(f)=\lim_{n\to\infty}\frac{f^{n}(0)}{n}.\]
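For orientation (a remark we add here, not in the original text): for the integer translation \(T_{k}(x)=x+k\) one computes directly \(\operatorname{trans}(T_{k})=\lim_{n\to\infty}nk/n=k\), and more generally \(\operatorname{trans}(f^{n})=n\operatorname{trans}(f)\) and \(\operatorname{trans}(gfg^{-1})=\operatorname{trans}(f)\) for all \(f,g\in\widetilde{\operatorname{Homeo}^{+}(S^{1})}\); that is, \(\operatorname{trans}\) is homogeneous and conjugation-invariant, two properties used repeatedly below.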
Let \(P:\pi_{1}(S_{g})\to\operatorname{Mod}(S_{g,1})\) be the point-pushing homomorphism. Let \(e:F_{2g}\to\pi_{1}(S_{g})\) be the natural homomorphism, where \(F_{2g}\) is generated by \(a_{1},b_{1},...,a_{g},b_{g}\) and the only relation of \(\pi_{1}(S_{g})\) is given by \(c:=[a_{1},b_{1}]...[a_{g},b_{g}]=1\). Even though the homomorphism \(G\circ P\) does not admit a lift under the map \(\operatorname{q}\) to \(\widetilde{\operatorname{Homeo}^{+}(S^{1})}\) in the exact sequence (1), its restriction to the free group, \(G\circ P\circ e\), can be lifted to \(\widetilde{\text{Homeo}^{+}(S^{1})}\). Let
\[\tilde{G}:F_{2g}\to\widetilde{\text{Homeo}^{+}(S^{1})}\]
be a lift of \(G\circ P\circ e\) (which is not unique).
For a group \(H\), an outer-automorphism of \(H\) is an automorphism of \(H\) up to conjugation by an element of \(H\). By the Dehn-Nielsen-Baer theorem for \(\text{Mod}(S_{g,1})\) (see, e.g., [12, Chapter 8]), we know that \(\text{Mod}(S_{g,1})\cong\text{Out}^{*}(F_{2g})\), where \(\text{Out}^{*}(F_{2g})\) denotes the outer-automorphism group of \(F_{2g}\) consisting of elements \(f:F_{2g}\to F_{2g}\) such that \(f(c)\) is conjugate to \(c\). By composing with a conjugation from \(F_{2g}\), any element in \(\text{Out}^{*}(F_{2g})\) has a representative \(f:F_{2g}\to F_{2g}\) such that \(f(c)=c\) (such an \(f\) is still not unique, but any two choices differ by conjugation by an element of the centralizer of \(c\), which is a power of \(c\)).
We now define a map
\[R:\text{Mod}(S_{g,1})\to\text{Map}(F_{2g},\mathbb{Z}).\]
Let \(\phi\in\text{Mod}(S_{g,1})\) and \(f:F_{2g}\to F_{2g}\) be a representative of \(\phi\) such that \(f(c)=c\). Then the definition of \(R\) is given by the following formula
\[R(\phi)(\gamma)=\text{trans}(\tilde{G}(f(\gamma)))-\text{trans}(\tilde{G}( \gamma))\]
For a group \(H\) and an \(H\)-module \(M\), we call a map \(\rho:H\to M\) a cross homomorphism if
\[\rho(gh)=\rho(g)+g(\rho(h))\]
The main result of this paper is the following.
**Theorem 1.1**.: _The map \(R\) is well-defined, has image in \(\text{Hom}(F_{2g},\mathbb{Z})\cong H^{1}(\pi_{1}(S_{g});\mathbb{Z})\), and \(R\) is a cross homomorphism. Furthermore, as a cohomology class, \([R]\) is a generator of \(H^{1}(\text{Mod}(S_{g,1});H_{1}(\pi_{1}(S_{g});\mathbb{Z}))\), and it is equal to the class defined by Morita [10]._
Notice that Earle [1], Morita [10], Trapp [11] and Furuta [10] (described by Morita) give various constructions of the generator of the group
\[H^{1}(\text{Mod}(S_{g,1});H_{1}(\pi_{1}(S_{g});\mathbb{Z}))\cong\mathbb{Z}.\]
Trapp and Furuta use the unit tangent bundle and winding numbers, Earle uses moduli spaces, and Morita gives a combinatorial construction. All of the above definitions seem very different, and it is an interesting question to consider how these constructions interact with each other. Kuno [12] gave an understanding of the difference between Earle's construction and Morita's construction. This cross homomorphism is also related to the Johnson homomorphism, as discussed in Morita [10].
The second result is to relate our construction with that of Trapp [11] and Furuta [10] (described by Morita). Let \(D\subset S_{g}\) be an open disk. Let \(X\) be a nowhere zero vector field on \(S_{g}-D\). Then there is a winding number map
\[\omega_{X}:\pi_{1}(S_{g}-D)=F_{2g}\to\mathbb{Z}\]
given by how \(X\) rotates along a smooth representative of a curve in \(\pi_{1}(S_{g}-D)\).
For any map \(h:F_{2g}\to\mathbb{Z}\), we can define its defect
\[D(h):F_{2g}\times F_{2g}\to\mathbb{Z}\]
as \(D(h)(a,b)=h(ab)-h(a)-h(b)\), which measures how far \(h\) is from a homomorphism. We ask the following question.
**Problem 1.2**.: Do we have \(D(\omega_{X})=D(\operatorname{trans}\circ\tilde{G})\)? Moreover, does there exist a nowhere zero vector field \(X\) on \(S_{g}-D\) such that \(\omega_{X}=\operatorname{trans}\circ\tilde{G}\)?
The positive answer of the first question implies the positive answer of the second question. We also prove the following.
**Theorem 1.3**.: _For \(\alpha,\beta\) which generate a free group of rank \(2\) whose corresponding cover \(U\) of \(S_{g,1}\) is a once-punctured torus, we have that_
\[D(\omega_{X})(\alpha,\beta)=D(\operatorname{trans}\circ\tilde{G})(\alpha,\beta )=0.\]
**Acknowledgement.** We thank Bena Tshishiku for introducing us to the paper of Morita.
## 2. The proof of Theorem 1.1
**Step 1: \(R\) is well-defined, has image in \(\operatorname{Hom}(F_{2g},\mathbb{Z})\cong H^{1}(\pi_{1}(S_{g});\mathbb{Z})\).**
Let \(\phi\in\operatorname{Mod}(S_{g,1})\). To prove that \(R(\phi)\) is well-defined, we need to prove that it does not depend on the choice of \(f:F_{2g}\to F_{2g}\) representing \(\phi\in\operatorname{Out}^{*}(F_{2g})\) such that \(f(c)=c\). This comes from the fact that a different choice is given by composing \(f\) with conjugation by a power of \(c\), and \(\tilde{G}(c)\) is a translation by an integer. Conjugation by an integer translation has no effect on the translation number, since translations by integers commute with every element of \(\widetilde{\operatorname{Homeo}^{+}(S^{1})}\).
We now show that \(R\) has image in \(\operatorname{Hom}(F_{2g},\mathbb{Z})\cong H^{1}(\pi_{1}(S_{g});\mathbb{Z})\).
**Claim 2.1**.: Let \(\phi\in\operatorname{Mod}(S_{g,1})\) and \(\alpha,\beta\in F_{2g}\), then
\[R(\phi)(\alpha\beta)=R(\phi)(\alpha)+R(\phi)(\beta)\]
Proof.: Let \(f:F_{2g}\to F_{2g}\) be a representative of \(\phi\in\operatorname{Out}^{*}(F_{2g})\) such that \(f(c)=c\).
We have the following canonical Euler cocycle
\[\tau:\operatorname{Homeo}^{+}(S^{1})\times\operatorname{Homeo}^{+}(S^{1})\to \mathbb{Z}\]
given by
\[\tau(f,g):=\operatorname{trans}(\tilde{f}\tilde{g})-\operatorname{trans}(\tilde {f})-\operatorname{trans}(\tilde{g}),\]
where \(\tilde{f}\) and \(\tilde{g}\) are lifts of \(f,g\) in \(\widetilde{\operatorname{Homeo}^{+}(S^{1})}\). The value of \(\tau\) is in \(\{-1,0,1\}\).
By Matsumoto [14, Theorem 3.3], \(\tau\) is the same as a geometric cocycle
\[\theta:\pi_{1}(S_{g})\times\pi_{1}(S_{g})\to\mathbb{Z}\]
when restricted to the Fuchsian representation \(\pi_{1}(S_{g})\to\operatorname{Homeo}^{+}(S^{1})\). The cocycle \(\theta\) is given by the following rule: suppose that the subgroup \(F\) generated by \(\alpha,\beta\) is a free group on two generators, and consider the covering \(U\to S_{g}\) associated with \(F\); then \(U\) is either a punctured torus or a pair of pants. If \(U\) is a pair of pants and \(\alpha\) and \(\beta\) are represented by two boundary curves which match (resp. oppose) the orientation of \(U\), we define \(\theta(\alpha,\beta)=-1\) (resp. \(1\)). In all other cases, define \(\theta(\alpha,\beta)=0\).
The formula \(\theta(\alpha,\beta)\) is equivariant under the action of the mapping class group \(\operatorname{Mod}(S_{g,1})\). Thus from the identification of \(\theta\) and \(\tau\), we obtain that
\[\tau(\phi(\alpha),\phi(\beta))=\tau(\alpha,\beta).\]
This implies that
\[\begin{split}&\operatorname{trans}(\tilde{G}(\alpha)\tilde{G}(\beta))-\operatorname{trans}(\tilde{G}(\alpha))-\operatorname{trans}(\tilde{G}(\beta))\\ =&\operatorname{trans}(\tilde{G}(f(\alpha))\tilde{G}(f(\beta)))-\operatorname{trans}(\tilde{G}(f(\alpha)))-\operatorname{trans}(\tilde{G}(f(\beta))),\\ \text{and hence}\quad&\operatorname{trans}(\tilde{G}(f(\alpha))\tilde{G}(f(\beta)))-\operatorname{trans}(\tilde{G}(\alpha)\tilde{G}(\beta))\\ =&\operatorname{trans}(\tilde{G}(f(\alpha)))-\operatorname{trans}(\tilde{G}(\alpha))+\operatorname{trans}(\tilde{G}(f(\beta)))-\operatorname{trans}(\tilde{G}(\beta)).\end{split} \tag{2}\]
Since \(\tilde{G}\) is a homomorphism, \(\tilde{G}(f(\alpha))\tilde{G}(f(\beta))=\tilde{G}(f(\alpha\beta))\) and \(\tilde{G}(\alpha)\tilde{G}(\beta)=\tilde{G}(\alpha\beta)\), so
\[R(\phi)(\alpha\beta)=R(\phi)(\alpha)+R(\phi)(\beta).\qed\]
### Step 2: \(R\) is a cross homomorphism
We need to show that
\[R(\phi\eta)=R(\phi)+\phi^{*}(R(\eta))\]
Let \(f,h:F_{2g}\to F_{2g}\) be representatives of \(\phi,\eta\) such that \(f(c)=h(c)=c\). Then \(h\circ f\) is a representative of \(\phi\eta\) such that \(h\circ f(c)=c\). Then
\[\begin{split}& R(\phi\eta)(\alpha)\\ =&\operatorname{trans}(\tilde{G}(h\circ f(\alpha)))-\operatorname{trans}(\tilde{G}(\alpha))\\ =&\operatorname{trans}(\tilde{G}(h(f(\alpha))))-\operatorname{trans}(\tilde{G}(f(\alpha)))+\operatorname{trans}(\tilde{G}(f(\alpha)))-\operatorname{trans}(\tilde{G}(\alpha))\\ =& R(\eta)(f(\alpha))+R(\phi)(\alpha)\\ =&\phi^{*}R(\eta)(\alpha)+R(\phi)(\alpha)\end{split} \tag{4}\]
This proves what we need.
### Step 3: \([R]\) is a generator of \(H^{1}(\operatorname{Mod}(S_{g,1});H^{1}(S_{g};\mathbb{Z}))\)
For this part, we only need to check the restriction on the point-pushing subgroup \(P:\pi_{1}(S_{g})\to\operatorname{Mod}(S_{g,1})\). This gives a homomorphism
\[R|_{P}:\pi_{1}(S_{g})\to H^{1}(S_{g};\mathbb{Z}),\]
since the point-pushing subgroup acts trivially on \(H^{1}(S_{g};\mathbb{Z})\). The homomorphism \(R|_{P}\) factors through the abelianization \(H_{1}(S_{g};\mathbb{Z})\) of \(\pi_{1}(S_{g})\). Let \(i:\pi_{1}(S_{g})\times\pi_{1}(S_{g})\to\mathbb{Z}\) be the algebraic intersection number; we have the following.
**Claim 2.2**.: We have the following formula
\[R|_{P}(a)(b)=(2-2g)i(a,b).\]
Proof.: We will do this by a calculation. Let \(a_{1},b_{1},...,a_{g},b_{g}\) be the standard generating set of \(\pi_{1}(S_{g})\). The point-pushing \(P(a_{1})\) has the following representative \(f:F_{2g}\to F_{2g}\) such that
\[a_{1}\to a_{1},b_{1}\to a_{1}^{-1}cb_{1}a_{1},a_{j}\to a_{1}^{-1}ca_{j}c^{-1}a_{ 1},b_{j}\to a_{1}^{-1}cb_{j}c^{-1}a_{1}. \tag{5}\]
We obtain the above representative by first finding some representative using the explicit mapping class, and then composing with a conjugation to make sure that \(f(c)=c\).
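As a sanity check (not part of the original argument), the identity \(f(c)=c\) for the representative in (5) can be verified mechanically by free reduction in \(F_{2g}\); the short Python sketch below, with our own string encoding of the generators, does this for small \(g\).

```python
# Minimal sketch: verify symbolically that the representative f in (5) fixes the
# relator c = [a_1,b_1]...[a_g,b_g], using free reduction of words in F_{2g}.
# Generators are encoded as strings "a1", "b1", ... and inverses carry a trailing "'".

def inv(word):
    return [x[:-1] if x.endswith("'") else x + "'" for x in reversed(word)]

def reduce_word(word):
    out = []
    for x in word:
        if out and out[-1] == inv([x])[0]:
            out.pop()                      # cancel adjacent inverse letters
        else:
            out.append(x)
    return out

def comm(u, v):                            # the commutator [u, v] = u v u^{-1} v^{-1}
    return u + v + inv(u) + inv(v)

def check(g):
    a = [["a%d" % i] for i in range(1, g + 1)]
    b = [["b%d" % i] for i in range(1, g + 1)]
    c = []
    for i in range(g):
        c += comm(a[i], b[i])
    # the representative f of the point-push P(a_1) from (5)
    f = {tuple(a[0]): a[0], tuple(b[0]): inv(a[0]) + c + b[0] + a[0]}
    for j in range(1, g):
        f[tuple(a[j])] = inv(a[0]) + c + a[j] + inv(c) + a[0]
        f[tuple(b[j])] = inv(a[0]) + c + b[j] + inv(c) + a[0]
    def apply_f(word):
        out = []
        for x in word:
            img = f[(x.rstrip("'"),)]
            out += img if not x.endswith("'") else inv(img)
        return reduce_word(out)
    return apply_f(c) == reduce_word(c)

print([check(g) for g in range(1, 5)])     # expected: [True, True, True, True]
```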
Now we can easily obtain that \(R(P(a_{1}))(a_{1})=0\), \(R(P(a_{1}))(b_{1})=2-2g\) and \(R(P(a_{1}))(a_{j})=0,R(P(a_{1}))(b_{j})=0\) for \(1<j\leq g\). Thus we obtain that
\[R(P(a_{1}))(b)=(2-2g)i(a_{1},b).\]
Let \(c\) be an element in \(H_{1}(S_{g};\mathbb{Z})\) represented as a non-separating simple closed curve. Then there is an element \(\phi\in\operatorname{Mod}(S_{g,1})\) such that \(\phi(a_{1})=c\). Then we have that
\[R(\phi^{-1}P(a_{1})\phi)=R(P(c))\]
By cross homomorphism of \(R\), we know that
\[R(\phi^{-1}P(a_{1})\phi)=R(\phi^{-1}P(a_{1}))+(\phi^{-1}P(a_{1}))^{*}R(\phi)=R(\phi^{-1})+(\phi^{*})^{-1}(R(P(a_{1})))+(\phi^{*})^{-1}(R(\phi))\]
Since \(R(\phi^{-1}\phi)=R(id)=0\), we know that \(R(\phi^{-1})+(\phi^{*})^{-1}(R(\phi))=0\).
Thus we obtain that \(R(P(c))=(\phi^{*})^{-1}(R(P(a_{1})))\), then we have
\[R(P(c))(x)=(\phi^{*})^{-1}(R(P(a_{1})))(x)=R(P(a_{1}))((\phi_{*})^{-1}(x))=(2-2g)\,i(a_{1},(\phi_{*})^{-1}(x))=(2-2g)\,i(\phi_{*}(a_{1}),x)=(2-2g)\,i(c,x)\qed\]
Then by Morita [14, Section 4], we know that \(R\) as a cohomology class in \(H^{1}(\operatorname{Mod}(S_{g,1});H^{1}(S_{g};\mathbb{Z}))\) is the same as Morita's class.
## 3. Open questions
We first identify the defects of two winding numbers. The following claim implies that if the first question in Problem 1.2 has a positive answer, so does the second question.
**Claim 3.1**.: Let \(X,Y\) be two different nowhere zero vector fields on \(S_{g}-D\). Then \(D(\omega_{X})=D(\omega_{Y})\). The defect \(D(\operatorname{trans}\circ\tilde{G})\) also does not depend on the choice of the lift \(\tilde{G}\).
Proof.: We have that \(\omega_{X}-\omega_{Y}:F_{2g}\to\mathbb{Z}\) is a homomorphism, which is proved by Trapp [11, Section 1.3]. Thus we have that
\[\omega_{X}(ab)-\omega_{Y}(ab)=\omega_{X}(a)-\omega_{Y}(a)+\omega_{X}(b)-\omega _{Y}(b).\]
A rearrangement gives us the result. The proof for the defect \(D(\operatorname{trans}\circ\tilde{G})\) is similar.
We now prove Theorem 1.3.
Proof.: We now prove that if \(\alpha,\beta\) generate a free group of rank \(2\) such that the cover determined by \(\langle\alpha,\beta\rangle\) is a once-punctured torus, then
\[D(\omega_{X})(\alpha,\beta)=0.\]
We can choose \(X\) to be a nowhere zero vector field on the whole torus. Then, since every loop can be straightened on the torus, we know that the winding number \(\omega_{X}(\gamma)=0\) for any \(\gamma\). The fact that \(D(\operatorname{trans}\circ\tilde{G})(\alpha,\beta)=0\) is given by Matsumoto [13, Theorem 3.3].
In Morita's construction, he constructed a map \(f:F_{2g}\to\mathbb{Z}\) such that
\[D(f)(a,b)=f(ab)-f(a)-f(b)=\langle a,b\rangle,\]
where \(\langle a,b\rangle\) denotes the algebraic intersection number. Then he defines the cross homomorphism in the same way, as follows:
\[C_{f}(\phi)(a)=f(\phi(a))-f(a).\]
Whether or not \(C_{f}\) is a cross homomorphism depends only on \(D(f)\): it suffices that \(D(f)\) be invariant under \(\operatorname{Mod}(S_{g,1})\) and satisfy an associativity equation. We wonder about the following.
**Question 3.2**.: Other than \(f\), \(\omega_{X}\) and \(\operatorname{trans}\circ\tilde{G}\) (possibly the same as \(\omega_{X}\)), do we have other functions \(f:F_{2g}\to\mathbb{Z}\) such that \(C_{f}\) is a cross homomorphism?
We also would like to consider the difference between \(C_{f}\) and \(R\) as in Kuno [11].
**Question 3.3**.: What's the coboundary of \(C_{f}-R\)?
|
2302.07409
|
Quantum Learning Theory Beyond Batch Binary Classification
|
Arunachalam and de Wolf (2018) showed that the sample complexity of quantum
batch learning of boolean functions, in the realizable and agnostic settings,
has the same form and order as the corresponding classical sample complexities.
In this paper, we extend this, ostensibly surprising, message to batch
multiclass learning, online boolean learning, and online multiclass learning.
For our online learning results, we first consider an adaptive adversary
variant of the classical model of Dawid and Tewari (2022). Then, we introduce
the first (to the best of our knowledge) model of online learning with quantum
examples.
|
Preetham Mohan, Ambuj Tewari
|
2023-02-15T00:22:44Z
|
http://arxiv.org/abs/2302.07409v4
|
# Quantum Learning Theory Beyond Batch Binary Classification
###### Abstract
Arunachalam and de Wolf (2018) showed that the sample complexity of quantum batch learning of boolean functions, in the realizable and agnostic settings, has the _same form and order_ as the corresponding classical sample complexities. In this paper, we extend this, ostensibly surprising, message to batch multiclass learning, online boolean learning, and online multiclass learning. For our online learning results, we first consider an adaptive adversary variant of the classical model of Dawid and Tewari (2022). Then, we introduce the first (to the best of our knowledge) model of online learning with quantum examples.
## 1 Introduction
Ever since Bshouty and Jackson (1995)'s formalization of a quantum example, several works (Servedio and Gortler, 2004; Atici and Servedio, 2005; Zhang, 2010), culminating in Arunachalam and de Wolf (2018), have provided sample complexity bounds for quantum batch learning of boolean functions. In Arunachalam and de Wolf (2018), the message was crystallized:
1. There is no new combinatorial dimension needed to characterize quantum batch learnability of boolean functions, namely the VC dimension continues to do so.
2. There is _at most a constant_ sample complexity advantage for quantum batch learning of boolean functions, in both the realizable and agnostic settings, as compared to the corresponding classical sample complexities.
In this paper, we show that this, ostensibly surprising, message continues to hold in three other learning settings: batch learning of multiclass functions, online learning of boolean functions, and online learning of multiclass functions.
Our motivation for considering quantum batch learning of multiclass functions is an open question in Arunachalam and de Wolf (2018) which asks "what is the quantum sample complexity for learning concepts whose range is \([k]\) rather than \(\{0,1\}\), for some \(k>2\)?" We resolve this question for \(2<k<\infty\) (see Section 3). In classical multiclass batch learning, an approach to establish the lower and upper sample complexity bounds (Daniely et al., 2015) is to proceed via a reduction to the binary case, with an appeal to the definition of Natarajan dimension. While classically straightforward, extending such a proof approach to establish sample complexity bounds for quantum multiclass batch learning involves manipulating quantum examples, which has to be done with utmost care (see Appendix A).
Unlike the batch setting, quantum online learning of classical functions, to the best of our knowledge, has no predefined model. One possible explanation is that we need, as an intermediary, a new classical online learning model (see Section 4.2) where, at each round, the adversary provides a distribution over the example (input-label) space instead of a single example. With this new
classical model, and the definition of a quantum example, a model for online learning in the quantum setting arises as a natural extension (see Figure 1).
### Our Contributions
In Tables 1 and 2, we summarize the pre-existing results1, and outline our contributions, for batch and online learning of boolean and multiclass functions, in the realizable and agnostic settings, in the classical and quantum paradigms. In particular, our contributions are
Footnote 1: We state results that have the tightest dependence on the combinatorial parameters that characterize learning in the respective settings. In particular, in the Batch+Multiclass+Realizable case, an upper bound with a tighter dependence on \(\epsilon\) (but looser on \(\operatorname{Ndim}(\mathcal{H})\)) exists in both (see Appendix B).
* establishing lower and upper sample complexity bounds for quantum batch multiclass classification in the realizable and agnostic settings,
* proposing a new classical online learning model, which is an adaptive adversary variant of an existing classical online learning model (Dawid and Tewari, 2022),
* proposing a quantum online learning model, as a natural generalization of our proposed classical online learning model,
* establishing lower and upper regret bounds for quantum online binary classification in the realizable and agnostic settings, and
* establishing lower and upper regret bounds for quantum online multiclass classification in the realizable and agnostic settings.
| | | Classical | Quantum |
| --- | --- | --- | --- |
| Boolean | Realizable | \(\Theta\Big(\frac{\text{VCdim}(\mathcal{H})+\log(\frac{1}{\delta})}{\epsilon}\Big)\) Blumer et al. (1989); Hanneke (2016) | \(\Theta\Big(\frac{\text{VCdim}(\mathcal{H})+\log(\frac{1}{\delta})}{\epsilon}\Big)\) Arunachalam and de Wolf (2018) |
| Boolean | Agnostic | \(\Theta\Big(\frac{\text{VCdim}(\mathcal{H})+\log(\frac{1}{\delta})}{\epsilon^{2}}\Big)\) Kearns et al. (1992); Talagrand (1994) | \(\Theta\Big(\frac{\text{VCdim}(\mathcal{H})+\log(\frac{1}{\delta})}{\epsilon^{2}}\Big)\) Arunachalam and de Wolf (2018) |
| Multiclass | Realizable | \(\Omega\Big(\frac{\text{Ndim}(\mathcal{H})+\log(\frac{1}{\delta})}{\epsilon}\Big)\) Natarajan (1989); \(\mathcal{O}\Big(\frac{\text{Ndim}(\mathcal{H})\log(k)\log(\frac{1}{\epsilon})+\log(\frac{1}{\delta})}{\epsilon}\Big)\) Daniely et al. (2015) | \(\Omega\Big(\frac{\text{Ndim}(\mathcal{H})+\log(\frac{1}{\delta})}{\epsilon}\Big)\) (Thm. 3.2); \(\mathcal{O}\Big(\frac{\text{Ndim}(\mathcal{H})\log(k)\log(\frac{1}{\epsilon})+\log(\frac{1}{\delta})}{\epsilon}\Big)\) (Thm. B.1) |
| Multiclass | Agnostic | \(\Omega\Big(\frac{\text{Ndim}(\mathcal{H})+\log(\frac{1}{\delta})}{\epsilon^{2}}\Big)\), \(\mathcal{O}\Big(\frac{\text{Ndim}(\mathcal{H})\log(k)+\log(\frac{1}{\delta})}{\epsilon^{2}}\Big)\) Ben-David et al. (1995) | \(\Omega\Big(\frac{\text{Ndim}(\mathcal{H})+\log(\frac{1}{\delta})}{\epsilon^{2}}\Big)\) (Thm. 3.2); \(\mathcal{O}\Big(\frac{\text{Ndim}(\mathcal{H})\log(k)+\log(\frac{1}{\delta})}{\epsilon^{2}}\Big)\) (Thm. B.1) |

Table 1: An overview of sample complexity results for batch learning in classical and quantum paradigms. Our novel contributions are the entries referencing theorems of this paper.
Figure 1: Mapping of the tools necessary for generalizations of learning paradigms from classical to quantum.
## 2 Preliminaries
### Notation
In the bra-ket (Dirac) notation, a ket, \(\ket{x}\), denotes a column vector in a complex vector space with an inner product (i.e. a Hilbert space2). It is used primarily in the context of describing the state of a quantum system (e.g. see Definition 2.1). A bra \(\langle\cdot|\) is the dual of the ket, in that \(\langle x|=\ket{x}^{\dagger}\), where the \(\dagger\) operator denotes the conjugate transpose. Typically, the bra notation is used for operators \(\langle M|\) (e.g. measurement operators) acting on a ket. This notation lends itself naturally to the notion of inner product \(\langle x|x\rangle=\lVert x\rVert^{2}\), and matrix-vector multiplication \(\langle M|x\rangle\). Furthermore, note that \(\ket{x,y}\) denotes the tensor product \(\ket{x}\otimes\ket{y}\). The comma may be omitted, and we have numerous equivalent notations for the tensor product, summarized via the example \(\ket{00}=\ket{0,0}=\ket{0}\ket{0}=\ket{0}\otimes\ket{0}=\ket{0}^{\otimes 2}\).
Footnote 2: While, in context, we are only concerned with finite-dimensional vectors, mathematically speaking, Hilbert spaces are more general in that they could be infinite-dimensional.
### Quantum Computing Basics
Analogous to how a classical bit (_bit_) is a unit of classical information, a quantum bit (_qubit_) is a unit of quantum information. The difference between the two is best illustrated by considering how each is realized. A bit is realized via the expectation value of a physical property of a system (e.g. voltage across an element in an electric circuit). If the value is higher than a certain threshold, the bit assumes the value 1. Otherwise, it assumes the value 0. Thus, a bit carries the information equivalent of its namesake, a binary digit.
A qubit, on the other hand, is realized as a two-level quantum system3. For example, spin (up, down) of an electron, polarization (horizontal, vertical) of a photon, and, more practically speaking (Kjaergaard et al., 2020), discrete energy levels (ground, excited) of an ion. So, it is governed by the postulates of quantum mechanics (Nielsen and Chuang, 2002). By the first postulate, we know that the _state space_ of a qubit is a 2-dimensional complex vector space, i.e. \(\mathbb{C}^{2}\). The first postulate further provides us with a description of what the _state vector_ must look like, which is summarized in the definition below (Definition 2.1).
Footnote 3: This can be attributed to quantum computing being ideated with a vision of simulating quantum physics (Feynman, 1982). The laws of physics determine the kinds of computation that can be done. But, can computing be powerful enough to describe and simulate the laws of physics? What if we compute using a physical system?
**Definition 2.1** (Qubit).: _A single (isolated) qubit is described by a state vector \(\ket{\psi}\), which is a unit
| | | Classical | Quantum |
| --- | --- | --- | --- |
| Boolean | Realizable | \(\Theta(\mathrm{Ldim}(\mathcal{H}))\) Littlestone (1988) | \(\Omega(\mathrm{Ldim}(\mathcal{H}))\) (Thm. 5.1); \(\mathcal{O}(\mathrm{Ldim}(\mathcal{H})+\log\log T)\) (Thm. 5.2) |
| Boolean | Agnostic | \(\Theta(\sqrt{\mathrm{Ldim}(\mathcal{H})\,T})\) Ben-David et al. (2009); Alon et al. (2021) | \(\Omega(\sqrt{\mathrm{Ldim}(\mathcal{H})\,T})\) (Thm. 5.1); \(\mathcal{O}(\sqrt{\mathrm{Ldim}(\mathcal{H})\,T\log T})\) (Thm. 5.3) |
| Multiclass | Realizable | \(\Theta(\mathrm{mcLdim}(\mathcal{H}))\) Daniely et al. (2015) | \(\Omega(\mathrm{mcLdim}(\mathcal{H}))\) (Thm. 5.5); \(\mathcal{O}(\mathrm{mcLdim}(\mathcal{H})+\log\log T)\) (Thm. 5.6) |
| Multiclass | Agnostic | \(\Omega(\sqrt{\mathrm{mcLdim}(\mathcal{H})\,T})\), \(\mathcal{O}(\sqrt{\mathrm{mcLdim}(\mathcal{H})\,T\log(Tk)})\) Daniely et al. (2015) | \(\Omega(\sqrt{\mathrm{mcLdim}(\mathcal{H})\,T})\) (Thm. 5.5); \(\mathcal{O}(\sqrt{\mathrm{mcLdim}(\mathcal{H})\,T\,k\log T\log k})\) (Thm. 5.7) |

Table 2: An overview of expected regret bounds for online learning in classical and quantum paradigms. Our novel contributions are the entries referencing theorems of this paper.
vector in the state space \(\mathbb{C}^{2}\). Mathematically,_
\[\left|\psi\right\rangle=\alpha\left|0\right\rangle+\beta\left|1\right\rangle, \quad\alpha,\beta\in\mathbb{C},\quad\left|\alpha\right|^{2}+\left|\beta\right|^ {2}=1,\]
_where \(\left|0\right\rangle=\begin{bmatrix}1\\ 0\end{bmatrix}\) and \(\left|1\right\rangle=\begin{bmatrix}0\\ 1\end{bmatrix}\) are basis vectors for the state space._
So, although we have two basis states (much as we did for the classical bit), the qubit is allowed to be in a (complex) superposition of the two, whereas a classical bit must deterministically be in one of the basis states. Now, if we are working with several qubits, then the fourth postulate tells us that the state space of this composite system is the tensor product of the state spaces of its components.
**Definition 2.2** (System of \(n\) qubits).: _The joint state of the composite system formed by \(n\) qubits, each in state \(\left|\psi_{i}\right\rangle,\;i\in\{1,\ldots,n\}\), is given by,_
\[\left|\psi_{1}\right\rangle\otimes\left|\psi_{2}\right\rangle\otimes\cdots \otimes\left|\psi_{n}\right\rangle\in\mathbb{C}^{2^{n}}.\]
As a comparison, the joint state of \(n\) classical bits is described by their Cartesian product. This essential distinction between Cartesian and tensor products is precisely the phenomenon of quantum entanglement, namely the existence of (pure) states of a composite system that are not product states of its parts. Quantum entanglement, alongside superposition, lies at the heart of the intrinsic advantages of quantum computing (Jozsa and Linden, 2003). For a physical system to lend itself to computation, it must be amenable to manipulation. By the second postulate of quantum mechanics, any manipulation of a quantum system is limited to unitary evolution. In particular, _all quantum gates are unitary operators_ and, therefore, can only be used for computation that is reversible. An avenue for irreversible computation, and the only way to obtain classical outputs, in the quantum realm is the notion of a measurement, laid out by the third postulate.
**Definition 2.3** (Measurement).: _Quantum measurements are described by a collection \(\{M_{m}\}\) of measurement operators acting on the state space of the system. The index \(m\) denotes the classical outcome of the measurement. If the quantum system is in the state \(\left|\psi\right\rangle\) before measurement, then the probability that result \(m\) occurs is given by \(p(m)=\left\langle\psi\right|M_{m}^{\dagger}M_{m}\left|\psi\right\rangle\), and the state of the system after the measurement "collapses to" \(M_{m}\left|\psi\right\rangle/\sqrt{p(m)}\). To ensure conservation of total probability, \(\sum_{m}M_{m}^{\dagger}M_{m}=I\) is satisfied. As an example, measurement in the standard basis for a single qubit is provided by measurement operators \(M_{0}=\left|0\right\rangle\left\langle 0\right|\), and \(M_{1}=\left|1\right\rangle\left\langle 1\right|\)._
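As a concrete illustration (ours, not from the paper), the following few lines of Python simulate Definitions 2.1 and 2.3 for a single qubit measured in the standard basis.

```python
# Minimal sketch: standard-basis measurement of |psi> = alpha|0> + beta|1>.
# Outcome m occurs with probability <psi|M_m^† M_m|psi>, and the state collapses
# to M_m|psi>/sqrt(p(m)).
import numpy as np

alpha, beta = 1 / np.sqrt(3), np.sqrt(2 / 3) * 1j      # any unit vector works
psi = np.array([alpha, beta])                           # |psi> in C^2
M = [np.diag([1, 0]), np.diag([0, 1])]                  # M_0 = |0><0|, M_1 = |1><1|

probs = [np.vdot(psi, Mm.conj().T @ Mm @ psi).real for Mm in M]
m = np.random.choice([0, 1], p=probs)
post = M[m] @ psi / np.sqrt(probs[m])
print("p(0), p(1) =", probs, " outcome:", m, " post-measurement state:", post)
```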
### PAC Learning Framework
In the PAC (Probably Approximately Correct) learning model (Valiant, 1984), a learner is provided oracle access to samples \((x,y)\), where \(x\) is sampled from some unknown distribution \(D\) on \(\mathcal{X}\) and \(y=h^{\star}(x)\), for some _target_ hypothesis \(h^{\star}:\mathcal{X}\rightarrow\mathcal{Y}\). We assume that \(h^{\star}\in\mathcal{H}\), where \(\mathcal{H}\) is a predefined hypothesis class, i.e. the learner has prior knowledge of \(\mathcal{H}\). The goal of the learning problem is to find4\(h:\mathcal{X}\rightarrow\mathcal{Y}\) such that the generalization error, given by the loss function \(\mathcal{L}=\mathds{P}_{x\sim D}(h(x)\neq h^{\star}(x))\), is minimized.
Footnote 4: Note that \(h\) need not necessarily belong to \(\mathcal{H}\). If it does, the learner is called _proper_. If not, the learner is _improper_.
**Definition 2.4** (PAC Learner).: _An algorithm \(\mathcal{A}\) is an \((\epsilon,\delta)\)-PAC learner for a hypothesis class \(\mathcal{H}\) if for any unknown distribution \(D\) and for all \(h^{\star}\in\mathcal{H}\), \(\mathcal{A}\) takes in \(m\) pairs of labeled instances, i.e. \(\{(x_{i},y_{i})\}_{i=1}^{m}\), each drawn from \(D\) and outputs a hypothesis \(h\) such that \(\mathds{P}[\mathcal{L}\leq\epsilon]\geq 1-\delta\)._
Indeed, an \((\epsilon,\delta)\)-PAC learner outputs a hypothesis that is, with high probability \((\geq 1-\delta)\), approximately correct (\(\mathcal{L}\leq\epsilon\)). A hypothesis class \(\mathcal{H}\) is _PAC-learnable_ if there exists an algorithm \(\mathcal{A}\) that is an \((\epsilon,\delta)\)-PAC learner for \(\mathcal{H}\). When \(\mathcal{Y}=\{0,1\}\), we are in the setting of binary classification. For the sample complexity of learning boolean function classes, we define below a combinatorial parameter known as the VC dimension.
**Definition 2.5** (VC Dimension).: _Given a hypothesis class \(\mathcal{H}=\{h:\mathcal{X}\rightarrow\{0,1\}\}\), a set \(S=\{s_{1},\ldots,s_{t}\}\subseteq\mathcal{X}\) is said to be shattered by \(\mathcal{H}\) if, for every labeling \(\ell\in\{0,1\}^{t}\), there exists an \(h\in\mathcal{H}\) such that \((h(s_{1}),h(s_{2}),\ldots,h(s_{t}))=\ell\). The VC dimension of \(\mathcal{H}\), \(\text{VCdim}(\mathcal{H})\), is the size of the largest set \(S\) that is shattered by \(\mathcal{H}\)._
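For finite classes over a finite domain, Definition 2.5 can be checked by brute force; the sketch below (our illustration, not from the paper) enumerates subsets and labelings. The threshold class used as an example is an arbitrary choice.

```python
# Brute-force VC dimension: a set S is shattered iff all 2^|S| labelings of S are
# realized by some hypothesis; VCdim is the largest size of a shattered set.
from itertools import combinations

def vc_dim(H, X):
    # H: iterable of hypotheses, each indexable by the points of X with values in {0,1}
    def shattered(S):
        return len({tuple(h[x] for x in S) for h in H}) == 2 ** len(S)
    return max(len(S) for r in range(len(X) + 1)
               for S in combinations(X, r) if shattered(S))

X = range(6)
thresholds = [tuple(int(x >= t) for x in X) for t in range(7)]
print(vc_dim(thresholds, X))   # 1: thresholds on a line shatter singletons only
```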
### Agnostic Learning Framework
In the PAC learning framework, we worked with the _realizability assumption_, namely that \(h^{\star}\in\mathcal{H}\). If we omit this rather strong assumption, we are able to generalize the PAC learning framework to the agnostic learning framework (Kearns et al., 1992). Here, a learner is provided with oracle access to samples \((x,y)\), sampled from some unknown distribution \(D\) on \(\mathcal{X}\times\mathcal{Y}\). The learner has knowledge of a predefined hypothesis class \(\mathcal{H}\). The objective of the learning problem is to find5\(h:\mathcal{X}\rightarrow\mathcal{Y}\) such that the regret
Footnote 5: Once again, note that \(h\) need not necessarily belong to \(\mathcal{H}\). If it does, the learner is called _proper_. If not, the learner is _improper_.
\[\mathcal{R}=\mathds{P}_{(x,y)\sim D}(h(x)\neq y)-\inf_{h_{c}\in\mathcal{H}} \mathds{P}_{(x,y)\sim D}(h_{c}(x)\neq y),\]
is minimized. One can notice that if the labels come from some \(h^{\star}\in\mathcal{H}\), \(\mathcal{R}=\mathcal{L}\).
**Definition 2.6** (Agnostic Learner).: _An algorithm \(\mathcal{A}\) is an \((\epsilon,\delta)\)-agnostic learner for a hypothesis class \(\mathcal{H}\) if for an arbitrary, unknown, distribution \(D\), \(\mathcal{A}\) takes in \(m\) pairs of labeled instances, i.e. \(\{(x_{i},y_{i})\}_{i=1}^{m}\), each drawn from \(D\) and outputs a hypothesis \(h\) such that \(\mathds{P}[\mathcal{R}\leq\epsilon]\geq 1-\delta\)._
### Quantum PAC (and Agnostic) Learning Frameworks
In the quantum setting, the primary difference from the classical setting is with how the examples are provided. In particular, in the PAC learning setup, a quantum example (Bshouty and Jackson, 1995) takes the form
\[\sum_{x\in\{0,1\}^{n}}\sqrt{D(x)}\left|x,h^{\star}(x)\right\rangle, \tag{1}\]
for some \(h^{\star}\in\mathcal{H}\), where \(D:\{0,1\}^{n}\rightarrow[0,1]\) is a distribution over the instance space6, as before. This might appear slightly strange, as a single example seemingly contains information about _all_ possible classical examples. However, if we view it via the lens of measurement (see Definition 2.3), then it is clear that measuring a quantum example will provide the learner with a _single_ classical example \((x,h^{\star}(x))\) with probability \(D(x)\), exactly as in the classical PAC learning setup. While we have argued that the quantum example is a natural generalization of the classical example, the question still remains as to whether any sample complexity advantages in the quantum realm arise from the intrinsic description of a quantum example, from the quantum algorithm used, or from both.
Footnote 6: Here, we have taken \(\mathcal{X}=\{0,1\}^{n}\) for convenience and ease of analysis. However, any \(\mathcal{X}\) could be mapped to this one, if needed.
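The equivalence between measuring a quantum PAC example and querying the classical example oracle can be seen concretely in the small simulation below (ours, not from the paper); the parity target and the Dirichlet-random distribution are arbitrary choices made only for illustration.

```python
# Minimal sketch: prepare sum_x sqrt(D(x)) |x, h*(x)> on n+1 qubits (as an amplitude
# vector) and measure in the computational basis; the result is (x, h*(x)) with
# probability D(x), i.e. a classical example.
import numpy as np

n = 3
D = np.random.dirichlet(np.ones(2 ** n))          # a distribution on {0,1}^n
h_star = lambda x: bin(x).count("1") % 2          # e.g. the parity function

psi = np.zeros(2 ** (n + 1))                       # amplitudes on {0,1}^(n+1)
for x in range(2 ** n):
    psi[(x << 1) | h_star(x)] = np.sqrt(D[x])      # basis state |x, h*(x)>

outcome = np.random.choice(2 ** (n + 1), p=psi ** 2)   # computational-basis measurement
x, y = outcome >> 1, outcome & 1
print("measured example:", (x, y), " consistent:", y == h_star(x), " D(x):", D[x])
```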
In the agnostic learning setting, a quantum example takes the form
\[\sum_{(x,y)\in\{0,1\}^{n+1}}\sqrt{D(x,y)}\left|x,y\right\rangle, \tag{2}\]
where, now, \(D:\{0,1\}^{n+1}\to[0,1]\). These examples, like in the quantum PAC setting above, are typically _prepared_ by acting on the all-zero state \(|0^{n},0\rangle\) via an appropriate quantum circuit.
Given quantum examples (instead of classical examples), Definitions 2.4 and 2.6 otherwise stay exactly the same in the quantum setting.
## 3 Quantum Batch Learning
### Multiclass Classification
For the sample complexity results in the batch multiclass setting, we define the combinatorial parameter, Natarajan Dimension (Ndim\((\cdot)\)), which is a generalization of the VC dimension to the multiclass setting.
**Definition 3.1** (Natarajan Dimension).: _Given a hypothesis class \(\mathcal{H}=\{h:\mathcal{X}\to[k]\}\), a set \(S=\{s_{1},\ldots,s_{t}\}\subseteq\mathcal{X}\) is said to be N-shattered by \(\mathcal{H}\) if there exist two functions \(f_{0},f_{1}:S\to[k]\) such that:_
* _For every_ \(x\in S\)_,_ \(f_{0}(x)\neq f_{1}(x)\)_._
* _For every_ \(R\subseteq S\)_, there exists a function_ \(h\in\mathcal{H}\) _such that_ \[\forall x\in R,h(x)=f_{0}(x)\text{ and }\,\forall x\in S\setminus R,h(x)=f_{1}(x).\]
_The Natarajan dimension of \(\mathcal{H}\), Ndim\((\mathcal{H})\), is the size of the largest set \(S\) that is N-shattered by \(\mathcal{H}\)._
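Definition 3.1 can likewise be checked by brute force for finite classes. Note that if \(S\) is N-shattered by some \(f_{0},f_{1}\), then taking \(R=S\) and \(R=\emptyset\) produces hypotheses in \(\mathcal{H}\) whose restrictions to \(S\) can serve as \(f_{0},f_{1}\) themselves, so it suffices to search over pairs of hypotheses. The following Python sketch (ours; the toy class is an arbitrary example) does exactly that.

```python
# Brute-force Natarajan dimension for a finite class H over a finite domain X.
from itertools import combinations, product

def realizes(H, S, target):
    # is there an h in H agreeing with `target` on every point of S?
    return any(all(h[x] == target[x] for x in S) for h in H)

def n_shattered(S, H):
    for f0, f1 in product(H, repeat=2):            # WLOG f_0, f_1 come from H
        if any(f0[x] == f1[x] for x in S):
            continue                               # need pointwise disagreement on S
        if all(realizes(H, S, {x: (f0[x] if x in R else f1[x]) for x in S})
               for r in range(len(S) + 1) for R in map(set, combinations(S, r))):
            return True
    return False

def natarajan_dim(H, X):
    return max(len(S) for r in range(len(X) + 1)
               for S in combinations(X, r) if n_shattered(S, H))

X = range(3)
H = [(0, 0, 0), (1, 2, 0), (0, 2, 1), (1, 0, 1)]   # a toy class with labels in {0,1,2}
print(natarajan_dim(H, X))                         # 2: {0,1} is N-shattered, X is not
```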
### Lower Bounds
**Theorem 3.2** (Sample Complexity Lower Bounds for Quantum Batch Multiclass Classification).: _Let \(\mathcal{H}\subseteq\mathcal{Y}^{\mathcal{X}}\), where \(2<|\mathcal{Y}|<\infty\). For every \(\delta\in(0,1/2)\) and \(\epsilon\in(0,1/10)\), an \((\epsilon,\delta)\)-quantum PAC (resp. quantum agnostic) learner for \(\mathcal{H}\) needs_
\[m_{\text{PAC}}=\Omega\Bigg{(}\frac{\text{Ndim}(\mathcal{H})+\log(\frac{1}{ \delta})}{\epsilon}\Bigg{)},\,\,\,\text{resp.}\,\,\,\,\,m_{\text{Agnostic}}= \Omega\Bigg{(}\frac{\text{Ndim}(\mathcal{H})+\log(\frac{1}{\delta})}{\epsilon^ {2}}\Bigg{)}\,\,\,\text{quantum examples.}\]
The proof of Theorem 3.2 is deferred to Appendix A. At its core, the proof proceeds via reduction to the quantum binary case, with an appeal to the definition of N-shattering. A key step in the reduction involves the following transformation from a quantum binary example to a quantum multiclass example,
\[\sum_{x\in[d]}\sqrt{D(x)}\,|x,y\rangle\to\sum_{x\in[d]}\sqrt{D(x)}\,|x,f_{y}( x)\rangle\,,\,\text{where}\,\,y\in\{0,1\},\,\text{and}\,\,f_{0},f_{1}:[d]\to[| \mathcal{Y}|]. \tag{3}\]
While in the corresponding classical reduction proof, converting \((x,y)\to(x,f_{y}(x))\) is entirely trivial with the knowledge of \(x,y,f_{0},f_{1}\), performing the transformation in (3) using only unitary operations (in a reversible manner) in the quantum realm involves delicate reasoning using an explicit quantum circuit. In particular, it is noteworthy as its existence hinges on the reversibility of the transformation \(y\leftrightarrow f_{y}\), which is guaranteed precisely due to the definition of N-shattering.
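To see why the pointwise disagreement of \(f_{0}\) and \(f_{1}\) is what makes (3) implementable reversibly, the following numerical sketch (ours; it is a matrix-level illustration, not the explicit circuit of Appendix A) realizes the relabeling as a permutation of the label register controlled on \(x\), which is manifestly unitary. The maps \(f_{0},f_{1}\) and the hypothesis \(h\) below are arbitrary illustrative choices.

```python
# The map |x,y> -> |x,f_y(x)>, y in {0,1}, extends to a permutation of the label
# register for each x (possible because f_0(x) != f_1(x)), hence to a unitary on
# C^d (x) C^k acting correctly on any binary quantum example.
import numpy as np

d, k = 4, 3
f0, f1 = [0, 1, 2, 0], [2, 2, 0, 1]            # pointwise-distinct label maps

U = np.zeros((d * k, d * k))
for x in range(d):
    # send label 0 to f0(x), label 1 to f1(x); complete to a permutation of [k]
    images = [f0[x], f1[x]] + [y for y in range(k) if y not in (f0[x], f1[x])]
    for y in range(k):
        U[x * k + images[y], x * k + y] = 1.0
assert np.allclose(U @ U.T, np.eye(d * k))      # permutation matrix, hence unitary

D = np.array([0.1, 0.2, 0.3, 0.4])
h = [0, 1, 1, 0]                                # a binary hypothesis on [d]
psi_bin = np.zeros(d * k)
for x in range(d):
    psi_bin[x * k + h[x]] = np.sqrt(D[x])       # binary example, labels {0,1} inside [k]
psi_multi = U @ psi_bin                         # = sum_x sqrt(D(x)) |x, f_{h(x)}(x)>
print([int(np.argmax(psi_multi[x * k:(x + 1) * k] ** 2)) for x in range(d)])
# prints [0, 2, 0, 0], i.e. the labels f_{h(x)}(x) for the maps chosen above
```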
### Upper Bounds
In general, classical sample complexity upper bounds trivially translate to the corresponding quantum ones, as the quantum learner always has the option of simply performing a measurement on each quantum example and running the classical learning algorithm on the resulting \(m\) classical examples. For the theorem statement and proof, refer to Appendix B.
## 4 Classical Online Learning
So far, we have been working with learning in the batch setting, where we are provided with all the examples in one go7. For several practical applications, it is either impossible to obtain all the examples at once (e.g. recommendation systems), or we simply wish to evolve our learning over time. In these cases, _online learning_(Littlestone, 1988) - where we cycle through training our hypothesis using examples we receive over time, and using our current hypothesis to predict for the upcoming example - is the appropriate framework to be placing ourselves in.
Footnote 7: This is typical for most settings where we are trying to learn a hypothesis via inductive reasoning (e.g. learning a function to fit data, etc.)
First, we will introduce known models and results in classical online learning, and a generalization in Section 4.2 that, in turn, provides us with a quantum online learning model (Section 5) as a natural generalization. For ease of description, here, we work with boolean function classes in the realizable setting.
### Adversary provides an input
Let \(\mathcal{C}:=\{c:\{0,1\}^{n}\to\{0,1\}\}\), and \(\mathcal{H}\subseteq\mathcal{C}\). A protocol for online learning is a \(T\)-round procedure described as follows: at the \(t\)-th round,
1. Adversary provides input point in the domain: \(x_{t}\in\{0,1\}^{n}\).
2. Learner uses a hypothesis \(h_{t}\in\mathcal{C}\), and makes the prediction \(\hat{y}_{t}=h_{t}(x_{t})\in\{0,1\}\).
3. Adversary provides the input point's label, \(y_{t}=h^{\star}(x_{t})\), where \(h^{\star}\in\mathcal{H}\).
4. Learner suffers a loss of 1 (a'mistake'), if \(\hat{y}_{t}\neq y_{t}\), i.e. \(\mathcal{L}_{t}=\mathbf{1}[\hat{y}_{t}\neq y_{t}]\).
Therefore, the learner's total loss8 is given by,
Footnote 8: **Notation:** We use \(\mathbf{h}\) to denote the sequence of hypotheses \(h_{t}\) that the learner uses for \(t=1,\ldots,T\). And, \(\mathbf{h}_{A}\) to explicitly denote that the sequence of hypotheses comes from algorithm \(A\).
\[\mathcal{L}(\mathbf{h},h^{\star})=\sum_{t=1}^{T}\mathbf{1}[h_{t}(x_{t})\neq h ^{\star}(x_{t})]. \tag{4}\]
The learner's objective is to minimize \(\mathcal{L}(\mathbf{h},h^{\star})\), i.e. make as few errors as possible. For an online learning algorithm \(A\), let \(M_{A}(\mathcal{H})\) denote the maximal number of mistakes \(A\) makes on a sequence of examples labeled by \(h^{\star}\in\mathcal{H}\), i.e. \(M_{A}(\mathcal{H})=\sup_{h^{\star}\in\mathcal{H}}\mathcal{L}(\mathbf{h}_{ \mathbf{A}},h^{\star})\). For the subsequent bound on \(M_{A}(\mathcal{H})\), we first define the combinatorial parameter, Littlestone Dimension (Ldim(\(\cdot\))).
**Definition 4.1** (Littlestone Dimension).: _Let \(T\) be a rooted tree whose internal nodes are labeled by elements from \(\mathcal{X}\). Each internal node's left edge and right edge are labeled 0 and 1, respectively. The tree \(T\) is L-shattered by \(\mathcal{H}\) if, for every path from root to leaf which traverses the nodes \(x_{1},\ldots,x_{d}\), there exists a hypothesis \(h\in\mathcal{H}\) such that, for all \(i\), \(h(x_{i})\) is the label of the edge \((x_{i},x_{i+1})\). We define the Littlestone Dimension, Ldim(\(\mathcal{H}\)), to be the maximal depth of a complete binary tree that is L-shattered by \(\mathcal{H}\)._
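For finite classes, Definition 4.1 unwinds into a simple recursion: a tree of depth \(d+1\) rooted at \(x\) is L-shattered iff both restrictions \(\mathcal{H}_{x\to 0}\) and \(\mathcal{H}_{x\to 1}\) L-shatter trees of depth \(d\). The brute-force sketch below (ours, not from the paper) implements this recursion; the threshold class is again only an illustrative choice.

```python
# Brute-force Littlestone dimension via the shattering recursion.
def ldim(H, X):
    if not H:
        return -1                      # convention: the empty class shatters nothing
    best = 0
    for x in X:
        H0 = [h for h in H if h[x] == 0]
        H1 = [h for h in H if h[x] == 1]
        if H0 and H1:                  # x can serve as a root splitting the class
            best = max(best, 1 + min(ldim(H0, X), ldim(H1, X)))
    return best

X = range(8)
thresholds = [tuple(int(x >= t) for x in X) for t in range(9)]
print(ldim(thresholds, X))   # 3: the adversary can force a binary search among 9 thresholds
```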
The classical online learning model in this subsection (Section 4.1) has been studied thoroughly, and the following theorem characterizes it in terms of the Littlestone Dimension.
**Theorem 4.2** (Mistake Bound for the Classical Online Model).: _(Shalev-Shwartz and Ben-David (2014), Cor. 21.8) Let \(\mathcal{H}\subseteq\{0,1\}^{\mathcal{X}}\) be a hypothesis class. Then, the standard optimal algorithm enjoys the mistake bound \(M_{\text{SOA}}(\mathcal{H})=\text{Ldim}(\mathcal{H})\) and no other algorithm can have \(M_{A}(\mathcal{H})<\text{Ldim}(\mathcal{H})\)._
However, if one attempts to generalize this to the quantum setting, one quickly realizes that quantum examples of the form (1) do not split the input-label pair. In particular, an adversary cannot temporally separate its provision of the input point and its label.
A first step towards a model that can be generalized to the quantum setting, then, is to reorder the steps at the \(t\)-th round to \(2,1\,\&\,3,4\) (i.e. where the learner provides a prediction \(\hat{y}_{t}\) after which the adversary presents _both_ the input and its label \((x_{t},y_{t})\)).
While this reordering gives an entirely equivalent model that is once again characterized by Littlestone Dimension (Theorem 4.2), it is not sufficient for a natural quantum generalization. An adversary only ever presents one example at each round, and it is futile to attempt to generalize a single classical example to a quantum example that sits in a superposition. The missing piece, evidently, is the lack of a notion of a distribution over examples in the classical online models examined so far.
### Adversary provides a distribution
Now that we have identified the missing piece, we obtain the appropriate generalization of the model in Section 4.1 by asking the adversary to, at each \(t\), choose a distribution over a set of input-label pairs, from which an explicit input-label pair is then drawn. The protocol for the \(T\)-round procedure will be as follows: at the \(t\)-th round,
1. Learner provides a hypothesis \(h_{t}\in\mathcal{C}\).
2. Adversary chooses a distribution \(D_{t}:\{0,1\}^{n}\to[0,1]\) on the instance space, draws \(x_{t}\sim D_{t}\), and reveals \((x_{t},h^{\star}(x_{t}))\) to the learner.
3. Learner suffers, but does not "see", a loss of \(\mathcal{L}_{t}=\mathds{P}_{x\sim D_{t}}(h_{t}(x)\neq h^{\star}(x))\).
Here, the learner's total loss is given by,
\[\mathcal{L}(\mathbf{h},\mathbf{D},h^{\star})=\sum_{t=1}^{T}\mathds{P}_{x\sim D _{t}}(h_{t}(x)\neq h^{\star}(x)). \tag{5}\]
We identify this model as the adaptive adversary variant of the online learning model recently considered in Dawid and Tewari (2022). Note that, if we force the distribution \(D_{t}\) to be point masses at each \(t=1,\dots,T\), we recover the _reordered_ model in Section 4.1.
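To fix ideas, here is a toy simulation (ours, not from the paper) of the protocol above for a finite boolean class, with the Halving learner (majority vote over the current version space) standing in for the standard optimal algorithm; the threshold class and the random adversary distributions are purely illustrative choices.

```python
# Toy run of the distribution-based realizable protocol of Section 4.2.
import random

X = list(range(8))
H = [tuple(1 if x >= t else 0 for x in X) for t in range(9)]   # threshold hypotheses
h_star = random.choice(H)

version_space = list(H)
total_loss = 0.0
T = 20
for t in range(T):
    # learner's hypothesis: pointwise majority vote over the version space
    h_t = tuple(int(sum(h[x] for h in version_space) * 2 > len(version_space))
                for x in X)
    # adversary picks a distribution D_t on X (here: uniform over a random subset)
    support = random.sample(X, k=random.randint(1, len(X)))
    D_t = {x: 1.0 / len(support) for x in support}
    # learner suffers (but does not see) the expected 0-1 loss under D_t
    total_loss += sum(p for x, p in D_t.items() if h_t[x] != h_star[x])
    # a labeled example is drawn from D_t and revealed to the learner
    x_t = random.choices(list(D_t), weights=list(D_t.values()))[0]
    version_space = [h for h in version_space if h[x_t] == h_star[x_t]]

print("cumulative expected loss:", total_loss)
```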
**Theorem 4.3** (Upper Bound on Expected Loss for the Classical Online Model in Section 4.2).: _Let \(\mathcal{H}\subseteq\{0,1\}^{\mathcal{X}}\) be a hypothesis class and let \(h^{\star}\in\mathcal{H}\). Then,_
\[\mathds{E}[\mathcal{L}(\mathbf{h},\mathbf{D},h^{\star})]=\mathcal{O}(\text{ Ldim}(\mathcal{H})+\log\log T).\]
The proof of Theorem 4.3 is deferred to Appendix C. The principal difference between Equation 5 and Equation 4 is that the loss function has changed from the 0-1 loss to the _expectation_ of the 0-1 loss over \(\mathbf{D}\). The proof, naturally then, makes use of the preexisting analysis for the bounds on Equation 4. In particular, it proceeds to show that \(\mathds{P}_{x\sim D_{t}}(h_{t}(x)\neq h^{\star}(x))-\mathbf{1}[h_{t}(x_{t})\neq h^{\star}(x_{t})]\) forms a martingale difference sequence, and uses a sophisticated martingale concentration inequality to show that Equations 4 and 5 are _close_, which then allows us to transfer the existing \(\mathcal{O}(\text{Ldim}(\mathcal{H}))\) upper bound on Equation 4 here. The particular martingale concentration inequality used is a variant of Freedman's inequality (Lemma 3 of Kakade and Tewari (2008)), for which we pay an "extra" \(\log\log T\) factor. We believe that this extra \(\log\log T\) factor is an artifact of the specific concentration inequality used, and can potentially be done away with. We note that the standard Hoeffding-Azuma inequality would have provided us with a substantially worse upper bound of \(\mathcal{O}(\sqrt{\text{Ldim}(\mathcal{H})\cdot T\log T})\).
## 5 Quantum Online Learning
Equipped with our model in Section 4.2, we are finally ready to introduce our quantum online learning model. However, prior to the model description, we clarify our scope. In its nascent existence, quantum online learning has primarily focused on the online learning of _quantum states_(Aaronson et al., 2018; Arunachalam et al., 2021). In contrast, our focus in this paper is on the online learning of _classical functions_ via quantum examples. Our scope is motivated by the abundance of classical online learning literature (Littlestone, 1988; Ben-David et al., 2009; Daniely et al., 2015; Shalev-Shwartz and Ben-David, 2014), that presents us with an at-the-ready comparison.
### Model Description
Let \(\mathcal{H}\subseteq\mathcal{Y}^{\mathcal{X}}\) be a hypothesis class. Identifying the \(T\)-round protocol in Section 4.2 with the definition of a quantum example in (1) and (2), we obtain the following "natural" model for quantum online learning. The \(T\)-round protocol proceeds as follows: at the \(t\)-th round,
1. Learner provides a hypothesis \(h_{t}:\mathcal{X}\to\mathcal{Y}\).
2. Adversary reveals an example \(\left|\psi_{t}\right\rangle\) where 1. \(\left|\psi_{t}\right\rangle=\sum_{x\in\mathcal{X}}\sqrt{D_{t}(x)}\left|x,h^{ \star}(x)\right\rangle\) for some \(D_{t}:\mathcal{X}\to[0,1]\) and \(h^{\star}\in\mathcal{H}\) (realizable), 2. \(\left|\psi_{t}\right\rangle=\sum_{x\in\mathcal{X},\,y\in\mathcal{Y}}\sqrt{D_{t }(x,y)}\left|x,y\right\rangle\) for some \(D_{t}:\mathcal{X}\times\mathcal{Y}\to[0,1]\) (agnostic9). Footnote 9: The adversary need not be consistent: i.e., they could reveal both \(\left|x,0\right\rangle\) and \(\left|x,1\right\rangle\) during the \(T\)-round protocol.
3. Learner incurs loss10\(\mathcal{L}_{t}:=\mathds{P}_{x\sim D_{t}}[h_{t}(x)\neq y]\). Footnote 10: Alternatively, if one prefers a mistake model, one could specify a threshold \(\epsilon\), where iff \(\mathds{P}_{x\sim D_{t}}[h_{t}(x)\neq y]>\epsilon\), we treat it as a mistake round, i.e. \(\mathcal{L}_{t}^{\epsilon}=\mathds{1}[\mathds{P}_{x\sim D_{t}}(h_{t}(x)\neq y )>\epsilon]\).
In each of the realizable and agnostic cases, the regret is given by
\[\mathcal{R}:=\sup_{h\in\mathcal{H}}\sum_{t=1}^{T}\Big{(}\mathds{P}_{x\sim D_ {t}}[h_{t}(x)\neq y]-\mathds{P}_{x\sim D_{t}}[h(x)\neq y]\Big{)}. \tag{6}\]
In the agnostic case, equation 6 above can be equivalently written as
\[\mathcal{R}^{\text{agnostic}}:=\sum_{t=1}^{T}\mathds{P}_{x\sim D_{t}}[h_{t}( x)\neq y]-\inf_{h\in\mathcal{H}}\sum_{t=1}^{T}\mathds{P}_{x\sim D_{t}}[h(x)\neq y]. \tag{7}\]
In the realizable case, as \(y=h^{\star}(x)\) for some \(h^{\star}\in\mathcal{H}\), Equation 6 simplifies to
\[\mathcal{R}^{\text{realizable}}:=\sup_{h^{\star}\in\mathcal{H}}\sum_{t=1}^{T} \mathds{P}_{x\sim D_{t}}[h_{t}(x)\neq h^{\star}(x)]. \tag{8}\]
### Binary Classification
#### 5.2.1 Lower Bounds
**Theorem 5.1** (Lower Bounds on Expected Regret for Quantum Online Binary Classification).: _Let \(\mathcal{H}\subseteq\{0,1\}^{\mathcal{X}}\). Every quantum online learner satisfies_
\[\mathds{E}[\mathcal{R}^{\text{realizable}}]=\Omega(\textit{Ldim}(\mathcal{H})), \text{ and }\mathds{E}[\mathcal{R}^{\text{agnostic}}]=\Omega(\sqrt{\textit{Ldim}( \mathcal{H})\cdot T}).\]
The proof involves the adversary choosing \(D_{t}\) to be a point mass for each \(t\), rendering each quantum example \(\left|\psi_{t}\right\rangle\) information theoretically equivalent to a classical example \((x_{t},y_{t})\), allowing us to obtain these lower bounds from the corresponding classical lower bounds. The full proof can be found in Appendix D.
#### 5.2.2 Upper Bounds
**Theorem 5.2** (Upper Bound on Expected \(\mathcal{R}^{\text{realizable}}\) for Quantum Online Binary Classification).: _Let \(\mathcal{H}\subseteq\{0,1\}^{\mathcal{X}}\). Every quantum online learner for \(\mathcal{H}\) satisfies_
\[\mathds{E}[\mathcal{R}^{\text{realizable}}]=\mathcal{O}(\text{Ldim}(\mathcal{H })+\log\log T).\]
Proof.: This proof proceeds identically to the proof of Theorem 4.3, so we present an abridged version of that proof here. Consider \(M_{t}=\underbrace{\mathds{P}_{x\sim D_{t}}(h_{t}(x)\neq h^{\star}(x))}_{P_{t}} -\underbrace{\mathbf{1}[h_{t}(x_{t})\neq h^{\star}(x_{t})]}_{I_{t}}\), where \((x_{t},h^{\star}(x_{t}))\) is a particular sample obtained when the learning algorithm measures \(|\psi_{t}\rangle\). With the filtration \(\mathcal{F}_{t}\) now corresponding to the set of observed quantum states up to (and, including) time \(t\), we have that \(M_{t}\) is a martingale difference sequence. Applying Freedman's inequality exactly as in the proof of Theorem 4.3, we obtain \(\mathds{E}[\mathcal{R}^{\text{realizable}}]=\mathds{E}[\sup_{h^{\star}\in \mathcal{H}}\sum_{t=1}^{T}P_{t}]=\mathcal{O}(\text{Ldim}(\mathcal{H})+\log \log T)\) (as Theorem 4.3 assumes an arbitrary \(h^{\star}\in\mathcal{H}\) throughout). This completes our proof.
**Theorem 5.3** (Upper Bound on Expected \(\mathcal{R}^{\text{agnostic}}\) for Quantum Online Binary Classification).: _Let \(\mathcal{H}\subseteq\{0,1\}^{\mathcal{X}}\). Every quantum online learner for \(\mathcal{H}\) satisfies_
\[\mathds{E}[\mathcal{R}^{\text{agnostic}}]=\mathcal{O}(\sqrt{\text{Ldim}( \mathcal{H})\cdot T\log T}).\]
The proof of Theorem 5.3 is deferred to Appendix E. In contrast to the proof of Theorem 5.2, for \(\mathds{E}[\mathcal{R}^{\text{agnostic}}]\), we need control over the _supremum_ of martingales (i.e. we seek a uniform martingale law of large numbers) here. This prompts us to appeal to three theorems of Rakhlin et al. (2015) and proceed as follows. We first bound \(\mathds{E}[\mathcal{R}^{\text{agnostic}}]\) by twice the sequential Rademacher complexity of the loss-of-hypothesis (\(\ell\circ\mathcal{H}\)) class (see Definition E.1), which is bounded by a quantity involving the 0-covering number of \(\ell\circ\mathcal{H}\) on its (subsequently) L-shattered trees, which is bounded by a quantity involving \(\text{Ldim}(\ell\circ\mathcal{H})\), which we then bound by \(\text{Ldim}(\mathcal{H})\) (Lemma E.2).
### Multiclass Classification
For the subsequent results in the multiclass setting, we first define the combinatorial parameter, Multiclass Littlestone Dimension (\(\text{mcLdim}(\cdot)\)).
**Definition 5.4** (Multiclass Littlestone Dimension).: _Let \(T\) be a rooted tree whose internal nodes are labeled by elements from \(\mathcal{X}\) and whose edges are labeled by elements from \(\mathcal{Y}\), such that the edges from a single parent to its child-nodes are each labeled with a different label. The tree \(T\) is mcL-shattered by \(\mathcal{H}\) if, for every path from root to leaf which traverses the nodes \(x_{1},\ldots,x_{d}\), there exists a hypothesis \(h\in\mathcal{H}\) such that, for all \(i\), \(h(x_{i})\) is the label of the edge \((x_{i},x_{i+1})\). We define the Multiclass Littlestone Dimension, \(\text{mcLdim}(\mathcal{H})\), to be the maximal depth of a complete **binary** tree that is mcL-shattered by \(\mathcal{H}\)._
In the binary case (where the only "different labels" are 0 and 1), it is not hard to see that the above definition reduces to Definition 4.1.
#### 5.3.1 Lower Bounds
**Theorem 5.5** (Lower Bounds on Expected Regret for Quantum Online Multiclass Classification).: _Let \(\mathcal{H}\subseteq\mathcal{Y}^{\mathcal{X}}\), where \(|\mathcal{Y}|>2\). Every quantum online learner for \(\mathcal{H}\) satisfies_
\[\mathds{E}[\mathcal{R}^{\text{realizable}}]=\Omega(\text{mcLdim}(\mathcal{H})), \text{ and }\mathds{E}[\mathcal{R}^{\text{agnostic}}]=\Omega(\sqrt{\text{mcLdim}( \mathcal{H})\cdot T}).\]
Proof.: The proof is identical to that of Theorem 5.1, where now, for the corresponding classical learners, \(\mathds{E}[\mathcal{R}^{\text{realizable}}]=\text{mcLdim}(\mathcal{H})= \Omega(\text{mcLdim}(\mathcal{H}))\) (from Theorem 5.1 of Daniely et al. (2015)), and \(\mathds{E}[\mathcal{R}^{\text{agnostic}}]=\Omega(\sqrt{\text{mcLdim}( \mathcal{H})\cdot T})\) (from Theorem 5.3 of Daniely et al. (2015)).
#### 5.3.2 Upper Bounds
**Theorem 5.6** (Upper Bound on Expected \(\mathcal{R}^{\text{realizable}}\) for Quantum Online Multiclass Classification).: _Let \(\mathcal{H}\subseteq\mathcal{Y}^{\mathcal{X}}\), where \(|\mathcal{Y}|>2\). Every quantum online learner for \(\mathcal{H}\) satisfies_
\[\mathds{E}[\mathcal{R}^{\text{realizable}}]=\mathcal{O}(\text{ mcLdim}(\mathcal{H})+\log\log T).\]
Proof.: The proof for the upper bound is identical to the proofs of Theorem 4.3 and Theorem 5.2, which we abridge here with the corresponding changes. Once again, we consider \(M_{t}=\underbrace{\mathds{P}_{x\sim D_{t}}(h_{t}(x)\neq h^{\star}(x))}_{P_{t}} -\underbrace{\mathbf{1}[h_{t}(x_{t})\neq h^{\star}(x_{t})]}_{I_{t}}\), where \((x_{t},h^{\star}(x_{t}))\) is a particular sample obtained when the learning algorithm measures \(|\psi_{t}\rangle\). With the same filtration \(\mathcal{F}_{t}\) as in the proof of Theorem 5.2, we have that \(M_{t}\) is a martingale difference sequence. Here, we would have that the predictable quadratic variation process \(W_{T}\) of \(M_{t}\), is given by
\[W_{T}=\sum_{t=1}^{T}\mathds{E}[M_{t}^{2}|\mathcal{F}_{t-1}]\leq\sum_{t=1}^{T}( P_{t}+I_{t})=\sum_{t=1}^{T}P_{t}+\sum_{t=1}^{T}I_{t}\leq\sum_{t=1}^{T}P_{t}+ \text{mcLdim}(\mathcal{H}),\]
to which we apply Freedman's inequality exactly as in the proof of Theorem 4.3, obtaining \(\mathds{E}[\mathcal{R}^{\text{realizable}}]=\mathds{E}[\sup_{h^{\star}\in\mathcal{H}}\sum_{t=1}^{T}P_{t}]=\mathcal{O}(\text{mcLdim}(\mathcal{H})+\log\log T)\) (as Theorem 4.3 assumes an arbitrary \(h^{\star}\in\mathcal{H}\) throughout). This completes our proof.
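For concreteness, the form of Freedman's inequality invoked in this step (stated here in its standard form for the reader's convenience, not quoted from the appendix) is: for a martingale difference sequence with \(M_{t}\leq R\) almost surely and predictable quadratic variation \(W_{T}\),

\[\mathds{P}\Bigg(\sum_{t=1}^{T}M_{t}\geq a\ \text{ and }\ W_{T}\leq\sigma^{2}\Bigg)\leq\exp\!\Bigg(-\frac{a^{2}}{2\big(\sigma^{2}+Ra/3\big)}\Bigg)\qquad\text{for all }a,\sigma^{2}>0.\]

Combining this tail bound with the bound on \(W_{T}\) above yields the stated control on the expected realizable regret.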
**Theorem 5.7** (Upper Bound on Expected \(\mathcal{R}^{\text{agnostic}}\) for Quantum Online Multiclass Classification).: _Let \(\mathcal{H}\subseteq\mathcal{Y}^{\mathcal{X}}\), where \(2<|\mathcal{Y}|=k<\infty\). Then, every quantum online learner for \(\mathcal{H}\) satisfies_
\[\mathds{E}[\mathcal{R}^{\text{agnostic}}]=\mathcal{O}(\sqrt{\text{mcLdim}( \mathcal{H})\cdot T\log T\cdot k\log k}).\]
The proof of Theorem 5.7 is deferred to Appendix F. This proof proceeds similarly to the proof of Theorem 5.3. However, \(\text{Ldim}(\ell\circ\mathcal{H})\leq\text{Ldim}(\mathcal{H})\), proved in Lemma E.2, only holds for boolean hypothesis classes, and does not extend correspondingly to \(\text{mcLdim}(\mathcal{H})\). In the process of sidestepping this, we find a somewhat unexpected connection between \(\text{Ldim}(\ell\circ\mathcal{H})\) and \(\text{BLdim}(\mathcal{H})\), the Bandit Littlestone dimension of \(\mathcal{H}\). We bound \(\text{Ldim}(\ell\circ\mathcal{H})\leq\text{BLdim}(\mathcal{H})\) by an explicit construction of a tree that is BL-shattered by \(\mathcal{H}\), from a tree that is L-shattered by \(\ell\circ\mathcal{H}\). We then appeal to a known bound \(\text{BLdim}(\mathcal{H})=\mathcal{O}(k\log k\cdot\text{mcLdim}(\mathcal{H}))\)(Daniely and Helbertal, 2013).
This roundabout approach to bound \(\text{Ldim}(\ell\circ\mathcal{H})\) for multiclass \(\mathcal{H}\), by seeking bounds in the bandit setting, costs us a factor of \(k\) which, we strongly believe, can be improved by establishing a direct bound on \(\text{Ldim}(\ell\circ\mathcal{H})\), involving \(\text{mcLdim}(\mathcal{H})\). We have not attempted to optimize this bound in this paper, as our message of \(\text{mcLdim}(\mathcal{H})\) characterizing learning in the online multiclass setting (with finitely many labels) continues to hold, while Theorem 5.5 tells us that quantum online multiclass learning can hope for _at most a constant_ advantage in expected regret over classical online multiclass learning.
Before we end this section on online learning with quantum examples, we note that the proofs for the expected regret upper bounds were established by a quantum online learner that performs a measurement and subsequently learns classically. The fact that the upper bounds thus obtained are close to the lower bounds shows that the performance of this measure-and-learn-classically learner is close to that of the best "genuine" quantum online learner. We feel that this is consistent with the overall message of this paper, viz. that there is limited power in quantum examples to speed up learning especially when the adversary is allowed to play arbitrary distributions (including very degenerate ones like point masses).
## Conclusion
In this work, we partially resolved an open question of Arunachalam and de Wolf (2018) by characterizing the sample complexity of multiclass learning (for \(2<k<\infty\)). With recent work (Brukhim et al., 2022) fully characterizing _classical_ multiclass learnability (including the case \(k=\infty\)) via the DS dimension, we ask whether _quantum_ multiclass learnability is also fully characterized by the DS dimension.
In the batch setting, the sample complexity upper bounds were trivial to establish due to the quantum learner's ability to measure quantum examples and learn classically on the resulting output. In the online setting, the expected regret lower bounds, in turn, were trivial due to the adversary's ability to provide point masses \(D_{t}\) at each \(t\), rendering each quantum example equivalent to a classical example. This prompts us to consider what happens when we impose restrictions on \(D_{t}\) to force it away from a point mass. Would the expected regret bounds for classical online models in Section 4.1, Section 4.2, and the quantum online model in Section 5.1 all diverge from one another?
|
2303.07720
|
Polarized proton structure in the resonance region
|
In view of the precise data available on inclusive polarized electron
scattering off polarized proton targets in the nucleon resonance excitation
region, we compare these results with the coherent sum of resonant
contributions to the polarized structure function $g_1$ and virtual photon
asymmetry $A_1$. To this goal, we employ the nucleon resonance
electroexcitation amplitudes determined for photon virtualities $Q^2$ $<$ 5.0
GeV$^2$ from analyses of the CLAS data on exclusive electroproduction off
protons in the resonance region. Most of the well established resonances of
four star PDG status in the mass range up to 1.75~GeV are included. We find
that the resonance-like structures observed in the inclusive $g_1$ data are
related to the resonant contributions in the entire range of photon virtuality
$Q^2$ where the data on $g_1$ are available. In the range of invariant mass of
the final hadron system $W$ $>$ 1.5 GeV, the data on the asymmetry $A_1$ are
well reproduced even when accounting for resonant contributions only,
especially for the larger values of $Q^2$ and energies analysed. This
observation offers an interesting hint to quark-hadron duality seen in
polarized inclusive electron scattering observables.
|
A. N. Hiller Blin, V. I. Mokeev
|
2023-03-14T09:08:47Z
|
http://arxiv.org/abs/2303.07720v1
|
# Polarized proton structure in the resonance region
###### Abstract
In view of the precise data available on inclusive polarized electron scattering off polarized proton targets in the nucleon resonance excitation region, we compare these results with the coherent sum of resonant contributions to the polarized structure function \(g_{1}\) and virtual photon asymmetry \(A_{1}\). To this goal, we employ the nucleon resonance electroexcitation amplitudes determined for photon virtualities \(Q^{2}<5.0\) GeV\({}^{2}\) from analyses of the CLAS data on exclusive electroproduction off protons in the resonance region. Most of the well established resonances of four star PDG status in the mass range up to 1.75 GeV are included. We find that the resonance-like structures observed in the inclusive \(g_{1}\) data are related to the resonant contributions in the entire range of photon virtuality \(Q^{2}\) where the data on \(g_{1}\) are available. In the range of invariant mass of the final hadron system \(W>1.5\) GeV, the data on the asymmetry \(A_{1}\) are well reproduced even when accounting for resonant contributions only, especially for the larger values of \(Q^{2}\) and energies analysed. This observation offers an interesting hint to quark-hadron duality seen in polarized inclusive electron scattering observables.
+
Footnote †: preprint: JLAB-PHY-23-3773
## I Introduction
Inclusive electron scattering off protons and the exploration of its polarization observables offer an essential means to obtaining insights about the ground proton structure [1; 2; 3]. The extension of these studies to the resonance region will allow one to understand the proton structure at large values of the fractional parton momentum \(x\) in the resonance region and eventually to shed light on the strong interaction dynamics which underlies the transition from the strongly coupled to the perturbative QCD regimes, as well as the associated characteristics of quark-hadron duality [4; 5; 6; 7; 8; 9; 10; 11; 12].
There have been impressive advances in measuring inclusive scattering of polarized electron beams off polarized nucleon targets [13; 14; 15; 16; 17; 18; 19; 20], which open the path to duality studies in spin-dependent observables [4; 5; 21; 22; 23]. In order to improve the theory approaches describing the connection between resonances and scaling contributions, considerations need to be made about the role of the non-resonant background. While such a quantitative description from first principles is rather challenging, insight may be obtained from phenomenological analyses.
The experimental program exploring exclusive \(\pi^{+}n\), \(\pi^{0}p\), \(\eta p\), and \(\pi^{+}\pi^{-}p\) electroproduction channels in the resonance region with the CLAS detector at Jefferson Lab has provided important new information on the \(\gamma^{*}pN^{*}\) electrocouplings of most nucleon resonances in the mass range \(W\leq 1.75\) GeV and for \(Q^{2}\leq 5\) GeV\({}^{2}\)[24; 25; 26; 27; 28; 29; 30; 31]. These results allow one to quantitatively evaluate the coherent sum of resonant contributions to inclusive electron scattering observables, using parameters of the individual nucleon resonances extracted from data.
In our previous works [32; 33; 34], we confronted polarized and unpolarized inclusive electron-scattering data with the computation of resonant contributions in the resonance region. In the present work, we include updated data on the polarized structure function \(g_{1}\)[20] and the virtual photon asymmetry \(A_{1}\)[13; 14; 15; 16; 17; 18; 19; 20]. In particular, the latter have extended the coverage in \(Q^{2}\) and \(W\) in comparison with the data analyzed in our previous work, therefore permitting more insightful conclusions about the behavior of this observable in the resonance region.
In Sec. II we give a brief summary of the formalism, referring to our previous work [34] for a detailed description. The results of our computation compared with the available data are discussed in Sec. III. In Sec. IV we summarize our findings and give an outlook of these studies.
## II Formalism
The formalism used in the present work follows that thoroughly described in our previous article [34].
In terms of cross sections, the virtual photon asymmetries are given by [4; 35; 36]
\[A_{1}=\frac{\sigma_{T}^{1/2}-\sigma_{T}^{3/2}}{\sigma_{T}^{1/2}+\sigma_{T}^{3 /2}},\qquad A_{2}=\frac{\sigma_{I}}{\sigma_{T}}, \tag{1}\]
where \(\sigma_{I}\) is the real part of the interference amplitude for virtual photons with longitudinal and transverse polarizations. The structure functions are then related to the virtual photon asymmetries via
\[g_{1} =\frac{1}{\rho^{2}}\,F_{1}\Big{(}A_{1}+A_{2}\sqrt{\rho^{2}-1} \Big{)}, \tag{2a}\] \[g_{2} =\frac{1}{\rho^{2}}\,F_{1}\Big{(}-A_{1}+\frac{A_{2}}{\sqrt{\rho^{ 2}-1}}\Big{)}, \tag{2b}\]
with the kinematic factor \(\rho^{2}=1+Q^{2}/\nu^{2}\). Here, \(-Q^{2}\) is the 4-momentum transfer squared between the electron and the proton, while \(\nu\) is the virtual photon energy in the lab frame. It is related to the invariant mass \(W\) of
the virtual photon-target proton system via \(\nu=(W^{2}-M^{2}+Q^{2})/2M\), where \(M\) is the nucleon mass.
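For later reference, Eqs. (2) can be inverted to express the asymmetries in terms of the structure functions; this is a short algebraic consequence of the relations above, which we add here for convenience:

\[A_{1}=\frac{1}{F_{1}}\left(g_{1}-\frac{Q^{2}}{\nu^{2}}\,g_{2}\right),\qquad A_{2}=\frac{Q}{\nu}\,\frac{g_{1}+g_{2}}{F_{1}}\quad.\]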
The coherent sum of contributions from the resonances \(R\) to the inclusive structure functions can be written as [4; 22]
\[\left(1+\frac{Q^{2}}{\nu^{2}}\right)g_{1}^{\rm res}= M^{2}\sum_{IJ\eta}\Bigg{\{}\bigg{|}\sum_{R^{IJ\eta}}G_{+}^{R^{IJ\eta}}\bigg{|}^{2}-\bigg{|}\sum_{R^{IJ\eta}}G_{-}^{R^{IJ\eta}}\bigg{|}^{2}+\sqrt{2}\,\frac{Q}{\nu}\,{\rm Re}\Bigg{[}\bigg{(}\sum_{R^{IJ\eta}}G_{0}^{R^{IJ\eta}}\bigg{)}^{\!*}\sum_{R^{IJ\eta}}G_{+}^{R^{IJ\eta}}\Bigg{]}\Bigg{\}}, \tag{3a}\] \[\left(1+\frac{Q^{2}}{\nu^{2}}\right)g_{2}^{\rm res}= -M^{2}\sum_{IJ\eta}\Bigg{\{}\bigg{|}\sum_{R^{IJ\eta}}G_{+}^{R^{IJ\eta}}\bigg{|}^{2}-\bigg{|}\sum_{R^{IJ\eta}}G_{-}^{R^{IJ\eta}}\bigg{|}^{2}-\sqrt{2}\,\frac{\nu}{Q}\,{\rm Re}\Bigg{[}\bigg{(}\sum_{R^{IJ\eta}}G_{0}^{R^{IJ\eta}}\bigg{)}^{\!*}\sum_{R^{IJ\eta}}G_{+}^{R^{IJ\eta}}\Bigg{]}\Bigg{\}} \tag{3b}\]
for the spin-dependent structure functions, and
\[F_{1}^{\rm res}= M\sum_{IJ\eta}\Bigg{\{}\bigg{|}\sum_{R^{IJ\eta}}G_{+}^{R^{IJ \eta}}\bigg{|}^{2}+\bigg{|}\sum_{R^{IJ\eta}}G_{-}^{R^{IJ\eta}}\bigg{|}^{2} \Bigg{\}}, \tag{3c}\] \[\left(1+\frac{\nu^{2}}{Q^{2}}\right)F_{2}^{\rm res}= M\nu\sum_{IJ\eta}\Bigg{\{}\bigg{|}\sum_{R^{IJ\eta}}G_{+}^{R^{IJ\eta}} \bigg{|}^{2}+\bigg{|}\sum_{R^{IJ\eta}}G_{-}^{R^{IJ\eta}}\bigg{|}^{2}+2\bigg{|} \sum_{R^{IJ\eta}}G_{0}^{R^{IJ\eta}}\bigg{|}^{2}\Bigg{\}}, \tag{3d}\]
for the spin-averaged structure functions. The outer sums in Eqs. (3) run over the possible values of the spin \(J\), isospin \(I\), and intrinsic parity \(\eta\), while the inner sums run over all resonances \(R^{IJ\eta}\) that satisfy \(J_{R}=J\), \(I_{R}=I\) and \(\eta_{R}=\eta\) for the spin, isospin and parity of the resonance \(R\). The amplitudes \(G_{+}^{R}\), \(G_{-}^{R}\), and \(G_{0}^{R}\) describe the contribution from the electroexcitation amplitudes of the resonance \(R\). They are related to the \(\gamma^{*}pN^{*}\) electrocouplings \(A_{1/2}\), \(A_{3/2}\), and \(S_{1/2}\) as detailed in Ref. [34]. The \(\gamma^{*}pN^{*}\) electrocouplings have become available from the studies of exclusive meson electroproduction data with the CLAS detector within the mass range of \(W<1.75\) GeV and for \(Q^{2}<5.0\) GeV\({}^{2}\)[29; 31; 32; 37].
## III Results
In Fig. 1, we compare the experimental results on the \(g_{1}\) structure function measured with CLAS [20] with the resonant contributions, computed as outlined in Sec. II by employing resonance electroexcitation amplitudes deduced from exclusive CLAS electroproduction data [29; 31; 32; 37]. We constrain ourselves to the range of \(W<1.8\) GeV and \(Q^{2}<5\) GeV\({}^{2}\) where the resonance electrocouplings are currently available. Both the individual resonance contributions and the coherent and incoherent sums over resonances are shown.
One can clearly see that the qualitative dips-and-peaks behavior in the \(W\) dependence of the inclusive data is accounted for by the resonant contributions, in all \(Q^{2}\) bins. The dominant contribution in the first resonance region is that of the \(\Delta(1232)\,3/2^{+}\), which in turn is driven by the \(G_{-}\) amplitude (or by the \(A_{3/2}\) electrocoupling). According to Eq. (3), the contribution from the \(G_{-}\) amplitude squared enters \(g_{1}\) with a minus sign. This explains the negative values seen both in the \(g_{1}\) data around that peak, as well as in the purely resonant contributions. For most of the remaining states, it is the \(G_{+}\) amplitude, related to the \(A_{1/2}\) resonance electrocouplings, that dominates [32; 34]. For this reason, the total resonant contributions to \(g_{1}\) display a sign flip at \(W\) values between the first and second resonance peaks, as is also observed in the \(W\)-dependence of the measured \(g_{1}\) data [20], as depicted in Fig. 1. Our analysis confirms that the resonance contributions are the drivers of this behavior.
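As a side note on the distinction between the coherent and incoherent sums shown in Fig. 1, the snippet below is a toy illustration added by us (the Breit-Wigner forms and parameter values are purely illustrative and are not those of the actual analysis); the difference between the two sums is precisely the interference term between different resonances.

```python
import numpy as np

W = np.linspace(1.1, 1.8, 400)                # invariant mass grid in GeV

def bw(mass, width):
    """Toy Breit-Wigner-like amplitude (illustrative only)."""
    return 1.0 / (mass**2 - W**2 - 1j * mass * width)

G = [bw(1.232, 0.117), bw(1.515, 0.115)]      # two illustrative resonances
coherent = np.abs(sum(G))**2                  # |G1 + G2|^2
incoherent = sum(np.abs(g)**2 for g in G)     # |G1|^2 + |G2|^2
interference = coherent - incoherent          # 2 Re(G1* G2)
```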
In Fig. 2, we show the computed resonance contributions to the virtual photon asymmetry \(A_{1}\), compared to the data measured both with the large-acceptance CLAS
detector and with other detectors of smaller acceptance in the resonance region [13; 14; 15; 16; 17; 18; 19; 20]. Since the asymmetry is defined by a cross-section ratio, the resonance structure becomes elusive.
Nevertheless, it is intriguing to find that, for \(W>1.5\) GeV, the \(W\) and \(Q^{2}\) evolution of \(A_{1}\) seen in the data is already rather well described by the inclusion of resonance contributions only. This points to a particular sensitivity of the \(A_{1}\) observable to the resonance contributions at \(W>1.5\) GeV. Such a behavior offers a hint for quark-hadron duality seen in this inclusive polarized electron scattering observable. This finding motivates the ongoing and future studies of resonance electrocouplings with the CLAS12 detector and a possible CEBAF energy increase up to 22 GeV [38], in order to scrutinize whether this behavior holds for even larger \(Q^{2}\) values and for the higher-mass states.
In addition, the studies presented here can and have been extended to \(g_{2}\) and \(A_{2}\), therefore calling for future high-acceptance measurements of these observables.
## IV Summary and Outlook
In these proceedings, we present the results on the exploration of the \(W\) and \(Q^{2}\) dependence of the coherent and incoherent sums of nucleon resonance contributions to the spin-dependent \(g_{1}\) structure function and the \(A_{1}\) virtual-photon asymmetry. These are evaluated from the experimental results on \(\gamma^{*}pN^{*}\) electrocouplings deduced from the analyses of exclusive meson electroproduction data. As input, we used the electroexcitation amplitudes extracted from CLAS data in the mass range up to \(W=1.75\) GeV [29; 31; 32; 37].
Our findings provide evidence that the sign-flip behavior in the \(g_{1}\) data is accounted for by the resonance contributions. In addition, the results point to a particular sensitivity of the \(A_{1}\) observable to the resonant contributions at \(W>1.5\) GeV. This calls for further measurements at larger values of \(Q^{2}\) and \(W\), to investigate up to which QCD scales the resonant states remain sizeable and relevant.

Figure 1: Proton \(g_{1}\) structure function data [20] (open black squares): **(a)** \(Q^{2}\approx 1.10\) GeV\({}^{2}\), **(b)** \(Q^{2}\approx 2.26\) GeV\({}^{2}\), **(c)** \(Q^{2}\approx 3.18\) GeV\({}^{2}\), **(d)** \(Q^{2}\approx 4.51\) GeV\({}^{2}\), compared to the coherent (thick blue curves) and incoherent (thin blue curves) sum of resonance contributions. The latter are computed at fixed \(Q^{2}\) corresponding to the average value of the binned data in each panel. The contributions from individual \(N^{*}\) and \(\Delta^{*}\) states are also shown separately. The uncertainties for the resonant contributions are computed by propagating the electrocoupling uncertainties via a bootstrap approach [32].
Further, the need to confirm the findings in this work for \(g_{2}\) and \(A_{2}\) gives clear motivation for future large-acceptance measurements of these observables in experiments with polarized electron beams and for both longitudinal and transverse target polarizations.
###### Acknowledgements.
We thank S. Kuhn, V. Lagerquist, W. Melnitchouk, and P. Pandey for useful discussions and providing us with the experimental data shown here. This work was supported by the U.S. Department of Energy contract DE-AC05-06OR23177, under which Jefferson Science Associates, LLC operates Jefferson Lab, and by the Deutsche Forschungsgemeinschaft (DFG) through the Research Unit FOR 2926 (project number 40824754).
|
2308.05245
|
A study of dissipative models based on Dirac matrices
|
We generalize the recent work of Shibata and Katsura, who considered a S=1/2
chain with alternating XX and YY couplings in the presence of dephasing, the
dynamics of which are described by the GKLS master equation. Their model is
equivalent to a non-Hermitian system described by the Kitaev formulation in
terms of a single Majorana species hopping on a two-leg ladder in the presence
of a nondynamical Z_2 gauge field. Our generalization involves Dirac gamma
matrix `spin' operators on the square lattice, and maps onto a non-Hermitian
square lattice bilayer which is also Kitaev-solvable. We describe the
exponentially many non-equilibrium steady states in this model. We identify how
the spin degrees of freedom can be accounted for in the 2d model in terms of
the gauge-invariant quantities and then proceed to study the Liouvillian
spectrum. We use a genetic algorithm to estimate the Liouvillian gap and the
first decay modes for large system sizes. We observe a transition in the first
decay modes, similar to that found by Shibata and Katsura. The results we
obtain are consistent with a perturbative analysis for small and large values
of the dissipation strength.
|
Jyotsna Gidugu, Daniel P. Arovas
|
2023-08-09T22:33:34Z
|
http://arxiv.org/abs/2308.05245v2
|
# A study of dissipative models based on Dirac matrices
###### Abstract
We generalize the recent work of Shibata and Katsura [1], who considered a \(S=\frac{1}{2}\) chain with alternating \(XX\) and \(YY\) couplings in the presence of dephasing, the dynamics of which are described by the GKLS master equation. Their model is equivalent to a non-Hermitian system described by the Kitaev formulation [2] in terms of a single Majorana species hopping on a two-leg ladder in the presence of a nondynamical \(\mathbb{Z}_{2}\) gauge field. Our generalization involves Dirac gamma matrix'spin' operators on the square lattice, and maps onto a non-Hermitian square lattice bilayer which is also Kitaev-solvable. We describe the exponentially many non-equilibrium steady states in this model. We identify how the spin degrees of freedom can be accounted for in the 2d model in terms of the gauge-invariant quantities and then proceed to study the Liouvillian spectrum. We use a genetic algorithm to estimate the Liouvillian gap and the first decay modes for large system sizes. We observe a transition in the first decay modes, similar to that in ref. [1]. The results we obtain are consistent with a perturbative analysis for small and large values of the dissipation strength.
## I Introduction
Open quantum systems afford us the opportunity to study phenomena such as relaxational quantum dynamics for systems coupled to a bath [3]. Typically this involves 'integrating out' or eliminating in some way the bath degrees of freedom, resulting in a dynamics for the system itself in terms of its reduced density matrix: \(\hat{\varrho}=\mathcal{L}\varrho\), where \(\mathcal{L}\) is the Liouvillian operator. At long times, the system relaxes to a non-equilibrium steady state (NESS); the existence of a NESS is guaranteed by the dynamics, but under special circumstances owing to, for example, extra conserved quantities, the NESS may not be unique.
For noninteracting systems, hybridization with the bath degrees of freedom still results in a solvable (quadratic) model [4]. For interacting systems, solvable models are rare, and numerical approaches are challenging. This is especially true for density matrix evolution since one must keep track of not just populations \(\ket{\alpha}\bra{\alpha}\) but also the coherences \(\ket{\alpha}\bra{\beta}\) with \(\alpha\neq\beta\), effectively squaring the size of the problem _vis-a-vis_ the system's Hilbert space dimension.
Recently, Shibata and Katsura [1] (SK) described a model of open system dynamics based on the GKLS master equation [5] which, though interacting, is solvable in the sense of Kitaev's celebrated honeycomb lattice Hamiltonian model [2]. That is to say, the evolution of \(\varrho(t)\) under the Liouvillian \(\mathcal{L}\) is effectively described by a non-interacting dynamics in the presence of a static \(\mathbb{Z}_{2}\) gauge field. While in each gauge sector the evolution is described by a quadratic, albeit non-Hermitian, Hamiltonian, there are exponentially many gauge sectors to evaluate (which in general have no discrete space group symmetries), and in this sense the general problem is intractable. For the Hermitian Kitaev model, oftentimes the ground state may be ascertained with help from a remarkable theorem by Lieb [6], which provides valuable information regarding much of the gauge-invariant content of the ground state, _i.e._ the \(\mathbb{Z}_{2}\) plaquette fluxes. For the non-Hermitian case, however, we know of no generalization of Lieb's theorem which constrains the gauge-invariant content of, say, the longest-lived decaying density matrix. Thus, in general one must resort to numerics if one is interested in the complex spectrum of \(\mathcal{L}\).
The Shibata-Katsura construction involves an \(S=\frac{1}{2}\) chain where each site is coupled to an environmental bath. Within the GKLS formalism, this results in an effective two leg ladder system, where one leg corresponds to the bra states and the other to the ket states of the density matrix, and the rungs of the ladder contribute non-Hermitian terms which result from the effective elimination of the bath degrees of freedom. The ladder is thus three-fold coordinated, and the model is constructed so that it satisfies the Kitaev solvability criteria (§II.3). Our main goal is to introduce and analyze a generalization of the SK model to two space dimensions, based on a \(4\times 4\) gamma matrix generalization of the Hamiltonian Kitaev model [7; 8]. As the dissipative SK model is described by non-Hermitian Hamiltonian evolution on the ladder, our model is described by such an evolution on a square lattice bilayer. As we shall see, while our model is a direct analog of SK in some sense, it also entails some important differences - in particular, an extensive number of conserved quantities leading to exponentially many NESSes.
We first discuss various preliminaries, including the GLKS master equation, its vectorization and description in terms of non-Hermitian Hamiltonian evolution on a product Hilbert space, the Shibata-Katsura model, gamma matrix generalizations of the Kitaev honeycomb model, and finally our extension of SK to a dissipative square lattice model involving \(4\times 4\) Dirac matrix'spin' operators.
Note: While this paper was in the final stages of preparation, two analyses of a largely equivalent model appeared on the arXiv [9; 10].
## II Preliminaries
### The GKLS master equation
An open quantum system \(\mathsf{S}\) is one which unitarily co-evolves with an environment \(\mathsf{E}\) under a Hamiltonian \(H=H_{\mathsf{S}}+H_{\mathsf{E}}+H_{\mathsf{int}}\), where \(H_{\mathsf{int}}\) couples \(\mathsf{S}\) and \(\mathsf{E}\). The expectation of any operator \(\mathcal{O}\) restricted to \(\mathsf{S}\) is given by \(\langle\mathcal{O}(t)\rangle=\mathsf{Tr}\left(\varrho_{\mathsf{S}}(t)\, \mathcal{O}\right)\), where \(\varrho_{\mathsf{S}}(t)\) is the time-dependent reduced density matrix of \(\mathsf{S}\), _i.e._\(\varrho_{\mathsf{S}}(t)=\mathsf{Tr}_{\mathsf{E}}\,\,\varrho_{\mathsf{U}}(t)\), where \(\varrho_{\mathsf{U}}(t)\) is the full density matrix describing the 'universe' \(\mathsf{U}=\mathsf{S}\cup\mathsf{E}\). Under certain assumptions, the dynamics of the system's reduced density matrix is described by the GKLS master equation [3; 5],
\[\frac{d\varrho}{dt}=-i\big{[}H,\varrho\big{]}+\sum_{a}\Bigl{(}L_{a}\,\varrho \,L_{a}^{\dagger}-\tfrac{1}{2}L_{a}^{\dagger}L_{a}\,\varrho-\tfrac{1}{2} \varrho\,L_{a}^{\dagger}L_{a}\Bigr{)}\quad, \tag{1}\]
Here and henceforth we drop the subscript \(\mathsf{S}\) on \(\varrho_{\mathsf{S}}\). The \(\{L_{a}\}\) are the Lindblad jump operators, which describe the effects of the system-environment coupling on \(\varrho\) after the environment is traced out. \(H\) is the 'Lamb shift Hamiltonian', which commutes with \(H_{\mathsf{S}}\) and includes renormalizations of the system's unperturbed energy levels resulting from the environmental couplings. In the absence of all such couplings, we recover the usual Liouville evolution \(\dot{\varrho}=-i[H_{\mathsf{S}},\varrho]\).
The full GKLS evolution in eqn. 1 is of the form \(\dot{\varrho}=\mathcal{L}\varrho\). Assuming \(\mathcal{L}\) is time-independent, one may formally write \(\varrho(t)=\exp(\mathcal{L}t)\varrho(0)\), which defines for each \(t\) a map \(\Phi_{t}\colon\varrho(0)\mapsto\varrho(t)\) which possesses the following salient properties: (i) linearity, (ii) trace-preserving, (iii) Hermiticity preserving, and (iv) complete positivity [3]. Writing \(\varrho(t)=\sum_{j,k}\varrho_{jk}(t)\,|j\rangle\langle k|\) in terms of basis states, we may write \(\dot{\varrho}_{jk}=\mathcal{L}_{jk,lm}\,\varrho_{lm}\), where \(\mathcal{L}_{jk,lm}\) is a super-matrix of dimension \(N^{2}\), where \(N\) is the dimension of the basis and \((jk)/(lm)\) are composite indices. Generically \(\mathcal{L}\) is not a normal matrix, _i.e._\([\mathcal{L},\mathcal{L}^{\dagger}]\neq 0\), and its eigenvalues \(\Lambda_{a}\) may be complex. However, since the evolution is trace-preserving, one has that \(\delta_{jk}\) is a left-eigenvector of \(\mathcal{L}\) with eigenvalue zero. The corresponding right eigenvector is the NESS, \(\varrho_{lm}^{\text{NESS}}\). Under special circumstances there may be more than one NESS [11]. Positivity entails that \(\mathsf{Re}\,\Lambda_{a}\leq 0\) for each eigenvalue of the Liouvillian \(\mathcal{L}\).
When each jump operator is Hermitian, then from eqn. 1 we have that the infinite temperature state \(\varrho\propto\mathds{1}\) is a valid NESS. Furthermore, if \(H\) as well as all the jump operators commute with a set of independent projectors \(\{\mathsf{P}_{s}\}\) with \(s\in\{1,\ldots,K\}\), then any density matrix of the form
\[\varrho=c_{0}\,\mathds{1}+\sum_{s=1}^{K}c_{s}\,\mathsf{P}_{s} \tag{2}\]
is also a valid NESS. This shall be the case for the model we investigate below. Thus we shall describe a system where there is relaxation to a degenerate block of NESSes. While such solutions to GKLS depend on the form of \(H\) and the jump operators \(\{L_{a}\}\), they are independent of the various coupling constants (so long as they remain finite), and we shall consider them all to be infinite temperature states.
### Equivalent non-Hermitian Hamiltonian
Any density matrix \(\varrho=\sum_{m,n}\varrho_{mn}\,|\,m\,\rangle\,\langle\,n\,|\) may be represented in vector form as
\[\varrho\longrightarrow|\,\varrho\,\rangle\equiv\sum_{m,n}\varrho_{mn}\,|\,m \,\rangle\otimes|\,n\,\rangle\quad. \tag{3}\]
Thus, the bra vector \(\langle\,n\,|\) is replaced by the corresponding ket vector \(|\,n\,\rangle\), _i.e._\(|\,m\,\rangle\langle\,n\,|\rightarrow|\,m\,\rangle\otimes|\,n\,\rangle\). If \(B\) is any operator, then under vectorization we have
\[\begin{split}\langle\,n\,|\,B&=\sum_{k}\,\langle\,n \,|\,B\,|\,k\,\rangle\langle\,k\,|\\ &\longrightarrow\sum_{k}|\,k\,\rangle\langle\,k\,|\,B^{\mathsf{ T}}\,|\,n\,\rangle=B^{\mathsf{ T}}\,|\,n\,\rangle\quad.\end{split} \tag{4}\]
The GKLS master equation eqn. 1 then takes the vectorized form
\[i\,\frac{d}{dt}\,|\,\varrho\,\rangle=\mathcal{W}\,|\,\varrho\,\rangle\quad, \tag{5}\]
where [12]
\[\mathcal{W} =H\otimes\mathds{1}-\mathds{1}\otimes H^{\mathsf{T}}+ \tag{6}\] \[\quad i\sum_{r}\Bigl{(}L_{r}\otimes L_{r}^{*}-\tfrac{1}{2}\,L_{r}^ {\dagger}\,L_{r}\otimes\mathds{1}-\mathds{1}\otimes\tfrac{1}{2}\,L_{r}^{ \mathsf{T}}\,L_{r}^{*}\Bigr{)}\quad.\]
Note that operators \(\mathcal{O}\) acting on the \(|\,n\,\rangle\) component of the product \(|\,m\,\rangle\otimes|\,n\,\rangle\) appear as transposes \(\mathcal{O}^{\mathsf{T}}\), since they would normally act to the left on \(\langle\,n\,|\). Eqn. 5 takes the form of an effective Schrodinger equation, with \(|\,\varrho(t)\,\rangle\) evolving according to the non-Hermitian effective Hamiltonian \(\mathcal{W}\) acting on a doubled Hilbert space. For any operator \(\mathcal{O}\), we may compute the trace in the vectorized representation according to
\[\mathsf{Tr}(\mathcal{O}\varrho)=\langle\,\mathds{1}\,|\,\mathcal{O}\otimes \mathds{1}\,|\,\varrho\,\rangle\quad, \tag{7}\]
where \(\langle\,\mathds{1}\,|=\sum_{n}\,\langle\,n\,|\otimes\langle\,n\,|\). The eigenvalues of \(\mathcal{W}\), which we denote by \(\{E_{a}\}\), are related to those of the Liouvillian by \(E_{a}=i\Lambda_{a}\).
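As a concrete illustration of eqns. 5-7, the following numpy sketch (added by us; the two-site Hamiltonian and dephasing jump operators are a toy choice, not the models studied below) assembles \(\mathcal{W}\) explicitly and checks that all Liouvillian eigenvalues have nonpositive real part and that the identity is a steady state when the jump operators are Hermitian.

```python
import numpy as np

# Pauli matrices
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0, -1.0]).astype(complex)

def kron(*ops):
    out = np.array([[1.0 + 0j]])
    for op in ops:
        out = np.kron(out, op)
    return out

def liouvillian_W(H, jumps):
    """Vectorized generator W of eqn. (6): i d|rho>/dt = W |rho>."""
    d = H.shape[0]
    Id = np.eye(d)
    W = np.kron(H, Id) - np.kron(Id, H.T)
    for L in jumps:
        W += 1j * (np.kron(L, L.conj())
                   - 0.5 * np.kron(L.conj().T @ L, Id)
                   - 0.5 * np.kron(Id, L.T @ L.conj()))
    return W

# toy example: two-site XX Hamiltonian with dephasing jumps sqrt(gamma)*Z_j
Jx, gamma = 1.0, 0.5
H = Jx * kron(X, X)
jumps = [np.sqrt(gamma) * kron(Z, I2), np.sqrt(gamma) * kron(I2, Z)]
W = liouvillian_W(H, jumps)

# Liouvillian eigenvalues are Lambda = -i E; all real parts must be <= 0
E = np.linalg.eigvals(W)
assert np.all((-1j * E).real <= 1e-10)
# the (vectorized) identity is a steady state when all jump operators are Hermitian
vec_id = np.eye(4, dtype=complex).reshape(-1)
assert np.allclose(W @ vec_id, 0)
```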
### Shibata-Katsura Model
The Shibata-Katsura (SK) model [1] describes a dissipative \(S=\tfrac{1}{2}\) chain. The Hamiltonian is
\[H=\sum_{n}\left(J_{x}\,X_{2n-1}X_{2n}+J_{y}\,Y_{2n}Y_{2n+1}\right) \tag{8}\]
and the jump operators are \(L_{n}=\sqrt{\gamma}\,Z_{n}\,\), with \(\gamma>0\). Thus, we have
\[\mathcal{W}(\gamma)=\sum_{n=1}^{N_{c}}\Big{(}J_{x}\,X_{2n-1}X_{2n}+J _{y}\,Y_{2n}Y_{2n+1}-J_{x}\,\widetilde{X}_{2n-1}\widetilde{X}_{2n}\\ -J_{y}\,\widetilde{Y}_{2n}\widetilde{Y}_{2n+1}\Big{)}+i\gamma\sum _{j=1}^{N}\Big{(}Z_{j}\widetilde{Z}_{j}-1\Big{)}\quad, \tag{9}\]
where the \((X,Y,Z)\) operators act on the first Hilbert space and \((\widetilde{X},\widetilde{Y},\widetilde{Z})\) act on the copy. The system is depicted in Fig. 1 and corresponds to a non-Hermitian two-leg ladder. \(N_{c}\) is the number of unit cells, and there are \(N=2N_{c}\) sites on each leg of the ladder. Note that \(\mathcal{W}^{*}(\gamma)=\mathcal{W}(-\gamma)\), and that if we define \(R\) as the reflection operator mapping one leg into the other, _i.e._\((X_{j},Y_{j},Z_{j})\leftrightarrow(\widetilde{X}_{j},\widetilde{Y}_{j}, \widetilde{Z}_{j})\) for all \(j\), then
\[R\,\mathcal{W}(\gamma)R=-\mathcal{W}(-\gamma)=-\mathcal{W}^{*}(\gamma)\quad. \tag{10}\]
This establishes that the eigenvalues of \(\mathcal{W}(\gamma)\) come in pairs \(\Lambda^{\pm}_{a}=\pm E_{a}+i\Gamma_{a}\,\). Total positivity requires that \(\Gamma_{a}\leq 0\). Any NESS satisfies \(\mathcal{W}(\gamma)\,|\,\varrho_{\text{NESS}}\,\rangle=0\).
Introducing on each site four Majorana fermions \(\theta^{0,1,2,3}\) and expressing the Pauli matrices therefrom,
\[X_{j}=i\theta_{j}^{0}\theta_{j}^{1}\quad,\quad Y_{j}=i\theta_{j}^{0}\theta_{j }^{2}\quad,\quad Z_{j}=i\theta_{j}^{0}\theta_{j}^{3}\quad, \tag{11}\]
with corresponding expression for \((\widetilde{X}_{j},\widetilde{Y}_{j},\widetilde{Z}_{j})\), one may express \(\mathcal{W}(\gamma)\) as
\[\mathcal{W}(\gamma)=\sum_{n=1}^{N_{c}}\Big{\{}iJ_{x}\big{[}\mu_{2n-1}^{x}\theta_{2n-1}^{0}\theta_{2n}^{0}-\tilde{\mu}_{2n-1}^{x}\tilde{\theta}_{2n-1}^{0}\tilde{\theta}_{2n}^{0}\big{]}+iJ_{y}\big{[}\mu_{2n}^{y}\theta_{2n}^{0}\theta_{2n+1}^{0}-\tilde{\mu}_{2n}^{y}\tilde{\theta}_{2n}^{0}\tilde{\theta}_{2n+1}^{0}\big{]}\Big{\}}-\gamma\sum_{j=1}^{N}\mu_{j}^{z}\,\theta_{j}^{0}\,\tilde{\theta}_{j}^{0}-i\gamma N \tag{12}\]
where
\[\mu_{2n-1}^{x}=-i\theta_{2n-1}^{1}\theta_{2n}^{1}\ \,\ \ \tilde{\mu}_{2n-1}^{x}=i\tilde{ \theta}_{2n-1}^{1}\tilde{\theta}_{2n}^{1} \tag{13}\] \[\mu_{2n}^{y}=-i\theta_{2n}^{2}\theta_{2n+1}^{2}\ \,\ \ \tilde{\mu}_{2n}^{y}=-i\tilde{ \theta}_{2n}^{2}\tilde{\theta}_{2n+1}^{2}\ \,\ \ \mu_{j}^{z}=-i\theta_{j}^{3}\tilde{\theta}_{j}^{3}\]
are \(\mathbb{Z}_{2}\) gauge fields on the links of the two leg ladder in fig. 1. These gauge fields commute with each other and with the \(\theta^{0}\) hopping terms, as well as with the constraints
\[\Lambda_{j}\equiv\theta_{j}^{0}\theta_{j}^{1}\theta_{j}^{2}\theta_{j}^{3}=+1 \quad,\quad\widetilde{\Lambda}_{j}\equiv\tilde{\theta}_{j}^{0}\tilde{\theta}_{ j}^{1}\tilde{\theta}_{j}^{2}\tilde{\theta}_{j}^{3}=+1\quad. \tag{14}\]
which must be imposed at each site in order to guarantee \(XY=iZ\). This is the magic of the Kitaev honeycomb lattice model, where the link lattice is also tripartite: the Hamiltonian corresponds to a single species (\(\theta^{0}\)) of Majorana fermion hopping in the presence of a nondynamical \(\mathbb{Z}_{2}\) gauge field. The gauge-invariant content of the theory is contained in the plaquette fluxes \(\Phi_{2n-1}=\mu_{2n-1}^{x}\mu_{2n}^{z}\tilde{\mu}_{2n-1}^{x}\mu_{2n-1}^{z}\) and \(\Phi_{2n}=\mu_{2n}^{y}\mu_{2n+1}^{z}\tilde{\mu}_{2n}^{y}\mu_{2n}^{z}\) and in the Wilson phases \(Q=\prod_{j=1}^{N}Z_{j}\) and \(\tilde{Q}=\prod_{j=1}^{N}\widetilde{Z}_{j}\). With periodic boundary conditions, \(Q\tilde{Q}=\prod_{j=1}^{N}\Phi_{j}\).
## III Dirac matrix SK model
### Gamma matrix Kitaev models
A Clifford algebra is defined by the anticommutation relations,
\[\big{\{}\Gamma^{a}\,,\,\Gamma^{b}\big{\}}=2\delta^{ab}\qquad a,b\in\{1,\ldots,n\}\quad. \tag{15}\]
When \(n=2k\), a representation of the algebra can be constructed by tensor products of \(k\) Pauli matrices, _viz._
\[\Gamma^{1} =X\otimes 1\otimes\cdots\otimes 1\quad\quad\Gamma^{2k-1}=Z\otimes Z \otimes\cdots\otimes X \tag{16}\] \[\Gamma^{2} =Y\otimes 1\otimes\cdots\otimes 1\quad\quad\Gamma^{2k}=Z\otimes Z \otimes\cdots\otimes Y\] \[\Gamma^{3} =Z\otimes X\otimes\cdots\otimes 1\quad\Gamma^{2k+1}=Z\otimes Z \otimes\cdots\otimes Z\]
The gamma matrices defined above are all Hermitian. In even dimensions, we define
\[\Gamma^{2k+1}=(-i)^{k}\,\Gamma^{1}\,\Gamma^{2}\cdots\Gamma^{2k}\quad. \tag{17}\]
Introducing \(2k+2\) Majorana fermions \(\theta^{a}\) with indices \(a\in\{0,\ldots,2k+1\}\) satisfying \(\big{\{}\theta^{a},\theta^{b}\big{\}}=2\delta^{ab}\), we define \(\Gamma^{\mu}=i\theta^{0}\theta^{\mu}\) with \(\mu>0\). Analogous to the constraint \(\theta^{0}\theta^{1}\theta^{2}\theta^{3}=1\) when \(k=1\), we demand
\[\theta^{0}\,\theta^{1}\cdots\theta^{2k+1}=i^{k-1}\quad. \tag{18}\]
The case \(k=1\) yields the \(2\times 2\) Pauli matrices, with \(\Gamma^{1}=X\), \(\Gamma^{2}=Y\), and \(\Gamma^{3}=-i\,\Gamma^{1}\Gamma^{2}=Z\). The case \(k=2\) yields the \(4\times 4\) Dirac matrices, with \(\Gamma^{5}=-\Gamma^{1}\Gamma^{2}\Gamma^{3}\Gamma^{4}\). For general \(k\) this yields \(2k+1\) matrices of rank \(2^{k}\). One can then form \(\Gamma^{\mu\nu}=i\Gamma^{\mu}\Gamma^{\nu}=i\theta^{\mu}\theta^{\nu}\) of which there are \(\binom{2k+1}{2}\) independent representatives (take \(\mu<\nu\)), and next \(\Gamma^{\mu\nu\rho}=-i\Gamma^{\mu}\Gamma^{\nu}\Gamma^{\rho}=\theta^{0}\theta^{\mu}\theta^{\nu}\theta^{\rho}\) and \(\Gamma^{\mu}\Gamma^{\nu}\Gamma^{\rho}\Gamma^{\sigma}=\theta^{\mu}\theta^{\nu}\theta^{\rho}\theta^{\sigma}\), which yield \(\binom{2k+1}{3}\) and \(\binom{2k+1}{4}\) independent terms, respectively, _etc._ Proceeding thus, one obtains at level \(k\) a basis of \(4^{k}\) Hermitian matrices of rank \(2^{k}\).
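The construction of eqns. 15-17 is easy to verify numerically; the following short numpy check (a sketch we add for concreteness, not from the paper) builds the \(k=2\) Dirac matrices and confirms the Clifford relations together with the identity \(\Gamma^{5}=-\Gamma^{1}\Gamma^{2}\Gamma^{3}\Gamma^{4}\).

```python
import numpy as np
from functools import reduce

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1, -1]).astype(complex)

def gammas(k):
    """Rank-2^k Hermitian matrices Gamma^1..Gamma^{2k+1} of eqns. (16)-(17)."""
    def chain(j, tail):
        # j factors of Z, then `tail`, padded with identities to length k
        ops = [Z] * j + [tail] + [I2] * (k - j - 1)
        return reduce(np.kron, ops)
    G = []
    for j in range(k):
        G.append(chain(j, X))   # Gamma^{2j+1}
        G.append(chain(j, Y))   # Gamma^{2j+2}
    # Gamma^{2k+1} = (-i)^k Gamma^1 ... Gamma^{2k}
    G.append((-1j) ** k * reduce(np.matmul, G))
    return G

G = gammas(2)          # the five 4x4 Dirac matrices
for a in range(5):
    for b in range(5):
        anti = G[a] @ G[b] + G[b] @ G[a]
        assert np.allclose(anti, 2 * (a == b) * np.eye(4))
# Gamma^5 = -Gamma^1 Gamma^2 Gamma^3 Gamma^4, as stated in the text
assert np.allclose(G[4], -G[0] @ G[1] @ G[2] @ G[3])
```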
Analogs of Kitaev's honeycomb lattice model using these higher level Clifford algebras have been considered, _inter alia_ in refs. [7; 8], with interactions \(\Gamma_{i}^{\mu}\Gamma_{j}^{\mu}\) along the links. When the underlying lattice is such that each site lies at the confluence of \(2k+1\) distinctly labeled \(\mu\)-links, the'spin' Hamiltonian is again expressible as a single species (\(\theta^{0}\)) Majorana fermion hopping in the presence of a static \(\mathbb{Z}_{2}\) gauge field. Other generalizations, in which multiple species of Majoranas hop in the same \(\mathbb{Z}_{2}\) static gauge field and hybridize as well have also been constructed [7; 13].
Figure 1: The Shibata-Katsura ladder (see text for description).
### Dirac matrix SK model
We generalize the SK model to a dissipative \(\Gamma\)-matrix model defined on the square lattice, as depicted in fig. 2. We regard the square lattice as bipartite, with elementary direct lattice vectors \(\mathbf{a}_{1,2}=\hat{\mathbf{x}}\pm\hat{\mathbf{y}}\). Our Hamiltonian is
\[\begin{split} H=\sum_{\mathbf{R}}\Big{(}J_{1}\,\Gamma^{1}_{\mathbf{R}}\, \Gamma^{1}_{\mathbf{R}+\hat{\mathbf{x}}}+J_{2}\,\Gamma^{2}_{\mathbf{R}}\,\Gamma^{2}_{\mathbf{R} +\hat{\mathbf{y}}}\\ \qquad\qquad\qquad\qquad+J_{3}\,\Gamma^{3}_{\mathbf{R}}\,\Gamma^{3}_{ \mathbf{R}-\hat{\mathbf{x}}}+J_{4}\,\Gamma^{4}_{\mathbf{R}}\,\Gamma^{4}_{\mathbf{R}-\hat{\mathbf{y }}}\Big{)}\quad,\end{split} \tag{19}\]
where \(\mathbf{R}=n_{1}\mathbf{a}_{1}+n_{2}\mathbf{a}_{2}\) with \(n_{1,2}\in\mathbb{Z}\) are the A sublattice sites, which are \(N_{\rm c}\) in number. We use the symbol \(\mathbf{r}\) to denote a site which may be in either sublattice. Thus, on each site of the square lattice, a four-dimensional Hilbert space is acted upon by operators \(1_{\mathbf{r}}\), \(\Gamma^{\mu}_{\mathbf{r}}\), and \(\Gamma^{\mu\nu}_{\mathbf{r}}\), where \(\Gamma^{\mu}\) are \(4\times 4\) Dirac matrices, with \(\mu\in\{1,\ldots,5\}\).
Following SK, we take the Lindblad jump operators to be \(L_{\mathbf{r}}=\sqrt{\gamma}\,\Gamma^{5}_{\mathbf{r}}\) at each site. The GKLS master equation can then be written as a non-Hermitian Hamiltonian evolution of a model on a square lattice bilayer, with each layer corresponding to one copy of the Hilbert space. This Hamiltonian is
\[\mathcal{W}\big{(}\{J_{\delta}\},\gamma\big{)}=\sum_{\mathbf{R}\in \mathsf{A}}\sum_{\delta=1}^{4}J_{\delta}\Big{(}iu^{\delta}_{\mathbf{R}}\,\theta^{ 0}_{\mathbf{R}}\,\theta^{0}_{\mathbf{R}+\mathbf{\delta}}-i\hat{u}^{\delta}_{\mathbf{R}}\, \tilde{\theta}^{0}_{\mathbf{R}}\,\tilde{\theta}^{0}_{\mathbf{R}+\mathbf{\delta}}\Big{)} \tag{20}\] \[\qquad\qquad\qquad\qquad-\gamma\sum_{\mathbf{r}\in\mathsf{A},\mathsf{ B}}u^{5}_{\mathbf{r}}\,\theta^{0}_{\mathbf{r}}\,\tilde{\theta}^{0}_{\mathbf{r}}-2i \gamma N_{\rm c}\;\;,\]
where \(\mathbf{\delta}\in\{\hat{\mathbf{x}},\hat{\mathbf{y}},-\hat{\mathbf{x}},-\hat{\mathbf{y}}\}\) for \(\delta\in\{1,2,3,4\}\), respectively, and where (see fig. 3)
\[u^{\delta}_{\mathbf{R}}=-i\theta^{\delta}_{\mathbf{R}}\theta^{\delta}_{\mathbf{R}+\mathbf{ \delta}}\quad,\quad\tilde{u}^{\delta}_{\mathbf{R}}=-i\tilde{\theta}^{\delta}_{\bm {R}}\tilde{\theta}^{\delta}_{\mathbf{R}+\mathbf{\delta}}\quad,\quad u^{5}_{\mathbf{r}}=-i \theta^{5}_{\mathbf{r}}\tilde{\theta}^{5}_{\mathbf{r}} \tag{21}\]
are the nondynamical gauge fields in the bottom, top, and between layer regions. There are \(5N\) such gauge fields, but as we shall see the number of gauge-invariant quantities is \(3N+1\), _i.e._ there are \(2^{3N+1}\) gauge sectors, where \(N\) is the total number of sites in either layer.
#### iii.2.1 Conserved quantities
For the original SK model, the product \(Q=Z_{1}\cdots Z_{N}\) is conserved as it commutes with \(H\) and with each of the jump operators \(\sqrt{\gamma}\,Z_{j}\). This means that both \(1\) and \(Q\) are annihilated by the Liouvillian \(\mathcal{L}\), and that both
\[\varrho_{\pm}=2^{-N}\big{(}1\pm Q\big{)} \tag{22}\]
are thus valid NESSes, for all \(\gamma\)[1].
For our model of eqn. 20, there are vastly more conserved quantities. With periodic boundary conditions along both axes, there are \(N+1\) gauge-invariant quantities, which are the \(N\) plaquette fluxes (see fig. 2),
\[\Phi_{\mathbf{r}}\equiv\begin{cases}-\Gamma^{21}_{\mathbf{r}}\,\Gamma^{14}_{\mathbf{r}+\hat{\mathbf{x}}}\,\Gamma^{43}_{\mathbf{r}+\hat{\mathbf{x}}+\hat{\mathbf{y}}}\,\Gamma^{32}_{\mathbf{r}+\hat{\mathbf{y}}}\quad\text{if}\,\,\mathbf{r}\in\mathsf{A}\\ -\Gamma^{43}_{\mathbf{r}}\,\Gamma^{32}_{\mathbf{r}+\hat{\mathbf{x}}}\,\Gamma^{21}_{\mathbf{r}+\hat{\mathbf{x}}+\hat{\mathbf{y}}}\,\Gamma^{14}_{\mathbf{r}+\hat{\mathbf{y}}}\quad\text{if}\,\,\mathbf{r}\in\mathsf{B}\end{cases}\;, \tag{23}\]
where the \(\mathbb{Z}_{2}\) flux in plaquette \(\mathbf{r}\) is labeled by the lower left site of the plaquette [14]. Note that the product \(\prod_{\mathbf{r}}\Phi_{\mathbf{r}}=1\), hence there are \(N-1\) independent \(\mathbb{Z}_{2}\) plaquette fluxes. In addition, we have the two Wilson phases,
\[\begin{split} W_{x}&=-\Gamma^{13}_{1,1}\,\Gamma^{31}_{2,1} \,\cdots\,\Gamma^{13}_{N_{x}-1,1}\,\Gamma^{31}_{N_{x},1}\\ W_{y}&=-\Gamma^{24}_{1,1}\,\Gamma^{42}_{1,2}\,\cdots\,\Gamma^{24}_{1,N _{y}-1}\,\Gamma^{42}_{1,N_{y}}\quad,\end{split} \tag{24}\]
where both \(N_{x}\) and \(N_{y}\) are taken to be even, and with the total number of sites \(N\equiv N_{x}N_{y}\). (Note that \(\Gamma^{31}=-\Gamma^{13}\) and \(\Gamma^{42}=-\Gamma^{24}\); we choose to write the Wilson phases as above because the repetition of consecutive \(\Gamma\)-matrix indices is a useful mnemonic.) One can readily check that \(\Phi_{\mathbf{r}}\) commutes with both \(H\) and with all the jump operators. In addition, the operator \(Q=\prod_{\mathbf{r}}\Gamma^{5}_{\mathbf{r}}\) also commutes with the Hamiltonian and with all of the jump operators. However, if we examine the product of the \(\mathbb{Z}_{2}\) fluxes over the A plaquettes alone, _i.e._ over those plaquettes with an A site in their lower left corner, then from \(\Gamma^{43}\Gamma^{21}=-\Gamma^{1}\Gamma^{2}\Gamma^{3}\Gamma^{4}=\Gamma^{5}\), we conclude that \(\prod_{\mathbf{R}\in\mathsf{A}}\Phi_{\mathbf{R}}=\prod_{\mathbf{r}}\Gamma^{5}_{\mathbf{r}}=Q\), and therefore \(Q\) is not an independent conserved quantity. Finally, as the jump operators are all Hermitian, according to eqn. 2 we have a \(2^{N+1}\)-dimensional subspace of \(T=\infty\) nonequilibrium steady states, since there are \(2^{N+1}\) projectors,
\[\Pi_{\eta_{x},\eta_{y},\{\eta_{\mathbf{r}}\}}\equiv\bigg{(}\frac{1+\eta_{x}W_{x}}{2}\bigg{)}\bigg{(}\frac{1+\eta_{y}W_{y}}{2}\bigg{)}\times{\prod_{\mathbf{r}}}^{\prime}\bigg{(}\frac{1+\eta_{\mathbf{r}}\Phi_{\mathbf{r}}}{2}\bigg{)}\quad, \tag{25}\]
where \(\eta_{x},\eta_{y},\eta_{\mathbf{r}}\in\{\pm 1\}\); each projector commutes with \(H\) and with all of the jump operators
\(L_{\mathbf{r}}\). The prime on the product indicates that the final plaquette with \(\mathbf{r}=(N_{x},N_{y})\) is omitted. The total number of unnormalized density matrices is \((4^{2})^{N}=16^{N}\). _I.e._ any density matrix of the form
\[\varrho=\sum_{\eta_{x}}\sum_{\eta_{y}}\sum_{\{\eta_{\mathbf{r}}\}}C_{\eta_{x},\eta_{y},\{\eta_{\mathbf{r}}\}}\Pi_{\eta_{x},\eta_{y},\{\eta_{\mathbf{r}}\}} \tag{26}\]
with \(\mathsf{Tr}\,\varrho=\sum_{\eta_{x}}\sum_{\eta_{y}}\sum_{\{\mathbf{\eta}_{\mathbf{r}} \}}C_{\eta_{x},\eta_{y},\{\eta_{\mathbf{r}}\}}=1\) and each \(C_{\eta_{x},\eta_{y},\{\eta_{\mathbf{r}}\}}\geq 0\) is a valid NESS.
### Analysis
We define a complex fermion living along each link between planes of the bilayer, _viz._
\[c_{\mathbf{r}}=\tfrac{1}{2}(\theta^{0}_{\mathbf{r}}+i\tilde{\theta}^{0}_{\mathbf{r}})\qquad,\qquad c_{\mathbf{r}}^{\dagger}=\tfrac{1}{2}(\theta^{0}_{\mathbf{r}}-i\tilde{\theta}^{0}_{\mathbf{r}})\quad, \tag{27}\]
and thus
\[\theta^{0}_{\mathbf{r}}=c_{\mathbf{r}}^{\dagger}+c_{\mathbf{r}}\qquad,\qquad\tilde{\theta }^{0}_{\mathbf{r}}=i(c_{\mathbf{r}}^{\dagger}-c_{\mathbf{r}})\quad. \tag{28}\]
The non-Hermitian Hamiltonian of eqn. 20 is then expressed in terms of these complex fermions as
\[\begin{split}\mathcal{W}=\sum_{\mathbf{R}\in\mathcal{A}}\sum_{\delta =1}^{4}\Big{\{}iJ_{\delta}\big{(}u^{\delta}_{\mathbf{R}}-\tilde{u}^{\delta}_{\bm {R}}\big{)}\big{(}c_{\mathbf{R}}^{\dagger}c_{\mathbf{R}+\mathbf{\delta}}+c_{\mathbf{R}+\mathbf{ \delta}}^{\dagger}c_{\mathbf{R}}\big{)}\\ \qquad\qquad+iJ_{\delta}\big{(}u^{\delta}_{\mathbf{R}}+\tilde{u}^{ \delta}_{\mathbf{R}}\big{)}\big{(}c_{\mathbf{R}}^{\dagger}c_{\mathbf{R}+\mathbf{\delta}}^{ \dagger}-c_{\mathbf{R}+\mathbf{\delta}}c_{\mathbf{R}}\big{)}\Big{\}}\\ \qquad\qquad\qquad+i\gamma\sum_{\mathbf{r}\in\mathsf{A},\mathsf{B}}u^{ \mathrm{5}}_{\mathbf{r}}\big{(}2c_{\mathbf{r}}^{\dagger}c_{\mathbf{r}}-1\big{)}-2iN_{ \mathrm{c}}\gamma\quad.\end{split} \tag{29}\]
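As a one-line check of the interlayer term (a small intermediate step we add for clarity), eqns. 27-28 give
\[\theta^{0}_{\mathbf{r}}\,\tilde{\theta}^{0}_{\mathbf{r}}=(c^{\dagger}_{\mathbf{r}}+c_{\mathbf{r}})\,i\,(c^{\dagger}_{\mathbf{r}}-c_{\mathbf{r}})=i\big(1-2c^{\dagger}_{\mathbf{r}}c_{\mathbf{r}}\big)\quad,\]
so that \(-\gamma\,u^{5}_{\mathbf{r}}\,\theta^{0}_{\mathbf{r}}\tilde{\theta}^{0}_{\mathbf{r}}=i\gamma\,u^{5}_{\mathbf{r}}\big(2c^{\dagger}_{\mathbf{r}}c_{\mathbf{r}}-1\big)\), which reproduces the last line of eqn. 29.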
### Counting degrees of freedom
Associated with each \(\mathsf{A}\) sublattice site in the bottom layer are eight square plaquette \(\mathbb{Z}_{2}\) fluxes (see fig. 4). These fall into three groups. First are the fluxes through the \((x,y)\) plaquettes. For the bottom layer we have
\[\begin{split}\Phi^{+}_{\mathbf{R}}&=u^{1}_{\mathbf{R}}\,u^{4}_{\mathbf{R}+\mathbf{a}_{2}}u^{3}_{\mathbf{R}+\mathbf{a}_{2}}u^{2}_{\mathbf{R}}=-\Gamma^{21}_{\mathbf{R}}\,\Gamma^{14}_{\mathbf{R}+\hat{\mathbf{x}}}\,\Gamma^{43}_{\mathbf{R}+\mathbf{a}_{2}}\,\Gamma^{32}_{\mathbf{R}+\hat{\mathbf{y}}}\\ \Phi^{-}_{\mathbf{R}}&=u^{4}_{\mathbf{R}}\,u^{3}_{\mathbf{R}+\mathbf{a}_{1}}u^{2}_{\mathbf{R}+\mathbf{a}_{1}}u^{1}_{\mathbf{R}}=-\Gamma^{14}_{\mathbf{R}}\,\Gamma^{43}_{\mathbf{R}-\hat{\mathbf{y}}}\,\Gamma^{32}_{\mathbf{R}+\mathbf{a}_{1}}\,\Gamma^{21}_{\mathbf{R}+\hat{\mathbf{x}}}\end{split} \tag{30}\]
with corresponding expressions involving \(\widetilde{\Phi}^{\pm}_{\mathbf{R}}\), \(\widetilde{\Gamma}_{\mathbf{r}}\), and \(\tilde{u}^{\delta}_{\mathbf{R}}\) in the top layer. Next, the \((x,z)\) plaquette fluxes \(\Psi^{\pm}_{\mathbf{R}}\),
\[\Psi^{+}_{\mathbf{R}} =u^{1}_{\mathbf{R}}\,u^{5}_{\mathbf{R}+\hat{\mathbf{x}}}\,\tilde{u}^{1}_{\mathbf{R}}\,u^{5}_{\mathbf{R}}=-\Gamma^{51}_{\mathbf{R}}\,\Gamma^{15}_{\mathbf{R}+\hat{\mathbf{x}}}\,\widetilde{\Gamma}^{51}_{\mathbf{R}+\hat{\mathbf{x}}}\,\widetilde{\Gamma}^{15}_{\mathbf{R}} \tag{31}\] \[\Psi^{-}_{\mathbf{R}} =u^{5}_{\mathbf{R}}\,\tilde{u}^{3}_{\mathbf{R}}\,u^{5}_{\mathbf{R}-\hat{\mathbf{x}}}\,u^{3}_{\mathbf{R}}=-\Gamma^{35}_{\mathbf{R}}\,\widetilde{\Gamma}^{53}_{\mathbf{R}}\,\widetilde{\Gamma}^{35}_{\mathbf{R}-\hat{\mathbf{x}}}\,\Gamma^{53}_{\mathbf{R}-\hat{\mathbf{x}}}\quad.\]
Finally, the \((y,z)\) plaquette fluxes \(\Omega^{\pm}_{\mathbf{R}}\) are given by
\[\Omega^{+}_{\mathbf{R}} =u^{2}_{\mathbf{R}}\,u^{5}_{\mathbf{R}+\hat{\mathbf{y}}}\,\tilde{u}^{2}_{\mathbf{R}}\,u^{5}_{\mathbf{R}}=-\Gamma^{52}_{\mathbf{R}}\,\Gamma^{25}_{\mathbf{R}+\hat{\mathbf{y}}}\,\widetilde{\Gamma}^{52}_{\mathbf{R}+\hat{\mathbf{y}}}\,\widetilde{\Gamma}^{25}_{\mathbf{R}} \tag{32}\] \[\Omega^{-}_{\mathbf{R}} =u^{5}_{\mathbf{R}}\,\tilde{u}^{4}_{\mathbf{R}}\,u^{5}_{\mathbf{R}-\hat{\mathbf{y}}}\,u^{4}_{\mathbf{R}}=-\Gamma^{45}_{\mathbf{R}}\,\widetilde{\Gamma}^{54}_{\mathbf{R}}\,\widetilde{\Gamma}^{45}_{\mathbf{R}-\hat{\mathbf{y}}}\,\Gamma^{54}_{\mathbf{R}-\hat{\mathbf{y}}}\quad.\]
There are also the Wilson phases,
\[\begin{split} W_{x}&=u^{1}_{1,1}\,(-u^{3}_{3,1})\,u^{1}_{3,1}\,(-u^{3}_{5,1})\cdots u^{1}_{N_{x}-1,1}\,(-u^{3}_{1,1})\\ &=-\Gamma^{13}_{1,1}\,\Gamma^{31}_{2,1}\cdots\Gamma^{31}_{N_{x},1}\\ W_{y}&=u^{2}_{1,1}\,(-u^{4}_{1,3})\,u^{2}_{1,3}\,(-u^{4}_{1,5})\cdots u^{2}_{1,N_{y}-1}\,(-u^{4}_{1,1})\\ &=\Gamma^{24}_{1,1}\,\Gamma^{42}_{1,2}\cdots\Gamma^{42}_{1,N_{y}}\quad,\end{split} \tag{33}\]
again with corresponding expressions for \(\widetilde{W}_{x}\) and \(\widetilde{W}_{y}\). At this point it appears that we have \(4N+4\) gauge-invariant \(\mathbb{Z}_{2}\) degrees of freedom. However, the total flux through each of the \(N\) cubes must be trivial, providing \(N\) constraints. There is an additional constraint \(\prod_{\mathbf{R}}\Phi^{+}_{\mathbf{R}}\Phi^{-}_{\mathbf{R}}=1\) due to periodic boundary conditions; the corresponding expression in the top layer does not yield new information given the condition on each of the cubes. Finally, there are two constraints relating the products of the Wilson phases in each of the layers to the \(\Omega\) and \(\Psi\) plaquette fluxes (see eqn. 41 below). Thus, there are \(N+3\) independent constraints, and therefore \(3N+1\) independent gauge-invariant configurations of the fluxes and Wilson phases. We must also acknowledge the constraints imposed by the projectors which enforce \(\Lambda_{\mathbf{r}}=\widetilde{\Lambda}_{\mathbf{r}}=1\), with \(\Lambda_{\mathbf{r}}=-i\,\theta^{0}_{\mathbf{r}}\,\theta^{1}_{\mathbf{r}}\,\theta^{2}_{\mathbf{r}}\,\theta^{3}_{\mathbf{r}}\,\theta^{4}_{\mathbf{r}}\,\theta^{5}_{\mathbf{r}}\). Taking the product over all sites, we obtain [7]
\[\prod_{\mathbf{r}}i\theta^{0}_{\mathbf{r}}\tilde{\theta}^{0}_{\mathbf{r}}\times\prod_{\mathbf{R},\delta}u^{\delta}_{\mathbf{R}}\tilde{u}^{\delta}_{\mathbf{R}}\times\prod_{\mathbf{r}}u^{5}_{\mathbf{r}}=1\quad. \tag{34}\]
This expression includes a product over all the itinerant fermion parities \(2c^{\dagger}_{\mathbf{r}}c_{\mathbf{r}}-1\) as well as over each of the \(5N\)\(\mathbb{Z}_{2}\) gauge fields which reside on the links of the bilayer structure. It thereby constrains the parity of the \(c\)-fermions, which are constructed from \(\theta^{0}\) and \(\tilde{\theta}^{0}\) on each of the interplane links. Thus rather than \(N\) freedoms for the dynamical fermion states, there are \(N-1\), and the total number of states in our doubled Hilbert space is \(2^{3N+1}\times 2^{N-1}=16^{N}\), which is the correct number of density matrices for an \(N\)-site system described by \(4\times 4\) gamma matrices [15].
### Choosing a gauge
Given the \(3N+1\) independent plaquette fluxes and Wilson phases, how can we pick a gauge? Let us first consider the planar fluxes \(\Phi^{\pm}_{\mathbf{R}}\) in the bottom layer and the sketch in fig. 5. The coordinates of the A sublattice site in the lower left corner are \(\mathbf{r}=(x,y)=(1,1)\). The Wilson phase fluxes are defined to be \(u^{1}_{1,1}\equiv W_{x}\) and \(u^{4}_{1,1}\equiv-W_{y}\). We then define the remaining unassigned gauge fields as follows:
\[u^{3}_{2,2} =\Phi^{+}_{1,1}\,u^{1}_{1,1} u^{1}_{2,2} =\Phi^{-}_{2,2}\] \[u^{1}_{1,3} =\Phi^{-}_{1,3}\,u^{3}_{2,2} u^{3}_{3,3} =\Phi^{+}_{2,2}\,u^{1}_{2,2}\quad\ldots\] \[u^{1}_{1,N_{y}-1} =\Phi^{-}_{1,N_{y}-1}\,u^{2}_{2,N_{y}-2} u^{3}_{3,N_{y}-1} =\Phi^{+}_{2,N_{y}-2}\,u^{1}_{2,N_{y}-2}\] \[u^{3}_{2,N_{y}} =\Phi^{+}_{1,N_{y}-1}\,u^{1}_{1,N_{y}-1} u^{1}_{2,N_{y}} =\Phi^{-}_{2,N_{y}}\,u^{3}_{3,N_{y}-1}\] \[u^{2}_{2,N_{y}} =\Phi^{-}_{1,1}\,u^{4}_{1,1}\,u^{3}_{2,N_{y}} u^{4}_{3,1} =\Phi^{-}_{2,N_{y}}\,u^{2}_{1,N_{y}}\,u^{2}_{2,N_{y}}\]
and
\[u^{3}_{4,2} =\Phi^{+}_{3,1} u^{1}_{N_{x},2} =\Phi^{-}_{N_{x},2} \tag{35}\] \[u^{3}_{3,3} =\Phi^{-}_{3,3}\,u^{3}_{4,2} u^{3}_{1,3} =\Phi^{+}_{N_{x},2}\,u^{1}_{N_{x},2}\] \[u^{3}_{4,4} =\Phi^{+}_{3,3}\,u^{1}_{3,3} u^{1}_{1,x} =\Phi^{-}_{N_{x},4}\,u^{3}_{1,3}\quad\ldots\] \[u^{1}_{3,N_{y}-1} =\Phi^{-}_{3,N_{y}-1}\,u^{3}_{4,N_{y}-2} u^{3}_{1,N_{y}-1} =\Phi^{+}_{N_{x},N_{y}-2}\,u^{1}_{N_{x},N_{y}-2}\] \[u^{3}_{4,N_{y}} =\Phi^{+}_{3,N_{y}-1}\,u^{1}_{3,N_{y}-1} u^{1}_{N_{x},N_{y}} =\Phi^{-}_{N_{x},N_{y}}\,u^{3}_{1,N_{y}-1}\] \[u^{4}_{3,1} =\Phi^{+}_{2,N_{y}}\,u^{3}_{2,N_{y}}\,u^{1}_{2,N_{y}} u^{4}_{1,1} =-W_{y}\quad.\]
Thus, we can iteratively obtain all the unassigned \(\mathbb{Z}_{2}\) gauge fields \(u^{\delta}_{\mathbf{R}}\) from the plaquette phases and the Wilson phases. Again, corresponding expressions hold in the upper layer for the quantities \(\left\{\tilde{u}^{\delta}_{\mathbf{R}},\widetilde{\Phi}^{\pm}_{\mathbf{R}},\widetilde{W}_{x},\widetilde{W}_{y}\right\}\).
Next, we consider the \(u^{5}_{\mathbf{r}}\) gauge fields and the plaquette fluxes \(\left\{\Psi^{\pm}_{\mathbf{R}},\Omega^{\pm}_{\mathbf{R}}\right\}\). From eqn. 31 we may iteratively determine the values of \(u^{5}_{m,k}\) for odd values of \(k\) given the value \(u^{5}_{1,k}\) (with \(u^{5}_{1,1}\equiv 1\)):
\[u^{5}_{2,k} =u^{1}_{1,k}\,\tilde{u}^{1}_{1,k}\,\Psi^{+}_{1,k}\cdot u^{5}_{1,k} \tag{36}\] \[u^{5}_{3,k} =u^{3}_{3,k}\,\tilde{u}^{3}_{3,k}\,\Psi^{-}_{3,k}\cdot u^{5}_{2,k}\] \[u^{5}_{4,k} =u^{3}_{3,k}\,\tilde{u}^{3}_{3,k}\,\Psi^{+}_{3,k}\cdot u^{5}_{3,k}\quad\ldots\] \[u^{5}_{N_{x},k} =u^{1}_{N_{x}-1,k}\,\tilde{u}^{1}_{N_{x}-1,k}\,\Psi^{+}_{N_{x}-1,k }\cdot u^{5}_{N_{x}-1,k}\] \[u^{5}_{1,k} =u^{1}_{1,k}\,\tilde{u}^{1}_{1,k}\,\Psi^{-}_{1,k}\cdot u^{5}_{N_{x},k}\quad.\]
Figure 5: Labels of the unit cells \(\Phi^{\pm}_{\mathbf{R}}\). The black arrows indicate \(u^{\delta}_{\mathbf{R}}=+1\) in the direction of the arrow. There are \(N+1\) colored arrows, which are determined by the \(N-1\) independent plaquette fluxes and the two Wilson loops. A corresponding assignment pertains to the upper layer with fluxes \(\tilde{\Phi}^{\pm}_{\mathbf{R}}\) and gauge fields \(\tilde{u}^{\delta}_{\mathbf{R}}\). Dotted lines indicate periodicity boundaries.
For even values of \(k\), we have
\[u_{2,k}^{5} =u_{2,k}^{3}\,\tilde{u}_{2,k}^{3}\,\Psi_{2,k}^{-}\cdot u_{1,k}^{5} \tag{37}\] \[u_{3,k}^{5} =u_{2,k}^{1}\,\tilde{u}_{2,k}^{1}\,\Psi_{2,k}^{+}\cdot u_{2,k}^{5}\] \[u_{4,k}^{5} =u_{4,k}^{3}\,\tilde{u}_{4,k}^{3}\,\Psi_{4,k}^{-}\cdot u_{3,k}^{5}\quad\ldots\] \[u_{N_{x},k}^{5} =u_{N_{x},k}^{3}\,\tilde{u}_{N_{x},k}^{3}\,\Psi_{N_{x},k}^{-}\cdot u _{N_{x}-1,k}^{5}\] \[u_{1,k}^{5} =u_{N_{x},k}^{1}\,\tilde{u}_{N_{x},k}^{1}\,\Psi_{N_{x},k}^{+}\cdot u _{N_{x},k}^{5}\quad.\]
To obtain \(u_{1,k+1}^{5}\) from \(u_{1,k}^{5}\), we use the relations
\[u_{1,2n}^{5} =u_{1,2n-1}^{2}\,\tilde{u}_{1,2n-1}^{2}\,\Omega_{1,2n-1}^{+}\cdot u _{1,2n-1}^{5} \tag{38}\] \[u_{1,2n+1}^{5} =u_{1,2n}^{4}\,\tilde{u}_{1,2n}^{4}\,\Omega_{1,2n+1}^{-}\cdot u _{1,2n}^{5}\quad.\]
Eqns. 36, 37, and 38 entail the relations
\[\prod_{{j=1\atop(k\ {\rm odd})}}^{N_{x}}\Psi_{j,k}^{-}\,\Psi_{j,k}^{+ }=\prod_{m=1}^{N_{x}/2}u_{2m-1,k}^{1}\,u_{2m-1,k}^{3}\,\tilde{u}_{2m-1,k}^{1} \,\tilde{u}_{2m-1,k}^{3}\] \[\prod_{{j=1\atop(k\ {\rm even})}}^{N_{x}}\Psi_{j,k}^{-}\,\Psi_{j,k}^{+ }=\prod_{m=1}^{N_{x}/2}u_{2m,k}^{1}\,u_{2m,k}^{3}\,\tilde{u}_{2m,k}^{1}\, \tilde{u}_{2m,k}^{3}\quad, \tag{39}\]
for \(k\) odd and even, respectively, as well as
\[\prod_{{k=1\atop(j\ {\rm odd})}}^{N_{y}}\Omega_{j,k}^{-}\,\Omega_{j,k}^{+} =\prod_{n=1}^{N_{y}/2}u_{j,2n-1}^{2}\,u_{j,2n-1}^{4}\,\tilde{u}_{j,2n-1}^{2}\,\tilde{u}_{j,2n-1}^{4}\] \[\prod_{{k=1\atop(j\ {\rm even})}}^{N_{y}}\Omega_{j,k}^{-}\,\Omega_{j,k}^{+} =\prod_{n=1}^{N_{y}/2}u_{j,2n}^{2}\,u_{j,2n}^{4}\,\tilde{u}_{j,2n}^{2}\,\tilde{u}_{j,2n}^{4}\quad, \tag{40}\]
for \(j\) odd and even, respectively. Restricting to the cases \(j=1\) and \(k=1\), we can relate these products to the Wilson phases in eqn. 33, _viz._
\[\prod_{k=1}^{N_{y}}\Omega_{1,k}^{-}\,\Omega_{1,k}^{+} =W_{y}\widetilde{W}_{y} \tag{41}\] \[\prod_{j=1}^{N_{x}}\Psi_{j,1}^{-}\,\Psi_{j,1}^{+} =W_{x}\widetilde{W}_{x}\quad.\]
We showed previously in §III.4 that, considering all the \(\mathbb{Z}_{2}\) gauge degrees of freedom, we have a total of \(3N+1\) independent plaquette fluxes and Wilson phases. In each layer, there are \(N+1\) free gauge fields \(u_{\mathbf{R}}^{\delta}\), as depicted in fig. 5. Between the layers, there are \(N-1\) free gauge fields \(u_{\mathbf{r}}^{5}\), with \(u_{1,1}^{5}\equiv 1\). Thus, our gauge assignment accounts for all the independent gauge-invariant quantities.
### Counting the NESSes
Referring to eqn. 29, in order to obtain an eigenvalue of zero, we must have each \(u_{\mathbf{r}}^{5}=+1\) and \(c_{\mathbf{r}}^{\dagger}c_{\mathbf{r}}=1\). (The case \(u_{\mathbf{r}}^{5}=-1\) for all \(\mathbf{r}\) is impossible since we have, without loss of generality (_i.e._ up to a gauge transformation), set \(u_{1,1}^{5}\equiv 1\).) We then must eliminate the BCS pairing terms, which would allow for the simultaneous annihilation of two neighboring \(c\)-fermions. This is accomplished by setting \(u_{\mathbf{R}}^{\delta}+\tilde{u}_{\mathbf{R}}^{\delta}=0\) for all \(\mathbf{R}\) and \(\mathbf{\delta}\). While this may seem inconsistent with the assignment of the fixed gauge fields (black arrows) in the two layers as depicted in fig. 5, in fact we are free to redefine \(\tilde{u}_{\mathbf{R}}^{\delta}\to-\tilde{u}_{\mathbf{R}}^{\delta}\) for the purposes of counting the NESSes. Thus, there are a total of \(N+1\) independent values of the planar (\(\delta\in\{1,2,3,4\}\)) gauge fields associated with the NESS block, and therefore \(2^{N+1}\) degenerate NESSes.
It can be seen that for these NESSes \(\Phi_{\mathbf{R}}^{+}=\widetilde{\Phi}_{\mathbf{R}}^{+}\) and \(\Phi_{\mathbf{R}}^{-}=\widetilde{\Phi}_{\mathbf{R}}^{-}\), as well as \(\Psi_{\mathbf{R}}^{+}=\Psi_{\mathbf{R}}^{-}=-1\) and \(\Omega_{\mathbf{R}}^{+}=\Omega_{\mathbf{R}}^{-}=-1\), for all \(\mathbf{R}\). Since \(\prod_{\mathbf{R}}\Phi_{\mathbf{R}}^{+}\Phi_{\mathbf{R}}^{-}=1\), this accounts for \(N-1\) freedoms associated with the plaquette fluxes. The Wilson phases \(W_{x}\) and \(W_{y}\) are also free (but \(\widetilde{W}_{x}\) and \(\widetilde{W}_{y}\) are then fixed by eqn. 41), and so again we see that there are \(2^{N+1}\) NESSes.
We numerically verified this counting for the case \(N_{x}=N_{y}=2\) (\(N=4\)) by choosing the \(J_{\delta}\) couplings to be all different. However, when \(J_{1}=J_{2}=J_{3}=J_{4}\) there is an enlarged translational symmetry, and we find a degeneracy of 90 rather than \(2^{N+1}=32\). We also find that these additional degenerate states do not satisfy the flux conditions described in the previous paragraph.
## IV Computational results
To calculate the spectrum within a given gauge sector, we use Prosen's method for complex antisymmetric matrices [4]. We note that the constraint in equation 34 is implemented as a constraint on the parity of the complex fermions that arise from Prosen's generalized Bogoliubov transformation.
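For orientation, the numerical kernel of this step can be sketched as follows. This is a minimal sketch, not the actual implementation: the construction of the structure matrix for a given flux and Wilson-phase sector follows [4] and is replaced here by a random antisymmetric placeholder. The eigenvalues of a complex antisymmetric matrix come in \(\pm\) pairs, and the half with positive real part plays the role of the rapidities from which the relaxation rates of the sector are assembled (with the normalization conventions of [4]).

```python
import numpy as np

def rapidities(A, tol=1e-9):
    """Pair the eigenvalues of a complex antisymmetric matrix A as +/-beta
    and return the half with positive real part ("rapidities")."""
    assert np.allclose(A, -A.T), "A must be antisymmetric"
    evals = np.linalg.eigvals(A)
    return np.sort_complex(evals[evals.real > tol])

# placeholder: a random complex antisymmetric matrix standing in for the
# structure matrix of one flux/Wilson-phase sector (construction as in [4])
rng = np.random.default_rng(0)
M = rng.normal(size=(8, 8)) + 1j * rng.normal(size=(8, 8))
A = M - M.T                      # antisymmetrize
beta = rapidities(A)
print("rapidities:", beta)
print("smallest rapidity real part:", beta.real.min())
```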
Implementing the field assignments mentioned in section III.5, we calculate the gap \(g\) to the smallest relaxation rate, _i.e._ the negative of the real part of the eigenvalues of \(\mathcal{L}\), by searching over all the sectors of \(\mathcal{L}\) corresponding to the different plaquette flux and Wilson phase configurations, for the case \(N_{x}=N_{y}=2\). For this system there are \(N=4\) sites and thus \(2^{16}\) (unnormalized) density matrices. We observe a transition in the first decay modes. The plot showing \(g\) as a function of \(\gamma\) for \(J_{1}=J_{2}=J_{3}=J_{4}=1\) is shown in fig. 6. The gauge-invariant quantities corresponding to the first decay modes are shown in figures 7 and 8.
(A reversed-flux plaquette is a \(\mathbb{Z}_{2}\) vortex.) There being \(N_{\rm g}=3N+1\) gauge degrees of freedom, the number of such configurations \(\binom{N_{\rm g}}{N_{\rm v}}\) rapidly becomes computationally unwieldy with growing \(N_{\rm g}\) and \(N_{\rm v}\). We searched exhaustively for the smallest nonzero relaxation rates for up to \(N_{\rm v}=4\) total \(\mathbb{Z}_{2}\) defects for \(N_{x}=N_{y}=4\) and only up to \(N_{\rm v}=2\) for \(N_{x}=N_{y}=6\) relative to a particular NESS (one with all gauge-invariant \(\mathbb{Z}_{2}\) data set to \(-1\)). We also performed Monte Carlo searches using both simulated annealing and a genetic algorithm (GA) capable of finding states with arbitrary numbers of defects. These both yielded similar results, and below we show data only for the GA computations when comparing with the \(N_{\rm v}\)-limited searches.
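The \(N_{\rm v}\)-limited search amounts to enumerating which of the \(N_{\rm g}\) gauge-invariant \(\mathbb{Z}_{2}\) data are flipped relative to the reference NESS. A schematic sketch follows; the function `gap_in_sector`, which would diagonalize the quadratic Liouvillian in the chosen sector, is assumed here and replaced by a toy placeholder.

```python
from itertools import combinations

def nv_limited_search(n_gauge, nv_max, gap_in_sector, reference):
    """Flip up to nv_max of the n_gauge Z2 data relative to a reference
    configuration and keep the smallest relaxation rate found."""
    best = float("inf")
    for nv in range(1, nv_max + 1):
        for flips in combinations(range(n_gauge), nv):
            config = list(reference)
            for i in flips:
                config[i] *= -1          # flip one Z2 datum (flux or Wilson phase)
            best = min(best, gap_in_sector(config))
    return best

# toy placeholder for the sector gap; n_gauge = 3N+1 = 13 for the 2x2 lattice
toy_gap = lambda cfg: 1.0 + 0.1 * sum(1 for s in cfg if s == +1)
print(nv_limited_search(n_gauge=13, nv_max=2, gap_in_sector=toy_gap,
                        reference=[-1] * 13))
```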
Some details regarding the GA are provided in §B below. We found the results to be satisfactory both in terms of convergence of the longest nonzero relaxation rate \(g\) as well as the computational run time for systems up to size \(6\times 6\) (\(2^{144}\) density matrices). We cannot, however, estimate the full spectrum of first decay modes through this method, _i.e._ enumerate all their degeneracies as in the \(2\times 2\) case. The results obtained by taking the minimum value from different runs (see §B) of the genetic algorithm are shown in fig. 9. The result for \(N_{x}=N_{y}=6\) is subject to more error since we used fewer runs than in the \(4\times 4\) case. This estimate can be improved by using the decay modes obtained from the genetic algorithm.
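A minimal sketch of the kind of genetic search used here is given below; population sizes, operators and termination criteria differ from the actual runs described in §B, and the fitness function standing in for the sector gap is a toy placeholder.

```python
import random

def ga_minimize(gap, n_bits, pop_size=100, generations=200, p_mut=0.02, seed=0):
    """Toy genetic algorithm over Z2 strings (+1/-1) minimizing gap(config)."""
    rng = random.Random(seed)
    pop = [[rng.choice((-1, 1)) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=gap)                      # rank by fitness, keep the best half
        parents = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n_bits)     # one-point crossover
            child = a[:cut] + b[cut:]
            child = [-s if rng.random() < p_mut else s for s in child]  # mutation
            children.append(child)
        pop = parents + children
    best = min(pop, key=gap)
    return best, gap(best)

# toy fitness standing in for the relaxation-rate gap of a gauge sector
toy_gap = lambda cfg: sum(s == +1 for s in cfg) + 0.5
print(ga_minimize(toy_gap, n_bits=13)[1])
```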
To obtain better estimates, we collect the best individuals from different GA runs (field configurations with minimum gap, _i.e._ the configurations corresponding to \(g_{\rm min}\) in fig. 22 in §B), and for different values of \(\gamma\). We then use this set of field configurations as our pool to be tried for each value of \(\gamma\) in order to obtain an estimate of the minimum gap, \(g\), by optimizing the relaxation rate gap with respect to the allowed configurations. This yields the curve labeled GA in fig. 10. For the system sizes we have examined, the \(g(\gamma)\) curves
Figure 6: Liouvillian gap in the 2d generalized SK model with periodic boundary conditions for \(N_{x}=N_{y}=2\) and all \(J_{\delta}=1\): The Liouvillian gap, \(g\), as a function of \(\gamma\). There is a transition in the first decay modes at the cusp seen at \(\gamma=\gamma_{\rm c}(2,2)\), depicted by the blue vertical line.
Figure 7: A first decay mode of the 2d generalized SK model with periodic boundary conditions for \(N_{x}=N_{y}=2\) and \(J_{1}=J_{2}=J_{3}=J_{4}=1\) corresponding to the ‘phase’ where \(\gamma\leq\gamma_{\rm c}(2,2)\). For the mode shown in this figure, we have \(W_{x}=1\), \(W_{y}=-1\), \(\widetilde{W}_{x}=-1\), \(\widetilde{W}_{y}=-1\). There are 15 other configurations of the flux plaquettes and Wilson phases corresponding to the same eigenvalue as the first decay mode shown here.
exhibit a linear behavior at small \(\gamma\), crossing over to a \(1/\gamma\) behavior at large \(\gamma\), as found by SK for their model [1]. While the gauge configurations obtained in this manner can vary from one \(\gamma\) value to the next, we found that the curves are largely unchanged by partitioning the \(\gamma\) line into three regimes, each of which is governed by a particular configuration of the \(N_{\mathrm{g}}\) gauge-invariant quantities, as reflected in the figure. Note also the relatively small difference between the \(4\times 4\) and \(6\times 6\) results.
SK found a sharp transition in the first decay modes between two regimes of dissipation strength, regardless of system size. (SK examined their model with open boundaries, but we have confirmed this result when periodic boundary conditions are applied to their model as well.) We cannot conclude whether or not this is the case for our model, but the intermediate regime we find could result from a failure of the GA to reach the block of true first decay modes. A clear intermediate regime is apparent in the \(N_{\mathrm{v}}\)-limited data of fig. 16, with all \(J_{\delta}=1\), for \(N_{\mathrm{v}}=1,2\) on size \(4\times 4\). (See also fig. 17 for the case when all the \(J_{\delta}\) are different.) For \(N_{\mathrm{v}}=3,4\) the effect is far less pronounced. Note that the \(N_{\mathrm{v}}=4\) results are in good agreement with the GA results. For the \(6\times 6\) case, the \(N_{\mathrm{v}}=1,2\) results in fig. 18 are quite far from the GA curve.
From the genetic algorithm, the optimal flux configurations for the lowest nonzero decay rate seem to contain many defects. We list some of these configurations in appendix §C below. For example, for the \(4\times 4\) system with all \(J_{\delta}=1\), the optimal excited state we obtained had 14 defects relative to the fiducial NESS with all \(\mathbb{Z}_{2}\) data set to \(-1\). However, since we start the GA from a population of random \(\mathbb{Z}_{2}\) data, it may well be that a configuration with 14 defects with respect to a particular NESS might be described by fewer defects with respect to a different state in the \(2^{3N+1}\)-
Figure 11: Behavior at small \(\gamma\) for the largest system size we used in our calculations (\(6\times 6\)), with all \(J_{\delta}=1\). We obtained this curve by using the first decay modes we used to explain the small \(\gamma\) regime in fig. 10.
Figure 12: Behavior at large \(\gamma\) for the largest system size we used in our calculations (\(6\times 6\)), with all \(J_{\delta}=1\). We obtained this curve by using the first decay modes we used to explain the large \(\gamma\) regime in fig. 10.
Figure 10: GA and the regimes are described in the text. All \(J_{\delta}=1\). The regimes for size \(4\times 4\) are given by \(\gamma<0.41\), \(\gamma=0.41\) (a single point) and \(\gamma>0.41\). The regimes for size \(6\times 6\) are separated at \(\gamma_{1}^{*}=0.36\) and \(\gamma_{2}^{*}=0.41\).
Figure 9: Minimal relaxation rate \(g\)_versus_\(\gamma\) obtained over different runs of the genetic algorithm (all \(J_{\delta}=1\)) for \(4\times 4\) and \(6\times 6\) system sizes. The population size was 100 and the number of runs was 10 for \(4\times 4\) and 5 for \(6\times 6\).
element set of gauge-invariant \(\mathbb{Z}_{2}\) configurations. We are now attempting to better clarify the minimal defect content of the degenerate excited states.
We try fitting the \(g(\gamma)\) curves to identify their behavior for small and large values of the dissipation strength \(\gamma\) (see figs. 11 and 12). The shape of the \(g(\gamma)\) curves is similar to that found by Shibata and Katsura [1], rising linearly from zero at small \(\gamma\) and decaying as \(1/\gamma\) for large \(\gamma\). In appendix §A, we provide analytical support for these behaviors.
We also investigate the behavior of the gap for the case \((J_{1},J_{2},J_{3},J_{4})=(3,4,1,2)\), which breaks certain discrete translation and rotation symmetries present in the model when all \(J_{\delta}\) are equal. The results are shown in figs. 13, 14 and 15. GA and the phases are obtained as described above. While the shape of the curves is similar, we find two significant differences. First, from fig. 15 it appears that the GA has not found the lowest decay mode in the small \(\gamma\) regime, where the \(4\times 4\) and \(6\times 6\) GA results differ substantially. Second, as shown in fig. 17, the \(N_{\rm v}\leq 4\) sector does not yield a good approximation to the GA results, as it did for the case with all \(J_{\delta}=1\).
## V Discussion

We have introduced a two-dimensional generalization of the dissipative one-dimensional Pauli matrix spin model of Shibata and Katsura [1]. It is in the 'solvable' class of models exemplified by Kitaev's celebrated honeycomb lattice model [2], equivalent to a single species of Majorana fermion hopping in a nondynamical \(\mathbb{Z}_{2}\) background gauge field. It is solvable in the sense that for any given configuration of the gauge-invariant plaquette fluxes and Wilson phases, the non-Hermitian Hamiltonian \(\mathcal{W}\) is quadratic and solvable by Prosen's method [4]. However, there are exponentially many such configurations, and when the gauge field structure is not translationally invariant, the Hamiltonian must be diagonalized numerically. Furthermore, there is no analog of Lieb's theorem [6] to assist us in identifying the longest lived decaying eigenmodes.
In the infinite time limit, the system approaches one of an exponentially large number of nonequilibrium steady states, with a spectrum \(\{-\mathsf{Im}\,E_{a}\}\) of relaxation rates. The minimum relaxation rate \(g(\gamma)\) is typically achieved for different \(\mathbb{Z}_{2}\) flux configurations in the small and large \(\gamma\) limits, a feature also observed by Shibata and Katsura.
We have not indicated in our plots the spectrum \(\{\mathsf{Re}\,E_{a}\}\) of the real parts of the eigenvalues of \(\mathcal{W}\). This is because in almost all cases studied we have found the real parts of the first decay mode eigenvalues to be zero. The only exception we observed was in the \(6\times 6\) case with all \(J\) couplings equal and for \(\gamma<\gamma_{\mathrm{c}}\), as shown in fig. 20 [16]. (When all \(J\)'s are different, we find \(\mathsf{Re}\,E_{a}=0\) for the lowest decay modes, for all \(\gamma\) and all sizes.)
Our model can further be generalized to other lattices. The Kitaev solvability of the SK model is associated with the fact that their model is equivalent to non-Hermitian Hamiltonian evolution on a two-leg ladder, where each site lies at the confluence of three distinct classes of links. For the dimension \(k\) Clifford algebra, we have \(2k+1\) gamma matrices of dimension \(2^{k}\), and a Kitaev Hamiltonian
Figure 19: The GA curve and the curves obtained by considering configurations with a given number of vortices \(N_{v}\), as described in the text. Here \((J_{1},J_{2},J_{3},J_{4})=(3,4,1,2)\) and the system size is \(6\times 6\).
Figure 17: The GA curve and the curves obtained by considering configurations with a given number of vortices \(N_{v}\), as described in the text. Here \((J_{1},J_{2},J_{3},J_{4})=(3,4,1,2)\) and the system size is \(4\times 4\).
Figure 20: Imaginary (\(g\)) and real parts of the lowest decay modes as a function of \(\gamma\) for \(4\times 4\) and \(6\times 6\) lattices, with \((J_{1},J_{2},J_{3},J_{4})=(1,1,1,1)\). \(\mathsf{Re}\,E_{a}=0\) in all cases except for \(6\times 6\) with \(\gamma<\gamma_{\mathrm{c}}\).
Figure 18: The GA curve and the curves obtained by considering configurations with a given number of vortices \(N_{v}\), as described in the text. Here all \(J_{\delta}=1\) and the system size is \(6\times 6\).
(Hermitian or not) can be constructed on any lattice where each site lies at the confluence of \((2k+1)\) distinct classes of links [8]. Thus, for \(k=2\), our square lattice bilayer is five-fold coordinated. A corresponding model could thus be constructed on the kagome lattice, leading to a non-Hermitian Dirac matrix Hamiltonian \(\mathcal{W}\) on the kagome bilayer. (Further generalizations of this construction can result in multiple species of hopping and hybridizing Majoranas in the presence of a background nondynamical gauge field, as in refs. [7; 13].) Thus, proceeding to \(k=3\) with its seven \(8\times 8\) gamma matrices, a corresponding model can be constructed on a cubic lattice (bipartite NaCl structure) with \(\Gamma_{\mathbf{R}}^{\delta}\,\Gamma_{\mathbf{R}+\mathbf{\delta}}^{\delta}\) interactions on each class \(\delta\) link with \(\delta\in\{1,\dots,6\}\) and Lindblad jump operators \(\sqrt{\gamma}\,\Gamma_{\mathbf{r}}^{7}\) at each site. Again, there will be an exponentially large block of NESS density matrices owing to the conserved plaquette fluxes.
Note: Some of this work was presented in a poster at the KITP Workshop, Topology, Symmetry and Interactions in Crystals: Emerging Concepts and Unifying Themes (KITP UC Santa Barbara, April 3-6, 2023) [17]. While this article was in the final stages of preparation, an analysis of two largely equivalent models appeared on the arXiv [9; 10].
## VI Acknowledgements
We gratefully acknowledge conversations with Tarun Grover and John McGreevy. We thank Debanjan Chowdhury for alerting us to ref. [10]. This research was funded in part by General Campus Research Award RG104654 from the UC San Diego Academic Senate.
|
2301.07159
|
DC electric field generation and distribution in magnetized plasmas
|
Very large DC and AC electric fields cannot be sustained between conducting
electrodes because of volume gas breakdown and/or surface field emission.
However, very large potential fields are now routinely generated in plasma
structures such as laser generated wake in unmagnetized plasmas. In magnetized
plasmas, large DC fields can also be sustained and controlled perpendicular to
the magnetic field, but the metallic end plates limiting the plasma,
terminating the magnetic field lines and usually providing the voltage drop
feed between the field lines, impose severe restrictions on the maximum field.
However, it is shown that very large radial DC voltage drops can be sustained
by injecting waves of predetermined frequencies and wave vectors, traveling
along the azimuthal direction of an axially magnetized plasma cylinder, or by
injecting fast neutral particles beams along this azimuthal direction. The
large conductivity along the magnetic field lines and the small conductivity
between the field lines then distribute this voltage drop. The global power
balance and control parameters of wave and beam generated large DC electric
fields in magnetized plasmas are identified, described and analyzed.
|
Jean-Marcel Rax, Renaud Gueroult, Nathaniel J. Fisch
|
2023-01-17T20:07:24Z
|
http://arxiv.org/abs/2301.07159v1
|
# DC electric field generation and distribution in magnetized plasmas
###### Abstract
Very large DC and AC electric fields cannot be sustained between conducting electrodes because of volume gas breakdown and/or surface field emission. However, very large potential fields are now routinely generated in plasma structures such as laser generated wake in unmagnetized plasmas. In magnetized plasmas, large DC fields can also be sustained and controlled perpendicular to the magnetic field, but the metallic end plates limiting the plasma, terminating the magnetic field lines and usually providing the voltage drop feed between the field lines, impose severe restrictions on the maximum field. However, it is shown that very large radial DC voltage drops can be sustained by injecting waves of predetermined frequencies and wave vectors, traveling along the azimuthal direction of an axially magnetized plasma cylinder, or by injecting fast neutral particles beams along this azimuthal direction. The large conductivity along the magnetic field lines and the small conductivity between the field lines then distribute this voltage drop. The global power balance and control parameters of wave and beam generated large DC electric fields in magnetized plasmas are identified, described and analyzed.
## I Introduction
The quest for very large electric fields is mainly driven by the need for more compact particle accelerators, but it is also important in other fields such as: (_i_) mass separation envisioned for nuclear waste cleanup [1], spent nuclear fuel reprocessing [2; 3; 4; 5; 6; 7] and rare earth elements recycling [8], (_ii_) advanced \(E\) cross \(B\) plasma configurations for the purpose of ion acceleration [9; 10; 11], and (_iii_) thermonuclear fusion with rotating tokamaks [12; 13] or rotating mirrors [14; 15; 16; 17; 18].
Two field configurations can sustain a DC electric field in a magnetized plasma : (_i_) the _Brillouin configuration_ with an axial magnetic field and a radial electric field and (_ii_) the _Hall configuration_ with a radial magnetic field and an axial electric field. The latter configuration is the one at work in stationary plasma thrusters where ions are unmagnetized; the former, where ions are magnetized, is used in mass separator devices and advanced thermonuclear traps.
This study is devoted to the Brillouin configuration. Brillouin-type rotating plasmas have been widely studied since the early proposal of Lehnert to take advantage of the isopotential character of magnetic field lines and surfaces to sustain a voltage drop through external biasing at the edge of a plasma column with concentric electrodes [19; 20; 21; 22; 23]. These rotating configurations have since then been explored both theoretically and experimentally for mass separation [24; 25; 26; 27; 28; 29; 30; 31; 32; 33; 34; 35; 36; 37], thermonuclear confinement [14; 15; 16; 17; 18] and the study of astrophysical phenomena in laboratory experiments [38; 39].
In this new study, rather than focusing specifically on separation or fusion applications, we will address the generic issues of the power balance and the field structure of unconventional radial electric field sustainment, with waves or neutral beams, in a cylindrical plasma shell confined in a magnetized column. We will present new promising results in terms of efficiency and control of these advanced wave and beam schemes.
Three main principles can be considered with respect to very high electric field generation:
1. Accelerator technologies [40] such as electrostatic, Van de Graaff type accelerators where metallic electrodes are charged up to create a voltage drop of typically a few MV. These DC-type devices are limited by electron emission at metallic surfaces under high electric fields and/or breakdown of the insulating gas. Modern RF and microwave accelerators bypass this drawback of metallic surfaces through the use of high frequency fields and can reach far higher AC electric field values, but even at high frequencies, metallic structures display an unavoidable electric field threshold above which massive electron emission takes place. To address breakdown and emission problems, the use of fully ionized plasmas has been put forward.
2. Laser-plasma accelerators bypass these problems through the use of plasma rather than metals to sustain the electric charge separation, and have reached voltage gradients in the GV per meter range. The basic principle of such schemes is the generation of a travelling electron-ion charge separation with the ponderomotive force of an ultrashort laser pulse acting on the electron population. Indeed, a short laser pulse of length \(L\), described by its vector potential \(A\), will push the electrons in the propagation direction and generate a charge separation with amplitude \(q^{2}A^{2}L/2m^{2}c^{2}\) [41; 42], where \(q\) and \(m\) are the electron charge and mass and \(c\) the velocity
of light. Such a charge separation, of the order of tens of \(\mu\)m in underdense plasmas, generates large traveling fields which then oscillate at the electron plasma frequency \(\omega_{pe}\) behind the pulse as a wake. A well-phased and well-shaped charged particle bunch, following the laser pulse, can gain energy in such laser-generated electrostatic waves.
3. Besides these mature conventional and advanced accelerator technologies, an overlooked physical principle can be put to work to generate large DC electric fields : using a magnetized plasma in which we induce a steady state charge separation perpendicular to the magnetic field through the continuous absorption of a resonant wave or the continuous ionization of a fast neutral beam.
That a magnetic field can inhibit the relaxation of the charge separation sustaining a very large voltage drop across a magnetic field is suggested by the energy associated with both electric and magnetic fields : (_i_) \(\varepsilon_{0}E^{2}V/2\) for an electric field \(E\) in a volume \(V\) and (_ii_) \(B^{2}V/2\mu_{0}\) for a magnetic field \(B\) in a volume \(V\). A large electric field of say 10 [MV/m] is associated with a density of energy (pressure) of the order of a few [kJ/m\({}^{3}\)], whereas a typical magnetic field of say 1 [T] is associated with a density of energy (pressure) of the order of a few [MJ/m\({}^{3}\)]. This very strong ordering between magnetic and electric pressure suggests why the free charges, which are attached to the magnetic field through the cyclotron motion, can resist the tendency to relaxation and (quasi-) neutralization driven by an electric field perpendicular to the magnetic field.
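As a numerical illustration of this ordering (the field values are the representative ones quoted above):

```python
import scipy.constants as c

E = 10e6          # electric field, V/m
B = 1.0           # magnetic field, T

u_E = 0.5 * c.epsilon_0 * E**2      # electric energy density, J/m^3
u_B = B**2 / (2 * c.mu_0)           # magnetic energy density, J/m^3
print(f"electric pressure ~ {u_E:.0f} J/m^3, magnetic pressure ~ {u_B:.2e} J/m^3")
print(f"ratio u_B/u_E ~ {u_B/u_E:.0f}")
```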
The wave and beam schemes considered in this study to drive an electric field in a magnetized plasma are to be compared with the more classical scheme where a voltage drop between field lines is imposed with external voltage generators connected to the field line edges, as illustrated on Fig. 1(a). As we will demonstrate, an important conceptual difference is that in the classical scheme the electric field \(E\left(z\right)\) has to penetrate the plasma column from the edge, and is decreasing along the \(z\) axis from the left and right edges toward the center. On the other hand, wave or beam power can in principle be deposited at the center of a plasma column, as shown respectively in Figure 1(b) and Figure 1(c). In these new schemes the maximum voltage drop thus occurs in the center while the minimum voltage drop is found at the endplates, in contrast with the classical scheme. By allowing the electric field to be localized more inside the plasma than at the edge, with a weaker interaction with any solid material, the risks of breakdown and emission near metallic endplates are reduced, and larger values can be envisioned.
Practically, the upper limit for the amplitude of electric field generated by a laser pulse in underdense plasmas is known to be associated with the occurrence of cavitation behind the pulse. This phenomenon has been observed numerically and experimentally. On the other hand, the upper limit for the amplitude of the DC electric field generated by wave or beam power absorption in magnetized plasmas has never been explored. Moreover, the possibility to isolate this large DC electric field from the plasma facing end plate in order to avoid breakdown or electron emission has never been considered. Both of these issues are considered here. We will identify the constraints arising from (_i_) the plasma's inherent anisotropic dissipation and (_ii_) its finite size, and then translate them into realistic conditions for large field generation, distribution and dissipation, thus identifying upper bounds on power consumption for DC high voltage generation across magnetized plasmas. We will show that upper bounds in the GV/m range can be envisioned from the proposed models of wave and beam generation under optimal conditions, but that a few MV/m already provides the necessary conditions for the very fast supersonic rotation of fully ionized hot plasma columns (required for instance in thermonuclear traps) and is accessible with wave or beam power of the order of a few tens of MW.
This paper is organized as follows. First, in section II, we present a heuristic view of the formation of a voltage drop using waves and beams, and address the issue of dissipation in a magnetized plasma. Then, in section III, we briefly review the principle of charge transport driven by resonant waves in a magnetized plasma, and identify from these results an upper bound for wave driven DC electric field generation. Then, in section IV, we describe the principle of charge separation driven by fast neutral beam injection. The expression of the sustained DC electric field is established through three different methods giving the very same result. The order of magnitude of the maximum achievable electric field through this method is also estimated. The steady state balance between wave/beam driven charge separation/generation and dissipative charge dispersion and (quasi-) neutralization is considered in section V. Specifically, a steady state model is obtained by considering the balance between (_i_) wave/beam driven charge separation/generation, (_ii_) fast distribution/spreading along the field lines and (_iii_) slow relaxation across the field lines. This model is then solved in section VI to identify both the plasma resistance \(R\) and the attenuation length \(\lambda\) which describe the steady state of a wave, or beam, driven magnetized and polarized plasma slab. The results are then used to address in section VII the issue of finite size plasmas in the case where the attenuation length is too long to ensure a good confinement of the electric field near the wave or beam active plasma zone and away from the plasma edges. We show that a decrease of the voltage drop at the edge of the plasma can be achieved at the cost of a certain loss of efficiency of the generating process. Finally, the last section, section VIII, summarizes our new findings and points towards the optimization of these DC electric field generation and confinement schemes when additional constraints are considered, either for thermonuclear control in rotating mirrors or mass separation purposes.
## II Formation of voltage drop inside a magnetized plasma
This section provides a heuristic presentation of the problem of electric field generation in a plasma.
Consider a magnetized plasma and a Cartesian set of coordinates \(\left(x,y,z\right)\) and a Cartesian basis \(\left(\mathbf{e}_{x},\mathbf{e}_{y},\mathbf{e}_{z}\right)\). A wave propagating along the \(y\) direction, perpendicular to the magnetic field \(B\mathbf{e}_{z}\), with wave vector \(k_{\perp}\mathbf{e}_{y}\) and frequency \(\omega\), generates a charge separation of the resonant population and pushes each resonant particle by an amount
\[\delta x_{G}=\frac{k_{\perp}}{q\omega B}\delta\mathcal{E} \tag{1}\]
where \(\delta\mathcal{E}\) is the amount of energy absorbed by the resonant particle and \(x_{G}\) its guiding center position. This process is illustrated on Fig. 2(a).
When the quantum of energy \(\delta\mathcal{E}=\hbar\omega\) is absorbed, the quantum of perpendicular momentum \(\hbar k_{\perp}\) along \(y\) is also absorbed, and through a continuous absorption this provides a secular force \(\hbar k_{\perp}/\delta t\) which drives a drift along \(x\) : \(\hbar k_{\perp}/(qB\,\delta t)\). During a time \(\delta t\) the shift in position is thus equal to \(\hbar k_{\perp}/qB\), which, eliminating \(\hbar=\delta\mathcal{E}/\omega\), gives Eq. (1). This relation Eq. (1) will be reviewed in the next section.
If, rather than \(\delta\mathcal{E}\left[\mathrm{J}\right]\), we consider a stationary (density of) power absorption \(P_{RF}\left[\mathrm{W/m}^{3}\right]\), then Eq. (1) shows that a continuous wave drive will generate a continuous guiding center current density \(J_{\perp}\mathbf{e}_{x}\) perpendicular to the magnetic field
\[J_{\perp}\left[\frac{\mathrm{A}}{\mathrm{m}^{2}}\right]=\frac{k_{\perp}}{ \omega B}\cdot P_{RF}\left[\frac{\mathrm{W}}{\mathrm{m}^{3}}\right] \tag{2}\]
where \(P_{RF}\) is the density of power absorbed by the resonant population. This perpendicular drift current generation has been proposed to confine toroidal plasmas [12; 13] and, for unstable waves, to provide a free energy extraction mechanism from thermonuclear plasmas through _alpha channeling_ both in tokamaks and mirrors [43; 44; 45; 16; 46].
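As a simple illustration of Eq. (2), with placeholder values for the wave and plasma parameters:

```python
# J_perp = (k_perp / (omega * B)) * P_RF, Eq. (2)
k_perp = 1.0e3      # perpendicular wave number, 1/m (assumed)
omega  = 1.0e9      # wave angular frequency, rad/s (assumed)
B      = 2.0        # magnetic field, T (assumed)
P_RF   = 1.0e6      # absorbed power density, W/m^3 (assumed)

J_perp = k_perp / (omega * B) * P_RF
print(f"J_perp ~ {J_perp:.2f} A/m^2")
```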
Rather than a wave, we consider now a fast neutral beam as a momentum source, with velocity \(v\mathbf{e}_{y}\), injected in a magnetized plasma as illustrated in Fig. 2(b). When a fast neutral particle is ionized inside the plasma, the electron and the ion rotate in opposite directions and their Larmor radii are so different that these two charges are separated on average by an amount
\[\delta x_{G}\approx\frac{Mv}{qB}=\rho_{i}\gg\rho_{e} \tag{3}\]
where \(\rho_{e/i}\) is the electron/ion Larmor radius and \(M\) and
Figure 1: (a) The classical method to sustain a perpendicular electric field in a magnetized plasma (P) column with biased edge electrodes, (b) wave driven charge separation in a magnetized plasma (P) and (c) beam driven charge separation in a magnetized plasma (P). \(E\left(z\right)\) is the radial electric field between the axis and the outer cylindrical shell.
Figure 2: (a) Neutral beam driven perpendicular electric polarization and (b) wave driven perpendicular electric current generation.
\(q\) are the ion mass and charge.
The balance between the ionization rate of the fast neutral and the slowing down of the fast ions provides a steady state density of fast ions \(N_{F}\). The associated steady state charge separation can be described by an electric polarization \(P_{\perp}\mathbf{e}_{x}\) perpendicular to the magnetic field
\[P_{\perp}\left[\frac{\mathrm{C}}{\mathrm{m}^{2}}\right]=\frac{Mv}{B}\cdot N_{F }\left[\frac{1}{\mathrm{m}^{3}}\right] \tag{4}\]
This electric polarization \(P_{\perp}\) is the source of a voltage drop between magnetic field lines, which will be analyzed in section IV.
In this study we will identify, describe and analyze schemes to use this wave driven current \(J_{\perp}\) Eq. (2) or this beam driven polarization \(P_{\perp}\) Eq. (4) to generate a large voltage drop across the magnetic field lines in the core of the plasma. Core generation provides a way to mitigate breakdown and/or emission at the edge of the plasma when both the plasma and the field lines encounter the end plates.
A picture of the build-up phase of a growing electric field in a plasma slab can be described as follows. Note that in the following model we do not consider the interplay between the adiabatic and resonant response of the particles [47; 48; 49], and consider the final global momentum balance. A wave, or a neutral beam, moves some minority charges across the magnetic field as shown by Eqs. (1, 3), and thus sets up a current \(\mathbf{J}_{0}\left(t\right)\) such that \(\mathbf{J}_{0}\left(t=-\infty\right)=\mathbf{0}\) and \(\mathbf{J}_{0}\left(t=0\right)=\mathbf{J}_{0}\) (dissipation is switched off for \(t<0\)). From an electrical point of view this phase corresponds to a capacitive electric field build-up in a non-dissipative dielectric medium : the charging of a capacitor. The plasma, which displays a low frequency permittivity \(\varepsilon=1+\omega_{pi}^{2}/\omega_{ci}^{2}\approx\omega_{pi}^{2}/\omega_{ci}^{2}\), adjusts an electric field \(\mathbf{E}\left(t\right)\) such that the electrostatic limit of the Maxwell-Ampere equation is fulfilled
\[\varepsilon_{0}\frac{\omega_{pi}^{2}}{\omega_{ci}^{2}}\frac{\partial\mathbf{E }}{\partial t}+\mathbf{J}_{0}\left(t\right)=\mathbf{0}. \tag{5}\]
From a mechanical point of view this build-up phase corresponds to a momentum input through the \(\mathbf{J}_{0}\left(t\right)\times\mathbf{B}\) force and this momentum ends up in the plasma \(E\) cross \(B\) drift, guaranteeing momentum conservation
\[\int_{-\infty}^{0}\mathbf{J}_{0}\left(t\right)\times\mathbf{B}dt+N_{p}M\frac{ \mathbf{E}_{0}\times\mathbf{B}}{B^{2}}=\mathbf{0} \tag{6}\]
where \(\mathbf{E}\left(t=0\right)=\mathbf{E}_{0}\), \(M\) is the ion mass and \(N_{p}\) the ion density.
Then, for \(t>0\), that is in the steady state dissipative regime, the charge separation associated with \(\mathbf{J}_{0}\) is short-circuited by the plasma conductivity through the conduction current \(\mathbf{J}_{\mathrm{conduction}}\) in the magnetized plasma, as well as by the boundary condition at the edge of the magnetic field lines. After this build-up phase, the steady state is reached when
\[\nabla\cdot\left(\mathbf{J}_{0}+\mathbf{J}_{\mathrm{conduction}}\right)=0 \tag{7}\]
This steady state regime will be described within a framework where the plasma is modeled as a slab of an anisotropic conductor, and the end plates at the outer edges of the magnetic field lines will be modeled by a resistive load \(R_{L}\).
Consider the magnetized plasma slab, illustrated on Fig. 3, with the following dimensions : \(a\) along \(x\), \(b\) along \(y\) and \(l\) along \(z\). This plasma slab is magnetized along \(z\), \(\mathbf{B}=B\mathbf{e}_{z}\), and we assume that a wave or beam driven steady state electric current \(I_{0}\) flows along the face \(S_{1}\) from the lower magnetic surface \(S_{2}\) up to the upper magnetic surface \(S_{3}\). The two magnetic surfaces \(S_{2}\) and \(S_{3}\) are thus charged like a capacitor, but the large electric conductivity along the magnetic field lines, \(\eta_{\parallel}\), and the small conductivity across the field lines, \(\eta_{\perp}\ll\eta_{\parallel}\), then distribute and relax this voltage drop.
In the relevant asymptotic limits, we will calculate the equivalent resistance of the slab \(R_{e}\), Eq. (59), and the power balance of the wave or beam generation process Eq. (63). These are the main new results presented in this article. The new expression for \(R_{e}\) involves both what we call the plasma resistance \(R\) and a penetration length \(\lambda\) describing the spatial decay of the voltage drop away from the source region.
## III Wave-driven resonant charge separation
In this section we derive the relations Eqs. (1) and (2) and briefly review the main relations describing the dynamics of wave driven resonant charge separation in a plasma. This phenomenon has been proposed to provide free energy extraction in thermonuclear plasmas [43; 44; 45; 46] and to help toroidal confinement in tokamaks [12; 13].
The Cartesian plasma slab considered in the following is magnetized along \(z\), \(\mathbf{B}=B\mathbf{e}_{z}\) and polarized along \(x\), \(\mathbf{E}=-E\mathbf{e}_{x}\). A wave with wave vector \(\mathbf{k}=k_{\perp}\mathbf{e}_{y}+k_{\parallel}\mathbf{e}_{z}\) and frequency \(\omega\) propagates in this plasma along \((z)\) and across \((y)\) the magnetic field. We restrict the following argument to an unspecified component of this wave oscillating with the phase \((\omega t-k_{\perp}y-k_{\parallel}z)\). In order to identify the wave-particle resonances, we plug into the phase of this wave the unperturbed motion of a charged particle characterized by the invariants (\(x_{G}\), \(v_{\parallel}\), \(v_{c}\))
\[x =x_{G}+\frac{v_{c}}{\omega_{c}}\cos\left(\omega_{c}t\right), \tag{10}\] \[y =\frac{E}{B}t+\frac{v_{c}}{\omega_{c}}\sin\left(\omega_{c}t\right),\] (11) \[z =v_{\parallel}t. \tag{12}\]
Here \(\omega_{c}\) is the cyclotron frequency, \(v_{c}\) the cyclotron velocity, \(v_{\parallel}\) the velocity along the field lines and \(x_{G}\) the guiding center position along \(x\). The phase seen by a particle is thus
\[\cos\left(\omega t-k_{\perp}y-k_{\parallel}z\right) \sim\cos\left(\omega t-k_{\perp}\frac{E}{B}t\right.\] \[\left.-k_{\perp}\frac{v_{c}}{\omega_{c}}\sin\omega_{c}t-k_{ \parallel}v_{\parallel}t\right). \tag{13}\]
This result can be rearranged with the classical Euler Bessel expansion
\[\cos(a+b\sin\phi)=\sum_{N=-\infty}^{N=+\infty}\mathrm{J}_{N}(b)\cos(a+N\phi) \tag{14}\]
so that the field seen by the particle becomes a series of harmonics with Bessel function amplitudes
\[\cos\left(\omega t-k_{\perp}y-k_{\parallel}z\right)\sim\sum_{N=-\infty}^{N=+\infty}\mathrm{J}_{N}\left(k_{\perp}\frac{v_{c}}{\omega_{c}}\right)\\ \times\cos\left(\omega t-k_{\perp}\frac{E}{B}t-N\omega_{c}t-k_{\parallel}v_{\parallel}t\right). \tag{15}\]
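The expansion Eq. (14) is easily checked numerically, truncating the sum (arbitrary test values of \(a\), \(b\) and \(\phi\)):

```python
import numpy as np
from scipy.special import jv

a, b, phi = 0.7, 1.3, 0.4          # arbitrary test values
lhs = np.cos(a + b * np.sin(phi))
rhs = sum(jv(N, b) * np.cos(a + N * phi) for N in range(-40, 41))
print(lhs, rhs)                     # the two agree to machine precision
```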
Thus a resonance might occur with the \(N\)-th component of this spectral expansion if the phase of this oscillating term becomes stationary :
\[\omega-k_{\perp}E/B-N\omega_{c}-k_{\parallel}v_{\parallel}=0. \tag{16}\]
When this condition is fulfilled the topology of the particles motion phase portrait changes and particles trapped in the wave experience a large variation of the invariants of the free motion \(\left(x_{G},v_{\parallel},v_{c}\right)\). When this condition is not fulfilled the particles oscillate and this oscillation is associated with a reactive power so that no active power is exchanged with non resonant (adiabatic) particles.
For such resonances, if an amount \(\delta\mathcal{E}\) of RF energy is absorbed by a resonant particle, then the unperturbed motion invariants \(\left(x_{G},v_{\parallel},v_{c}\right)\) are no longer invariant. Because of the resonant interaction with the wave they become \(\left(x_{G}+\delta x_{G},v_{\parallel}+\delta v_{\parallel},v_{c}+\delta v_{c}\right)\) where \(\left(\delta x_{G},\delta v_{\parallel},\delta v_{c}\right)\) are proportional to \(\delta\mathcal{E}\). A simple dynamical analysis allows one to write the set of relations :
\[\delta x_{G} =\frac{k_{\perp}}{q\omega B}\delta\mathcal{E}, \tag{17}\] \[m\delta v_{\parallel} =\frac{k_{\parallel}}{\omega}\delta\mathcal{E},\] (18) \[mv_{c}\delta v_{c} =N\frac{\omega_{c}}{\omega}\delta\mathcal{E}. \tag{19}\]
Equation (17) is associated with the conservation of the canonical momentum along \(y\). Eq. (18) is associated with the conservation of classical momentum along \(z\). Finally, Eq. (19) describes harmonic cyclotron heating. These relations can be rederived from a Hamiltonian analysis [50], or simply from the quantum photon picture described in the previous section.
Global (wave + particle) energy conservation can be simply checked as follows. The complete variation of a resonant particle's kinetic \(mv_{\parallel}\delta v_{\parallel}+mv_{c}\delta v_{c}\) and potential \(qE\delta x_{G}\) energy is
\[qE\delta x_{G}+mv_{\parallel}\delta v_{\parallel}+mv_{c}\delta v _{c} =\frac{\delta\mathcal{E}}{\omega}\left(\frac{k_{\perp}E}{B}+k_{ \parallel}v_{\parallel}+N\omega_{c}\right)\] \[=\delta\mathcal{E} \tag{20}\]
where we have used the resonance condition Eq. (16) to obtain the final identity.
From these results we can identify a theoretical maximum electric field \(E^{*}\) that can be sustained _in situ_ in a plasma with this type of resonant charge separation process. The optimal wave, such that all the energy \(\delta\mathcal{E}\)
goes to the charge separation and ends up in the form of potential, \(qE\delta x_{G}\), rather than kinetic, \(mv_{\parallel}\delta v_{\parallel}+mv_{c}\delta v_{c}\), energy, is a wave displaying no Landau and cyclotron absorptions such that \(k_{\parallel}=N=0\) (we do not consider here anomalous Doppler resonances where the wave transfers energy between degrees of freedom). Equation (16) thus becomes a simple drift resonance : \(\omega=k_{\perp}E^{*}/B\). This last relation is confirmed by the energy balance restricted to potential energy \(\delta\mathcal{E}=qE^{*}\delta x_{G}\). Then, with the help of Eq. (17) we eliminate \(\delta\mathcal{E}\) to find the constraint on the DC electric field \(E_{RF}^{*}\) :
\[\frac{E_{RF}^{*}}{B}=\frac{\omega}{k_{\perp}}. \tag{21}\]
Very large \(E_{RF}^{*}\) can thus in principle be reached for very large \(B\) field values, though it is to be noted that the wave dispersion \(\omega\left(k_{\perp}\right)\) is also a function of \(B\). Taking a moderate value of \(B\) of the order of a few tesla and a high frequency wave with a velocity of the order of the velocity of light, which is the case in tenuous plasmas, we end up with electric field values of the order of 1 GV/m. The relation Eq. (21) however only offers a partial view of the problem because if we want to drive the plasma drift motion we need waves with a large momentum \(k_{\perp}\), whereas Eq. (21) suggests that small \(k_{\perp}\) are preferable for a large electric field. Equation (21) is an upper bound associated with an optimal use of the wave power in terms of efficiency. It is a kinematical constraint associated with optimal resonance. This large value is only achieved if dissipation (charge relaxation) is neglected. In the following we will assume that the wave driven charge separation takes place in a narrow region around \(z=0\) and that this RF region is hot and collisionless but the neighboring regions are assumed collisional, and we will analyze the impact of dissipative charge relaxation in a plasma slab.
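As an order-of-magnitude illustration of Eq. (21), assuming a fast wave with phase velocity close to \(c\):

```python
B = 3.0                     # magnetic field, T (assumed)
v_phase = 2.5e8             # omega/k_perp, m/s (near c, tenuous plasma)
E_star = B * v_phase        # Eq. (21): E*_RF = B * omega / k_perp
print(f"E*_RF ~ {E_star:.1e} V/m")   # of the order of 1 GV/m
```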
## IV Neutral-beam-driven charge separation
In this section we derive the relations Eqs. (3) and (4) and set up and solve a simple model describing beam driven charge separation and electric field generation in a magnetized plasma. This phenomenon is illustrated on Fig. 2(a) : a beam of fast neutral atoms with velocity \(v\mathbf{e}_{y}\) and density \(N_{B}\) is directed toward a plasma magnetized with \(\mathbf{B}=B\mathbf{e}_{z}\). These fast atoms are ionized through collisions with the plasma electrons and ions and also through charge exchange with slow ions. Both processes provide fast ion generation from these fast neutrals.
The rate of fast ion generation from fast neutrals is \(\nu\) and it takes into account both ionization and charge exchange. As soon as a fast ion is generated in the plasma, it starts to slow down with a typical slowing down time \(\tau\). If we consider fast hydrogen atoms in a thermonuclear \(pB11\) plasma, \(\tau\) also accounts for fast proton pitch angle scattering on boron ions. The density of fast ions in the plasma, \(N_{F}\), is thus given by the solution of the particle balance
\[\frac{dN_{F}}{dt}=\nu N_{B}-\frac{N_{F}}{\tau} \tag{22}\]
Considering a steady state injection, the relation between the density of fast ions, i.e. ions with a large Larmor radius, and the density of injected neutrals is
\[N_{F}=N_{B}\nu\tau \tag{23}\]
Three methods are considered below to calculate the DC electric field sustained by steady state neutral beam injection.
First, the conservation of linear momentum in the \(y\) direction can be used to calculate the electric field \(E\mathbf{e}_{x}\) generated by the beam. If we neglect the electron mass \(m\) in front of the ion mass \(M\), the beam density of momentum \(N_{B}Mv\), which is coupled to the plasma at a rate \(\nu\), provides a density of force \(N_{B}Mv\nu\). This density of force acts during a time \(\tau\) on the plasma. The corresponding density of momentum \(N_{B}Mv\nu\tau\mathbf{e}_{y}\) is absorbed in the form of plasma linear momentum along \(y\). If we write \(N_{p}\) for the plasma density, the linear momentum balance can be written :
\[N_{B}Mv\nu\tau\mathbf{e}_{y}=N_{p}M\frac{E\mathbf{e}_{x}\times B\mathbf{e}_{z} }{B^{2}} \tag{24}\]
The very same relation can be obtained from an electrical analysis rather than from a mechanical point of view. If we neglect the electron Larmor radius in front of the ion Larmor radius, the steady state density of fast ions \(N_{F}\) is associated with an electric polarization Eq. (4) \(N_{F}q\rho_{i}\mathbf{e}_{x}=N_{F}\left(Mv/B\right)\mathbf{e}_{x}\). In response to this electric polarization, the plasma, which displays a low frequency permittivity \(\varepsilon=1+\omega_{pi}^{2}/\omega_{ci}^{2}\approx\omega_{pi}^{2}/\omega_{ci}^ {2}\), sets up a reverse polarization through an electric field generation \(E\mathbf{e}_{x}\). The condition for this dielectric dipole screening is
\[N_{F}\frac{Mv}{B}\mathbf{e}_{x}+\varepsilon_{0}\frac{\omega_{pi}^{2}}{\omega_ {ci}^{2}}E\mathbf{e}_{x}=\mathbf{0}. \tag{25}\]
Here \(\omega_{pi}\) is the ion plasma frequency and \(\omega_{ci}\) the ion cyclotron frequency. Taking the cross product of this last relation with \(\mathbf{B}\) we find the condition
\[-N_{B}\nu\tau Mv\mathbf{e}_{y}+MN_{p}\frac{E\mathbf{e}_{x}\times B\mathbf{e}_{ z}}{B^{2}}=\mathbf{0}, \tag{26}\]
which is Eq. (24).
Finally, as a third demonstration of this result, we can consider Maxwell-Ampere equation with \(\left(i\right)\) the polarization current \(d\mathbf{P}_{\perp}/dt=\left(N_{B}Mv/B\right)\nu\mathbf{e}_{x}\), describing the generation of fast ions and \(\left(ii\right)\) the displacement current \(\varepsilon_{0}\varepsilon\partial\mathbf{E}/\partial t=\varepsilon_{0} \varepsilon\mathbf{E}/\tau\) associated with the decay of the electric field due to these fast ions slowing down. In writing Maxwell-Ampere equation we neglect the diamagnetic effect of the fast ions and consider \(\mathbf{B}_{fast\ ions}=\mathbf{0}\) such that
\(\nabla\times\mathbf{B}_{fast\ ions}=\mathbf{0}\) which implies \(\partial\mathbf{P}_{\perp}/\partial t\) + \(\varepsilon_{0}\varepsilon\partial\mathbf{E}/\partial t=\mathbf{0}\). In this case
\[N_{B}\frac{Mv}{B}\nu\mathbf{e}_{x}+\varepsilon_{0}\frac{\omega_{pi}^{2}}{ \omega_{ci}^{2}}\frac{E}{\tau}\mathbf{e}_{x}=\mathbf{0}, \tag{27}\]
which is again identical to Eqs. (24) and (26).
Thus, no matter the point of view, (_i_) mechanical with the momentum balance Eq. (24), (_ii_) electrostatic with the dielectric dipole screening Eq. (26), and (_iii_) electrodynamic with Maxwell-Ampere Eq. (27), we find that the continuous injection of a neutral beam along \(y\) will sustain a DC electric field along \(x\) :
\[\frac{E_{NB}}{B}=v\frac{N_{B}}{N_{p}}\nu\tau. \tag{28}\]
To obtain an order of magnitude estimate we can take values typical of large tokamak plasma experiments : \(N_{B}/N_{p}\sim 10^{-4}-10^{-5}\), \(\nu\tau\sim 10^{5}-10^{6}\) and \(v\sim 10^{6}-10^{7}\) [m/s]. In all these relations both \(\nu\) and \(\tau\) are averages, as they are functions of the neutral and fast ion velocities. With these values, an upper bound of tens or up to a few hundreds of MV/m is found for the DC electric field generation in magnetized plasma with neutral beams. The power flux in the plasma from the neutral beam is given by : \(P_{NB}\left[\mathrm{W/m}^{2}\right]=N_{B}Mv^{3}/2\) so that the electric field Eq. (28) can be rewritten as
\[E_{NB}\left[\frac{\mathrm{V}}{\mathrm{m}}\right]=\frac{2B\nu\tau}{Mv^{2}N_{p} }\cdot P_{NB}\left[\frac{\mathrm{W}}{\mathrm{m}^{2}}\right]. \tag{29}\]
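As an illustration of Eq. (28) with representative mid-range values of the parameters quoted above:

```python
B       = 2.0       # magnetic field, T (assumed)
v       = 3.0e6     # fast neutral velocity, m/s
ratio   = 1.0e-4    # N_B / N_p
nu_tau  = 1.0e5     # ionization rate times slowing-down time

E_NB = B * v * ratio * nu_tau       # Eq. (28)
print(f"E_NB ~ {E_NB/1e6:.0f} MV/m")
```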
To identify the limit of this generation process we can consider the simple density requirements for the previous ionization/slowing down model, that is \(N_{p}\geq 10\times N_{B}\). For this density ratio the maximum electric field achievable with this scheme is
\[\frac{E_{NB}^{*}}{B}=v\frac{\nu\tau}{10}. \tag{30}\]
Both relations Eq. (21) and Eq. (30) are ultimate upper bounds when the longitudinal and transverse conductivities of the finite size plasma slab can be ignored and the power deposition is optimized. The relations Eq. (21) and Eq. (30) provide rough estimates of the theoretical maximum values achievable with waves and beams, and are not associated with a breakdown threshold but with optimal power deposition processes. Importantly, these relations predict very large upper bounds for the electric field both for wave and beam driven schemes, typically larger than tens of MV/m.
Because the typical values we have in mind for advanced high energy supersonic rotating plasma applications are in the range of a few tens of MV/m, we can consider the full picture for such configurations and address the issue of voltage distribution in the next section. The issue of dissipation in the bulk of a finite size plasma slab, far from the wave or beam active regions, is also addressed in this coming section. Note finally that Eq. (21) does not involve dissipative time scales, whereas Eq. (30) involves the dissipative time scales \(\nu\) and \(\tau\). This difference is due to the fact that a wave can kick thermal particles so that, if we ignore temperature gradients, this does not perturb the thermal equilibrium. On the other hand, the fast ions must ultimately thermalize and isotropize in the neutral beam case.
## V Voltage Drop Distribution in a Plasma
Consider a cylindrical plasma shell uniformly magnetized along the \(z\) axis. In addition to the axial magnetic field \(\mathbf{B}=B\mathbf{e}_{z}\) we consider a radial electric field generated in a cylindrical shell of magnetic field lines, with width \(a\) and radius \(b/2\pi\), depicted in grey on Fig. 4(a). The radial electric field is generated in this cylindrical shell to sustain a rotation around the \(z\) axis for the purpose of thermonuclear confinement or mass separation.
In order to simplify the analysis, which could also be carried out in cylindrical coordinates, we will neglect curvature effects (\(b>a\)) and describe the grey plasma zone of Fig. 4(a) as the slab plasma depicted in Fig. 4(b). This transformation is just an unfolding of the cylindrical shell and has the advantage of simplifying the physical picture and results. Following this unfolding,
Figure 4: Geometrical characteristics of the Cartesian plasma slab (b) modeling the cylindrical plasma shell (a).
the Cartesian plasma slab considered in the following is both magnetized along \(z\), \(\mathbf{B}=B\mathbf{e}_{z}\), and polarized along \(x\), \(\mathbf{E}=-E\mathbf{e}_{x}\). The magnetized plasma slab is of finite size : (_i_) \(a\) along \(x\), (_ii_) \(b\) along \(y\) and (_iii_) \(l\) along \(z\), as illustrated in Fig. 5(b).
The electric field is described by an electrostatic potential \(V\) such that \(\mathbf{E}=-\left(\partial V/\partial x\right)\mathbf{e}_{x}-\left(\partial V/\partial z\right)\mathbf{e}_{z}\) where \(\partial V/\partial z<\partial V/\partial x=V/a\). The equivalent DC current generator (wave or beam), located at \(z=0\), sustains a current between \(x=0\) and \(x=a\). As a result of charge depletion at \(x=0\) and charge accumulation at \(x=a\), a voltage drop \(V_{0}=V\left(z=0\right)\) is sustained between the magnetic surfaces \(x=0\) and \(x=a\). This voltage drop will decay away for \(z>0\) because of the finite conductivities along \(z\) and across \(x\). These finite conductivities will provide a fast dispersion of the charges along \(z\) and a slow relaxation across \(\mathbf{B}\) along \(x\).
We assume (_i_) that the amplitude of the wave is shaped such that the wave equivalent current generator is driven from \(x=0\) up to \(x=a\) near \(z=0\) and (_ii_) that the density of the neutral beam is shaped such that the beam equivalent voltage generator sets up a voltage drop between \(x=0\) and \(x=a\) near \(z=0\). In order to describe dissipative processes in the slab \(z>0\), we consider an infinitesimal slice of magnetized plasma : \(dz\) along \(z\), \(a\) along \(x\) and \(b\) along \(y\). This elementary slab, depicted on Fig. 5(b), displays two properties: (_i_) a large conductivity along \(dz\) and (_ii_) a large resistivity along \(x\). We assumed cylindrical symmetry of the original problem which translates into homogeneity along \(y\) of the unfolded slab. In particular, as the wave and beam travel in the \(y\) direction, we assume homogeneous wave or beam power deposition along \(y\) near \(z=0\), which means homogeneous current generation and electric field generation along \(y\).
We describe the dissipative dynamics of the charges by the current \(I\left(z\right)\), which flows easily along \(z\), and the small short-circuit current resulting from the small conductivity along \(x\). In a slice \(dz\) this short-circuiting of the initial charge separation is described by \(dI/dz\). This model allows us to describe the volume charge relaxation and the steady state large voltage drop generation across the magnetic field. To calculate the small conductance \(Gdz\) along \(x\) (across \(B\)) and the small resistance \(Sdz\) along \(z\) (along \(B\)) we apply the classical formula describing the resistance/conductance of the elementary parallelepiped depicted in Fig. 5(b),
\[Sdz =\frac{dz}{\eta_{\parallel}ba}, \tag{31}\] \[Gdz =\frac{\eta_{\perp}bdz}{a}, \tag{32}\]
where we have introduced the classical conductivities \(\eta_{\parallel}\) and \(\eta_{\perp}\) along and across the field lines in a magnetized plasma [51; 52; 53; 54; 55; 56]. Note that taking into account curvature effects would change the expression of \(G\) but not \(S\), with for the cylindrical shell illustrated on Fig. 4(a)
\[G=2\pi\eta_{\perp}/\ln\frac{1+(\pi a/b)}{1-(\pi a/b)}, \tag{33}\]
and we recover the previous expression if \(a\ll b\). Then we apply Ohm's law to the transmission line like model illustrated in Fig. 6(a) to write the equations fulfilled by the voltage \(V\) across \(x\) and the current \(I\) along \(z\) :
\[dV =-SIdz, \tag{34}\] \[dI =-GVdz. \tag{35}\]
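These ladder equations can be illustrated by a direct discretization of the slab into thin slices (a minimal sketch with assumed values of \(S\), \(G\) and of the drive \(V_{0}\); the exponential attenuation derived analytically in section VI is recovered):

```python
import numpy as np

S  = 2.0e-2        # series resistance per unit length along z, Ohm/m (assumed)
G  = 5.0e-3        # shunt conductance per unit length across B, S/m (assumed)
V0 = 1.0e6         # voltage drop imposed at z = 0, V (assumed)
L, nz = 200.0, 20000
dz = L / nz

V = np.empty(nz); I = np.empty(nz)
V[0] = V0
I[0] = V0 * np.sqrt(G / S)          # launch the purely forward-decaying solution
for k in range(nz - 1):             # Eqs. (34)-(35): dV = -S I dz, dI = -G V dz
    V[k + 1] = V[k] - S * I[k] * dz
    I[k + 1] = I[k] - G * V[k] * dz

lam = 1.0 / np.sqrt(S * G)
print(f"attenuation length 1/sqrt(SG) = {lam:.1f} m")
print(f"V at z = lambda: {V[int(lam/dz)]:.3e} (expected ~ {V0*np.exp(-1):.3e})")
```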
In order to obtain the various scalings and order of magnitude estimates of the final results we use the classical formula for the longitudinal and transverse conductivities used in Eqs. (31, 32).
Assuming first that the plasma is not fully ionized and that collisions with neutrals at rest are the dominant dissipative process :
\[\eta_{\parallel}=\varepsilon_{0}\frac{\omega_{pe}^{2}}{\nu},\qquad\eta_{\perp}=\varepsilon_{0}\frac{\omega_{pe}^{2}\,\nu}{\nu^{2}+\omega_{ce}^{2}}, \tag{36}\]

where \(\nu\) is the
collision frequency with neutrals in a cold plasma or the turbulent decorrelation frequency in a turbulent plasma.
On the other hand, if the plasma is fully ionized, the conductivity along the field lines is given by the Spitzer conductivity. It is independent of the density but scales as \(T^{3/2}\) with the temperature,
\[\eta_{\parallel}=\varepsilon_{0}\frac{\omega_{pe}^{2}}{\nu_{ei}}. \tag{37}\]
Across the field lines no relative velocity between electrons and ions is observed in the \(\mathbf{E}\times\mathbf{B}\) rest frame. This means that we have to consider additional effects to find a dissipative channel. Among these processes (_i_) inertia, (_ii_) viscosity and (_iii_) inhomogeneity are usually put forward [24; 25; 57]. We will consider here the effect of inhomogeneity which displays the same scaling as viscosity [57]. In an inhomogeneous electric field, the expression of the electric drift velocity \(\mathbf{v}_{E\times B}\) is given by :
\[\mathbf{v}_{E\times B}=\left(1+\frac{\rho^{2}}{4}\frac{d^{2}}{dx^{2}}\right) \frac{\mathbf{E}\times\mathbf{B}}{B^{2}} \tag{38}\]
where \(\rho\) is the Larmor radius. We will assume \(d^{2}E/dx^{2}\sim E/a^{2}\). This velocity is along \(y\) and, because of the difference in Larmor radius \(\rho_{e}\ll\rho_{i}\), Coulomb collisions, at a rate \(\nu_{ie}\), provide a friction force \(F\) between the electron and ion populations. As a result the ion population experiences a \(y\)-directed force \(F\)
\[F=\nu_{ie}\frac{k_{B}T_{i}}{4\omega_{ci}^{2}}\frac{E}{a^{2}B} \tag{39}\]
where \(\omega_{ci}\) is the ion cyclotron frequency. This force \(F\) along \(y\) is the source of a \(\mathbf{F}\times\mathbf{B}/QB^{2}\) drift along \(x\) and this drift gives the equivalent conductivity \(\eta_{\perp}\) associated with inhomogeneity :
\[\eta_{\perp}=n_{i}\frac{\nu_{ie}}{\omega_{ci}}\frac{\rho_{i}^{2}}{a^{2}}\frac{Q}{4B}=\frac{\varepsilon_{0}}{4}\nu_{ie}\frac{\omega_{pi}^{2}}{\omega_{ci}^{2}}\frac{\rho_{i}^{2}}{a^{2}}. \tag{40}\]
The strong scaling with respect to the magnetic field \(\rho_{i}^{2}/\omega_{ci}^{2}\sim B^{-4}\) is to be noted. The effect of viscosity displays the same scaling and we will consider Eq. (40) as the approximate perpendicular conductivity of a fully ionized plasma [57]. In the following, to evaluate the power dissipation with Eq. (37, 40), we will use the following estimate for a fully ionized hydrogen plasma
\[\nu_{ei}=\ln\Lambda\left[\frac{mc^{2}}{3k_{B}T}\right]^{\frac{3}{2}}\frac{r_{ e}}{c}\omega_{pe}^{2}\sim\left[\frac{mc^{2}}{3k_{B}T}\right]^{\frac{3}{2}} \left[\frac{\omega_{pe}}{10^{11}\mathrm{Rd/s}}\right]^{2} \tag{41}\]
where \(r_{e}=2.8\times 10^{-15}\) m is the classical electron radius, \(mc^{2}=511\) keV the electron rest energy and \(c=3.0\times 10^{8}\) m/s the velocity of light. The ion-electron collision frequency is given by \(\nu_{ie}\) = \(m\nu_{ei}/M\).
## VI Attenuation length and plasma resistance
In order to analyze Eqs. (34, 35), it turns out to be more convenient to introduce what we will call the _plasma slab resistance_\(R\) defined as
\[Rb=\frac{1}{\sqrt{\eta_{\perp}\eta_{i}}}, \tag{42}\]
and the _attenuation length_\(\lambda\) defined as
\[\frac{\lambda}{a}=\sqrt{\frac{\eta_{i}}{\eta_{\perp}}}. \tag{43}\]
These two global characteristics, \(R\) and \(\lambda\), capture all the electrical properties of the plasma slab needed to describe the charge relaxation for \(z>0\) of the \(z=0\) wave or beam driven perpendicular current.
For a fully ionized plasma the transverse conductivity is a second order effect described by Eq. (40) and the plasma resistance and attenuation length are given by
\[\frac{\lambda}{a}=\frac{2}{\sqrt{\nu_{ei}\nu_{ie}}}\frac{\omega _{pe}\omega_{ci}}{\omega_{pi}}\frac{a}{\rho_{i}}\sim\frac{\omega_{ci}}{\nu_{ ie}}\frac{a}{\rho_{i}} \tag{44}\] \[\frac{1}{Rb}=\frac{\varepsilon_{0}}{2}\frac{\omega_{pe}\omega_{pi} }{\omega_{ci}}\sqrt{\frac{\nu_{ie}}{\nu_{ei}}}\frac{\rho_{i}}{a}\sim \varepsilon_{0}\frac{\omega_{pe}^{2}}{\omega_{ce}}\frac{\rho_{i}}{a} \tag{45}\]
The attenuation length \(\lambda\) is thus far larger than the size of the device for a fully ionized plasma of the thermonuclear type. Note also that while the definition of the attenuation length \(\lambda\), Eq. (43), already appears in the literature in the few studies addressing the issue of field penetration from the edge [53; 54; 56], the definition of
\[R=\frac{\omega_{ce}a}{be\varepsilon_{0}\rho_{i}\omega_{pe}^{2}}\]
for a fully ionized plasma, Eq. (45), does not seem to have attracted specific attention previously, despite its importance for understanding the DC voltage distribution in a fully ionized magnetized plasma.
With these definitions Eqs. (34, 35) become simply
\[\lambda\frac{dV}{dz}=-RI, \tag{46}\] \[\lambda\frac{dI}{dz}=-\frac{V}{R}. \tag{47}\]
We further define the new variables \(s=z/\lambda\) and \((u,v)\) such that
\[\left(\begin{array}{c}u\\ v\end{array}\right)=\left(\begin{array}{c}\frac{V}{\sqrt{R}}+\sqrt{R}I\\ \frac{V}{\sqrt{R}}-\sqrt{R}I\end{array}\right), \tag{48}\]
so that
\[\frac{d}{ds}\left(\begin{array}{c}u\\ v\end{array}\right)=\left(\begin{array}{c}-u\\ +v\end{array}\right). \tag{49}\]
The solutions of Eq. (49) are simply a _forward decay_ \(u=u_{0}\exp(-s)\) and a _backward decay_ \(v=v_{0}\exp(+s)\).
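As a consistency check, one can verify symbolically that the change of variables Eq. (48) decouples Eqs. (46, 47) into the diagonal system Eq. (49). The following sketch uses Python with sympy; the variable names are ours, and \(R\) and \(\lambda\) are treated as constants, as assumed in this homogeneous model.

```python
import sympy as sp

z, R, lam = sp.symbols("z R lambda", positive=True)
V = sp.Function("V")(z)
I = sp.Function("I")(z)

# Transmission-line equations (46), (47): lambda V' = -R I and lambda I' = -V/R
derivs = {V.diff(z): -R * I / lam, I.diff(z): -V / (R * lam)}

# Change of variables (48)
u = V / sp.sqrt(R) + sp.sqrt(R) * I
v = V / sp.sqrt(R) - sp.sqrt(R) * I

# With s = z/lambda, d/ds = lambda d/dz; check du/ds = -u and dv/ds = +v, i.e. Eq. (49)
du_ds = (lam * u.diff(z)).subs(derivs)
dv_ds = (lam * v.diff(z)).subs(derivs)
assert sp.simplify(du_ds + u) == 0
assert sp.simplify(dv_ds - v) == 0
```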
Note for completeness that Eq. (49) was derived by assuming that the plasma is homogeneous. A simple model taking into account the \(z\) variation of \(\lambda\left(z\right)\) and \(R\left(z\right)\) can be studied in a way similar to the analysis of the previous homogeneous model but by considering this time the change of variable
\[s\left(z\right)=\int_{0}^{z}du/\lambda\left(u\right). \tag{50}\]
With this change of variables Eq. (49) becomes
\[\frac{d}{ds}\left(\begin{array}{c}u\\ v\end{array}\right)=\left(\begin{array}{c}-u\\ +v\end{array}\right)-\left(d\ln\sqrt{R}/ds\right)\left(\begin{array}{c}v\\ u\end{array}\right). \tag{51}\]
and the forward and backward solutions are coupled by the inhomogeneities. These inhomogeneities \(\lambda\left(z\right)\) and \(R\left(z\right)\) play the role of an additional dissipative term, for example, when the magnetic field lines are diverging. Although interesting generalizations, the tapering effects of inhomogeneous plasma and magnetic field properties will not be considered here, and we will restrict the analysis to the solutions of Eq. (49).
The general solution of Eqs. (46, 47) is a linear combination of the forward and backward solutions \(\exp+z/\lambda\) and \(\exp-z/\lambda\). In the following we consider the general solution
\[I\left(z\right) =I_{-}\exp\left(-\frac{z}{\lambda}\right)+I_{+}\exp\left(+\frac{z }{\lambda}\right) \tag{52}\] \[V\left(z\right) =RI_{-}\exp\left(-\frac{z}{\lambda}\right)-RI_{+}\exp\left(+ \frac{z}{\lambda}\right) \tag{53}\]
where the amplitudes \(I_{\pm}\) are given by the two boundary conditions (_i_) at \(z=0\) with the wave or beam driven generators, and (_ii_) at \(z=l\) with a load \(R_{L}\) describing how we choose to terminate the field lines and the plasma. This is illustrated in Fig. 6(b). The \(\exp+z/\lambda\) solution is associated with the reflection on the load at \(z=l\) when there is an impedance mismatch of this load \(R_{L}\) with the plasma resistance \(R\).
The boundary condition at \(z=0\) depends on whether wave or neutral beam is considered. For the wave case, as the effect of the wave is to move already existing charges, we consider an equivalent perfect current generator \(\left.I_{0}\right|_{RF}\) localized at \(z=0\). For the neutral beam case, as the beam brings and separates charges with opposite signs, we consider an equivalent perfect voltage generator \(\left.V_{0}\right|_{NB}\) localized at \(z=0\). We call \(I_{0}=I\left(z=0\right)\) the current of the generator equivalent to the wave, and \(V_{0}=V\left(z=0\right)\) the voltage drop in the beam active region near \(z=0\). These current and voltage generators can be respectively related to the injected RF power and beam momentum as follows.
Writing \(\mathcal{P}_{RF}\left[\mathrm{W}\right]\) the total power absorbed by the plasma from the wave at \(z=0\) where the wave power deposition is localized, one gets
\[P_{RF}\left[\mathrm{W}/\mathrm{m}^{3}\right]=\frac{\mathcal{P}_{RF}\delta \left(z\right)}{ab} \tag{54}\]
where \(\delta\left(z\right)\) is the Dirac distribution. Then from Eq. (2) we can define the equivalent current generator \(\left.I_{0}\right|_{RF}\) associated with the wave drive at \(z=0\) through the relation \(J_{\perp}=\left.I_{0}\right|_{RF}\delta\left(z\right)/b\), so that
\[\left.I_{0}\right|_{RF}=\frac{k_{\perp}}{\omega}\frac{1}{Ba}\mathcal{P}_{RF}. \tag{55}\]
Similarly, we can define from Eq. (28) the equivalent voltage generator \(\left.V_{0}\right|_{NB}=E_{NB}a\) associated with the beam drive at \(z=0\)
\[\left.V_{0}\right|_{NB}=aB\nu\tau\frac{N_{B}}{N_{p}}v. \tag{56}\]
For the wave case the power of the wave equivalent generators is \(\left.I_{0}\right|_{RF}V_{0}\). Under optimal conditions such as discussed in section II, energy conservation implies that the input RF power is equal to the dissipated DC power : \(\left.I_{0}\right|_{RF}V_{0}=\mathcal{P}_{RF}\). Eliminating \(\mathcal{P}_{RF}\) between this last relation and Eq. (55) we recover Eq. (21) as expected.
Because of dissipation the current \(\left.I_{0}\right|_{RF}\) and voltage \(\left.V_{0}\right|_{NB}\) are progressively shunted by the plasma, away from \(z=0\), as a result of the high conductivity along \(z\) and the weak conductivity along \(x\). This decrease is described by the solution Eqs. (52, 53) under the appropriate boundary conditions \(I\left(z=0\right)=I_{0}\) or \(V\left(z=0\right)=V_{0}\) given by Eqs. (55, 56) and \(V\left(z=l\right)=R_{L}I\left(z=l\right)\) at the end of the field lines for a plasma column of length \(l\).
## VII Power dissipation in a loaded plasma slab
### Power requirement
We consider Eqs. (52, 53) with the wave or beam driven generator Eq. (55) or Eq. (56) at \(z=0\), and with the plasma being terminated at \(z=l\) by a resistive load \(R_{L}\) as illustrated on Fig. 6(b). These boundary conditions can be written as
\[I_{-}+I_{+}=I_{0} \tag{57}\]
and
\[R\left(I_{-}\exp-\frac{l}{\lambda}-I_{+}\exp+\frac{l}{\lambda} \right)\\ =R_{L}\left(I_{-}\exp-\frac{l}{\lambda}+I_{+}\exp+\frac{l}{ \lambda}\right). \tag{58}\]
After some elementary algebra, we solve Eqs. (57, 58) for the amplitudes \(I_{\pm}\) and express \(V\left(z=0\right)\) as a function of \(I\left(z=0\right)\) through the definition of \(R_{e}\): \(V_{0}=R_{e}I_{0}\). This resistance \(R_{e}\) is the equivalent resistance of the plasma slab as seen from \(z=0\), and writes
\[\frac{R_{e}}{R}=\frac{R_{L}+R\tanh l/\lambda}{R+R_{L}\tanh l/\lambda}. \tag{59}\]
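The elementary algebra behind Eq. (59) can also be reproduced symbolically. The sketch below (sympy; our own notation, with \(t\) standing for \(\exp l/\lambda\)) solves the boundary conditions Eqs. (57, 58) for \(I_{\pm}\), forms \(V(0)=R(I_{-}-I_{+})\) from Eq. (53), and compares the ratio \(V_{0}/I_{0}\) with Eq. (59).

```python
import sympy as sp

R, RL, I0, t = sp.symbols("R R_L I_0 t", positive=True)  # t stands for exp(l/lambda)
Im, Ip = sp.symbols("I_minus I_plus")

# Boundary conditions (57) and (58), with exp(-l/lambda) = 1/t and exp(+l/lambda) = t
eq1 = sp.Eq(Im + Ip, I0)
eq2 = sp.Eq(R * (Im / t - Ip * t), RL * (Im / t + Ip * t))
sol = sp.solve([eq1, eq2], [Im, Ip], dict=True)[0]

# V(0) = R (I_- - I_+) from Eq. (53); the input resistance is R_e = V(0)/I(0)
Re = sp.cancel(R * (sol[Im] - sol[Ip]) / I0)

# Eq. (59), written with tanh(l/lambda) = (t**2 - 1)/(t**2 + 1)
tanh_l = (t**2 - 1) / (t**2 + 1)
target = R * (RL + R * tanh_l) / (R + RL * tanh_l)
assert sp.simplify(Re - target) == 0
```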
For the wave case, Eq. (55) relates the current \(\left.I_{0}\right|_{RF}\) to the RF power \(\mathcal{P}_{RF}\). This power is used to sustain the steady state current and voltage pattern in the plasma slab \(\left(a,b,l\right)\) against relaxation. The maximum voltage drop in the wave active region \(z=0\) is thus
\[\left.V_{0}\right|_{RF}=R_{e}\frac{k_{\perp}}{a\omega B}\mathcal{P}_{RF}\leq \frac{R}{\tanh l/\lambda}\frac{k_{\perp}}{a\omega B}\mathcal{P}_{RF} \tag{60}\]
where the right hand side of the inequality, \(R_{e}=R/\tanh l/\lambda\), is associated with the optimal choice for the load at \(z=l\), that is \(R_{L}\rightarrow+\infty\). As \(\tanh l/\lambda\) increases from zero up to one when \(l\) increases, a shorter plasma column displays a larger voltage drop for the same power because the charges are more concentrated on the field lines, in the limit that \(l<\lambda\). With the expansion:
\[\left.R_{e}\right|_{R_{L}\rightarrow+\infty}=\frac{R}{\tanh l/\lambda}\approx \frac{\lambda R}{l}=\frac{a}{bl\eta_{\perp}}, \tag{61}\]
the plasma slab behaves as an isotropic conductor with conductivity \(\eta_{\perp}\) and Eq. (60) becomes :
\[\left.V_{0}\right|_{RF}\approx\frac{k_{\perp}}{bl\eta_{\perp}\omega B} \mathcal{P}_{RF} \tag{62}\]
Dissipation across the field lines is ultimately responsible for the limit described by Eq. (62). For such a favorable limit, even if \(\eta_{\perp}\to 0\) or \(\mathcal{P}_{RF}\rightarrow+\infty\) the optimum voltage \(V_{0}\) is limited by the relation Eq. (21), which is a constraint imposed by the wave-particle resonance if we want to optimize the generation process and avoid wasting power into Landau and cyclotron heating.
Using Eq. (40) the power requirement \(\mathcal{P}\sim bl\eta_{\perp}V_{0}^{2}/a\) for a given voltage drop and a given fully ionized plasma under optimal conditions is
\[\left[\frac{\mathcal{P}}{\mathrm{W}}\right]\sim\left[\frac{V_{0}}{\mathrm{MV}}\right]^{2}\left[\frac{\omega_{pe}}{10^{11}\ \mathrm{rad.s}^{-1}}\right]^{2}\left[\frac{l}{\mathrm{m}}\right]\left[\frac{b}{a}\right]\left[\frac{k_{B}T}{mc^{2}}\right]^{-\frac{3}{2}}\left[\frac{\rho_{i}}{a}\right]^{2}\left[\frac{\omega_{pe}}{\omega_{ce}}\right]^{2}, \tag{63}\]
where we assumed \(\ln\Lambda=10\). This result suggests that megavolt voltage drops are accessible for rather low driving power in thermonuclear hydrogen plasmas where typically \(b\sim a\), \(\omega_{pe}\sim\omega_{ce}\) and \(a\geq 10\rho_{i}\).
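To make the order of magnitude explicit, the scaling Eq. (63) can be evaluated for an illustrative set of parameters. The values below are assumptions chosen to match the typical regime quoted above (\(b\sim a\), \(\omega_{pe}\sim\omega_{ce}\), \(a=10\rho_{i}\)), together with an assumed \(k_{B}T=10\) keV and \(l=10\) m; they are not taken from the text, and Eq. (63) only fixes the result up to an order-one prefactor.

```python
# Order-of-magnitude evaluation of Eq. (63) for an illustrative (assumed) plasma
V0_MV = 1.0                       # target voltage drop [MV]
omega_pe = 1.0e11                 # electron plasma frequency [rad/s] (assumed)
l = 10.0                          # column length [m] (assumed)
b_over_a = 1.0                    # slab aspect ratio b/a ~ 1
kT_over_mc2 = 10.0e3 / 511.0e3    # assumed 10 keV electrons
rho_i_over_a = 0.1                # a = 10 rho_i
omega_pe_over_ce = 1.0            # omega_pe ~ omega_ce

P_W = (V0_MV**2 * (omega_pe / 1.0e11)**2 * l * b_over_a
       * kT_over_mc2**(-1.5) * rho_i_over_a**2 * omega_pe_over_ce**2)
print(f"P ~ {P_W:.0f} W for a 1 MV voltage drop")   # a few tens of watts
```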
Up to now we have only considered a current source (equivalent to the wave or the beam) localized near \(z=0\). For wave drive this is true if the resonant particles are chosen with a zero parallel velocity, and/or if the plasma column is very long, and/or if the quasilinear wave diffusion from \(x=0\) to \(x=a\) is fast enough compared to the other processes. This issue of the radial current deposition by a wave must be addressed within the framework of a collisional/quasilinear kinetic model. Similarly the issue of the neutral beam current deposition is to be addressed within a kinetic model. Rather than going down this route, we consider here for completeness, within the previous fluid model, the complementary and more general problem of a broad current deposition profile. Specifically, the wave or beam current deposition is assumed to be broadly distributed all along the field lines, \(0<z<l\), and described by an infinitesimal current source, \(\mathcal{I}dz=\left(I_{0}/l\right)dz\), in each infinitesimal section \(dz\) along \(z\). We consider the equivalent circuit associated with an infinitesimal section \(dz\) as illustrated in Fig. 7(a). The electrical properties of a slice \(\left(a,b,dz\right)\) then take into account an \(\mathcal{I}dz\) current source.
The transmission line equations describing the slab \(\left(a,b,l\right)\) with load \(R_{L}\) at \(z=l\) as illustrated in Fig. 7(b) are
\[\lambda\frac{dV}{dz}= -RI, \tag{64}\] \[\lambda\frac{dI}{dz}= -\frac{V}{R}+\lambda\mathcal{I}. \tag{65}\]
Note that Eqs. (64, 65) will still hold true if considering plasma conductivities and power deposition profiles that are inhomogeneous along \(z\). With the boundary conditions \(I\left(z=0\right)=0\) and \(R_{L}I\left(z=l\right)=V\left(z=l\right)\), the solutions are given by
\[I\left(z\right)= \mathcal{I}\lambda\frac{R\sinh\left(z/\lambda\right)}{R\cosh \left(l/\lambda\right)+R_{L}\sinh\left(l/\lambda\right)}, \tag{66}\] \[V\left(z\right)= R\mathcal{I}\lambda\left[1-\frac{R\cosh\left(z/\lambda \right)}{R\cosh\left(l/\lambda\right)+R_{L}\sinh\left(l/\lambda\right)} \right]. \tag{67}\]
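The distributed-source solutions Eqs. (66, 67) can likewise be checked symbolically against the driven transmission-line equations Eqs. (64, 65) and the boundary conditions \(I(0)=0\), \(V(l)=R_{L}I(l)\); a short sympy sketch (our notation) follows.

```python
import sympy as sp

z, l, lam, R, RL, Isrc = sp.symbols("z l lambda R R_L I_src", positive=True)
D = R * sp.cosh(l / lam) + RL * sp.sinh(l / lam)

# Candidate solutions, Eqs. (66) and (67)
I = Isrc * lam * R * sp.sinh(z / lam) / D
V = R * Isrc * lam * (1 - R * sp.cosh(z / lam) / D)

# Driven transmission-line equations (64) and (65)
assert sp.simplify(lam * sp.diff(V, z) + R * I) == 0
assert sp.simplify(lam * sp.diff(I, z) + V / R - lam * Isrc) == 0

# Boundary conditions: I(0) = 0 and V(l) = R_L I(l)
assert sp.simplify(I.subs(z, 0)) == 0
assert sp.simplify(V.subs(z, l) - RL * I.subs(z, l)) == 0
```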
With these solutions we can now define two equivalent resistances. The first one is simply the ratio of the voltage \(V_{0}=V\left(z=0\right)\) to the total wave or beam driven current \(I_{0}=\int_{0}^{l}\mathcal{I}dz\),
\[\frac{V_{0}}{I_{0}}=R\frac{\lambda}{l}\left[1-\frac{R}{R\cosh\left(l/\lambda \right)+R_{L}\sinh\left(l/\lambda\right)}\right]\underset{R_{L}\rightarrow+ \infty}{\approx}R\frac{\lambda}{l}. \tag{68}\]
The second resistance is more instructive and is associated with the integrated global power balance
\[R_{e}^{\prime}=\frac{\int_{0}^{l}V\left(z\right)\mathcal{I}dz}{\left(\int_{0}^ {l}\mathcal{I}dz\right)^{2}}. \tag{69}\]
Indeed, similarly to what was discussed for the localised source, it is this resistance \(R_{e}^{\prime}\) which now determines the power balance of the wave or beam driven rotation process for a broad power deposition profile. Using Eq. (67) this resistance can be rewritten as
\[R_{e}^{\prime}=R\frac{\lambda}{l}\left[1-\frac{\lambda}{l}\frac{R\sinh\left(l/ \lambda\right)}{R\cosh\left(l/\lambda\right)+R_{L}\sinh\left(l/\lambda\right)} \right]. \tag{70}\]
Interestingly, we find that
\[R_{e}^{\prime}\underset{R_{L}\rightarrow+\infty}{\approx}R\frac{\lambda}{l}, \tag{71}\]
so that the same result is obtained for distributed and localized drives under the optimal condition \(R_{L}\rightarrow+\infty\). In other words, the power requirement is rather insensitive to the current deposition profile along the field lines \(0\leq z\leq l\) when \(R_{L}\rightarrow+\infty\) or \(l<\lambda\).
### Voltage shaping
Besides the power requirement, the model developed here can also be used to study the voltage shaping issue. Indeed, while a careful shaping of the radial power deposition profile can be used to control the radial structure of the electric field, its axial structure is determined by the plasma properties through \(\lambda\), and strategies to control this axial distribution are to be identified. An issue here is that, while the assumption \(\eta_{\text{\tiny{i}}}=\eta_{\text{Spitzer}}\) is confirmed by experiments in fully ionized plasmas, there exists no large body of experimental data for \(\eta_{\perp}\) in fully ionized, magnetized, (supersonic) rotating plasmas. As a result, we cannot accurately calculate the attenuation length \(\lambda\) and the resistance \(R_{e}\) in a fully ionized plasma column of length \(l\). We can however, as we will do now, identify trends.
Consider first the limit \(\lambda>l\). In this limit the plasma column is not highly dissipative and the power needed to sustain a large radial electric field is small if \(R_{L}\) is large. The large voltage drop is however to be handled at the left and right edges of the column with concentric circular end plates, and the issue of the management of high voltage between conductors must then be solved. Consider now the opposite limit \(\lambda<l\). In this limit the plasma column is rather dissipative and the power needed to sustain a large radial electric field will be large. On the other hand the insulation of the endplates terminating the field lines will not be a problem. The former situation, that is limited dissipation \(\lambda>l\), is the one we will focus on in the remainder of this section.
Consider a plasma column of length \(l\) as illustrated in Fig. 8. The wave driven current generator \(I_{0}=\mathcal{P}_{RF}k_{\perp}/a\omega B\) is assumed to be localized around \(z=0\) (\(w\)), and the transverse conductivity \(\eta_{\perp}\) is assumed to become very large near \(z=\pm l\). This end zone (\(e\)) in Fig. 8 can be considered as a short circuit such that \(R_{L}=0\). With these two boundary conditions, \(V\left(z=l\right)=0\) and \(I\left(z=0\right)=I_{0}\), and focusing on the region \(z>0\), the solutions Eqs. (52, 53) give
\[I\left(z\right) =I_{0}\cosh\frac{l-z}{\lambda}\left(\cosh\frac{l}{\lambda} \right)^{-1}, \tag{72}\] \[V\left(z\right) =RI_{0}\sinh\frac{l-z}{\lambda}\left(\cosh\frac{l}{\lambda} \right)^{-1}. \tag{73}\]
Symmetrical solutions are expected for \(z<0\), as illustrated in Fig. 8. Note also that we should take \(2I_{0}\) as the wave driven current flows both on the left and right sides of the central region (\(w\)).
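A final symbolic check confirms that Eqs. (72, 73) satisfy Eqs. (46, 47) together with the short-circuit boundary conditions \(V(z=l)=0\) and \(I(z=0)=I_{0}\) (sympy sketch, our notation).

```python
import sympy as sp

z, l, lam, R, I0 = sp.symbols("z l lambda R I_0", positive=True)
I = I0 * sp.cosh((l - z) / lam) / sp.cosh(l / lam)      # Eq. (72)
V = R * I0 * sp.sinh((l - z) / lam) / sp.cosh(l / lam)  # Eq. (73)

assert sp.simplify(lam * sp.diff(V, z) + R * I) == 0    # Eq. (46)
assert sp.simplify(lam * sp.diff(I, z) + V / R) == 0    # Eq. (47)
assert V.subs(z, l) == 0 and sp.simplify(I.subs(z, 0) - I0) == 0
```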
Although the important problem of how to implement the condition \(R_{L}=0\) at \(z=\pm l\) is left for a future study, we briefly discuss here local ergodization of the magnetic field lines. The required magnetic modulations can be achieved with external coils producing radial and azimuthal components of the magnetic field. The magnetic field lines then display the property of being a Hamiltonian system where the time is replaced by the \(z\) coordinate. If the local modulations have several resonances and enter the regime where the Chirikov criterion is fulfilled, the field lines, which are basically the wires along which the free charges flow, will then explore the full radial extent of the zone depicted in grey (e) on Fig. 8, which will provide an almost perfect short circuit between \(x=0\) and \(x=a\) in the slab model. Ergodization of magnetic field lines is common in plasma physics and particularly in tokamak plasmas, where the principle of magnetic island overlapping has been put forward and tested successfully with the concept of _ergodic divertor_. Yet, the use of this strategy for the problem
Figure 8: A magnetized plasma column with two ergodized zones (\(e\)) and a central wave/beam driven zone (\(w\)).
Figure 7: (a) Equivalent circuit of a \(dz\) slice (\(a,b,dz\)) of the plasma. (b) Equivalent model of wave absorption and charge separation and charge dissipation in the plasma slab (\(a,b,l\)) terminated with loaded endplates at \(z=l\).
at hand raises two problems. First, the short circuit at \(z=l\) implies that the power needed to sustain the radial electric field is very large. From Eq. (73), the power sustaining the generation and confinement of the electric field is
\[I_{0}V_{0}\approx\frac{RI_{0}^{2}l}{\lambda}=\frac{I_{0}^{2}l}{ab\eta_{\text{i}}} \tag{74}\]
The plasma slab thus behaves as an isotropic conductor with conductivity \(\eta_{\text{i}}\). Second, it is not clear that an ergodic zone near the endplates will really protect them from damage, as the short circuit will be the source of intense Joule heating.
Beyond ergodization, alternative strategies to minimize the risk of high voltage damage at the edges of the plasma and to lower the power requirement will have to be established based on the specific material and power constraints of each configuration. Eq. (59) provides the basis for such an analysis. For very large electric fields, and if we let some part of the voltage drop reach the end plates, a preferential combination of electrodes could possibly be used to set up a classical energy recovery system outside the plasma. This part of tolerable voltage will again have to be analyzed with respect to the electrode properties. Finally, we note that the occurrence of inhomogeneity described by Eq. (51), such as the divergence of magnetic field lines, can in principle be used to shape the axial voltage profile and reduce the electric field on the conducting plates. The examination of these possibilities is left for future studies.
## VIII Discussion and conclusion
In this first study on wave and beam large electric field generation and control in the core of a magnetized plasma, we have derived and solved the equation for the axial variation of the voltage drop. We identified \(R\) and \(\lambda\) as the control parameters of the problem. We then used these results to address the issue of the power balance, and of field shaping in the asymptotic regime \(l<\lambda\).
To summarize our findings:
* We have identified, proposed and analyzed two mechanisms for large DC electric field generation inside a magnetized plasma: waves and neutral beams, which are control tools that are already routinely used on modern tokamaks at power levels of the order of tens of megawatts [55]. The relations Eq. (21) and Eq. (30) provide upper bounds for the electric field theoretically achievable with these wave and beam schemes. These upper bounds are in the GV/m range, which allows one to consider tens of MV/m electric field generation in magnetized plasmas.
* We have set up a model of the plasma stationary response to wave and beam power absorption. This model predicts both the electric field penetration from the edge in the classical scheme Fig. 1(a), and the electric field escape from the core central part of a column in the wave or beam driven scheme Fig. 1(b) and Fig. 1(c).
* We have derived the voltage drop equation for an axially inhomogeneous plasma Eq. (51).
* We have identified the two fundamental characteristics of a plasma slab: \(R\), Eq. (42), and \(\lambda\), Eq. (43), and then calculated the input impedance of the plasma slab \(R_{e}\), Eq. (59).
* We derived in Eq. (63) the minimal power required to sustain a given voltage drop, \(\mathcal{P}a\sim bl\eta_{\perp}V_{0}^{2}\), and showed that MV/m fields are within the power range of existing wave and beam control devices in large tokamaks.
To extend this set of new results, other schemes to localize the voltage drop inside the plasma column, far from the edge, can be explored on the basis of Eq. (51) which is to be completed by appropriate loading or biasing conditions at \(s=\int_{0}^{\pm l}dz/\lambda\left(z\right)\).
###### Acknowledgements.
The authors would like to thank Dr. I. E. Ochs, E. J. Kolmes, T. Rubin, and M. E. Mlodik for constructive discussions. This work was supported by ARPA-E Grant No. DE-AR001554. JMR acknowledges Princeton University and the Andlinger Center for Energy + the Environment for the ACEE fellowship which made this work possible.
|
2306.03398
|
Minimum intrinsic dimension scaling for entropic optimal transport
|
Motivated by the manifold hypothesis, which states that data with a high
extrinsic dimension may yet have a low intrinsic dimension, we develop refined
statistical bounds for entropic optimal transport that are sensitive to the
intrinsic dimension of the data. Our bounds involve a robust notion of
intrinsic dimension, measured at only a single distance scale depending on the
regularization parameter, and show that it is only the minimum of these
single-scale intrinsic dimensions which governs the rate of convergence. We
call this the Minimum Intrinsic Dimension scaling (MID scaling) phenomenon, and
establish MID scaling with no assumptions on the data distributions so long as
the cost is bounded and Lipschitz, and for various entropic optimal transport
quantities beyond just values, with stronger analogs when one distribution is
supported on a manifold. Our results significantly advance the theoretical
state of the art by showing that MID scaling is a generic phenomenon, and
provide the first rigorous interpretation of the statistical effect of entropic
regularization as a distance scale.
|
Austin J. Stromme
|
2023-06-06T04:28:12Z
|
http://arxiv.org/abs/2306.03398v2
|
# Minimum intrinsic dimension scaling for entropic optimal transport
###### Abstract
Motivated by the manifold hypothesis, which states that data with a high extrinsic dimension may yet have a low intrinsic dimension, we develop refined statistical bounds for entropic optimal transport that are sensitive to the intrinsic dimension of the data. Our bounds involve a robust notion of intrinsic dimension, measured at only a single distance scale depending on the regularization parameter, and show that it is only the minimum of these single-scale intrinsic dimensions which governs the rate of convergence. We call this the Minimum Intrinsic Dimension scaling (MID scaling) phenomenon, and establish MID scaling with no assumptions on the data distributions so long as the cost is bounded and Lipschitz, and for various entropic optimal transport quantities beyond just values, with stronger analogs when one distribution is supported on a manifold. Our results significantly advance the theoretical state of the art by showing that MID scaling is a generic phenomenon, and provide the first rigorous interpretation of the statistical effect of entropic regularization as a distance scale.
## 1 Introduction
Optimal transport (OT) is a powerful paradigm for comparing probability distributions based on a minimum-energy criterion, and has recently been employed throughout applied science, including statistics, computer science, biology and beyond [22, 2, 23, 24, 25, 26]. Rather than comparing distributions pointwise, OT searches for the most efficient way to transform one into the other, and thus gives practitioners a geometrically meaningful method of contrasting and interpolating data that can be represented as probability distributions. For probability measures \(\mu,\nu\) on \(\mathbb{R}^{d}\) the OT problem, with respect to the squared Euclidean cost \(\|\cdot\|^{2}\), is defined as
\[W_{2}^{2}(\mu,\nu):=\min_{\pi\in\Pi(\mu,\nu)}\int\|x-y\|^{2}\mathrm{d}\pi(x,y), \tag{1.1}\]
where \(\Pi(\mu,\nu)\) denotes the set of probability distributions \(\pi\) on \(\mathbb{R}^{d}\times\mathbb{R}^{d}\) which couple \(\mu\) to \(\nu\), namely such that their marginal on the first \(d\) coordinates is \(\mu\) and their marginal on the last \(d\) coordinates is \(\nu\). The fundamental theorem of OT guarantees that, so long as \(\mu,\nu\) have finite second moments, there is a solution to (1.1); moreover if \(\mu\) has a density
with respect to the Lebesgue measure, there is a unique minimizer which is supported on the graph of a deterministic map [21]. For many applications, such as domain adaptation [13] or cellular biology [14, 15], it is the coupling minimizing (1.1), rather than the cost \(W_{2}^{2}(\mu,\nu)\) itself, which is of greatest interest.
Entropic optimal transport.When discretized, the OT problem (1.1) becomes a linear program (LP) with two equality constraints. Although this problem can be feasibly solved at moderate scale with specialized LP solvers which have computational complexity scaling cubically in the support size [17], in practice, OT is most often approximated with an entropic regularization term [18]. This entropically regularized problem is then solved with a simple iterative rounding algorithm known as Sinkhorn's algorithm [19, 15], and is preferred for its quadratic scaling in the support size, simplicity, and parallelizability. The entropically regularized OT (entropic OT) problem is defined, for a regularization parameter \(\varepsilon>0\), as
\[S_{\varepsilon}(\mu,\nu):=\min_{\pi\in\Pi(\mu,\nu)}\int\|x-y\|^{2}\mathrm{d} \pi(x,y)+\varepsilon\,\mathrm{KL}(\pi\,\|\,\mu\otimes\nu), \tag{1.2}\]
where \(\mu\otimes\nu\) denotes the joint law of \((x,y)\) when \(x\sim\mu\) and \(y\sim\nu\) are independent, and the Kullback-Leibler divergence is defined as
\[\mathrm{KL}(P\,\|\,Q):=\begin{cases}\int\ln\big{(}\frac{\mathrm{d}P}{\mathrm{d}Q}(x)\big{)}\mathrm{d}P(x)&P\ll Q,\\ \infty&P\not\ll Q,\end{cases}\]
and the notation \(P\ll Q\) means \(P\) is absolutely continuous with respect to \(Q\).
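For readers who wish to experiment numerically, the discretized problem (1.2) with uniform marginals can be solved with a few lines of code. The sketch below is a generic log-domain Sinkhorn iteration written by us in Python/NumPy; it is not code from the paper, and the function name and conventions are our own.

```python
import numpy as np

def entropic_ot(C, eps, n_iter=1000):
    """Log-domain Sinkhorn for the discretized problem (1.2) with uniform marginals.

    C is an (n, m) cost matrix and eps > 0 the regularization parameter.
    Returns the entropic OT value and the optimal coupling pi.
    This is an illustrative sketch, not code from the paper.
    """
    n, m = C.shape
    log_mu = np.full(n, -np.log(n))          # log of uniform mu
    log_nu = np.full(m, -np.log(m))          # log of uniform nu
    f, g = np.zeros(n), np.zeros(m)
    for _ in range(n_iter):
        # alternately enforce the two marginal (Schrodinger) equations
        M = (f[:, None] + g[None, :] - C) / eps
        f -= eps * np.logaddexp.reduce(M + log_nu[None, :], axis=1)
        M = (f[:, None] + g[None, :] - C) / eps
        g -= eps * np.logaddexp.reduce(M + log_mu[:, None], axis=0)
    log_p = (f[:, None] + g[None, :] - C) / eps               # density w.r.t. mu x nu
    pi = np.exp(log_p + log_mu[:, None] + log_nu[None, :])    # optimal coupling
    value = float(np.sum(pi * C) + eps * np.sum(pi * log_p))  # cost + eps * KL
    return value, pi
```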
### Statistical aspects of (entropic) OT
Unfortunately, the un-regularized OT problem (1.1) is known to suffer from a statistical curse of dimensionality. To describe this barrier, consider the practical situation in which one does not have access to the entire distributions \(\mu,\nu\), and instead only has access to iid samples of size \(n\) from each distribution, which we write as \(\mathcal{X}:=(x_{1},\ldots,x_{n})\sim\mu^{\otimes n}\) and \(\mathcal{Y}:=(y_{1},\ldots,y_{n})\sim\nu^{\otimes n}\). Let \(\hat{\mu}\) and \(\hat{\nu}\) denote the empirical measures supported on \(\mathcal{X}\) and \(\mathcal{Y}\), respectively, namely
\[\hat{\mu}:=\frac{1}{n}\sum_{i=1}^{n}\delta_{x_{i}}\qquad\text{ and }\qquad\hat{\nu}:=\frac{1}{n}\sum_{j=1}^{n}\delta_{y_{j}}.\]
Given the empirical measures \(\hat{\mu},\hat{\nu}\), the most natural way to estimate the population quantity \(W_{2}^{2}(\mu,\nu)\) is with the plug-in estimator \(W_{2}^{2}(\hat{\mu},\hat{\nu})\). The discrepancy between the empirical OT value and its population counterpart is an old and well-studied area, and it has long been understood that rates like \(n^{-2/d}\) are typical - for instance, if \(d\geqslant 5\), and \(\mu,\nu\) are absolutely continuous with respect to the Lebesgue measure on \([0,1]^{d}\)[16, 17]. It is natural to wonder whether these rates can be improved with other estimators, but it was recently shown they are essentially un-improvable, both for OT value estimation [17, 18] and OT map estimation [19].
Statistical entropic OT.Motivated by its computational benefits and corresponding prevalence in practice, as well as the curse of dimensionality for its un-regularized counterpart, a line of recent work has endeavored to understand the statistical consequences of entropic regularization [1, 1, 19, 20, 21]. This work has shown that the entropic OT problem (1.2) offers significantly improved statistical performance, essentially transferring the curse of dimensionality from the sample size \(n\) to the regularization parameter \(\varepsilon\); the following result describes the current state of the art when the measures are bounded and \(\varepsilon\) is small.
**Theorem 1** ([1, 20]).: _Suppose \(\mu,\nu\) are probability measures on \(\mathbb{R}^{d}\) with support contained in the unit ball \(B(0,1)\). Then for a constant \(C_{d}\) depending only on the dimension \(d\),_
\[\mathbb{E}[|S_{\varepsilon}(\hat{\mu},\hat{\nu})-S_{\varepsilon}(\mu,\nu)|] \leqslant C_{d}\cdot\frac{1+\varepsilon^{-\lfloor d/2\rfloor}}{\sqrt{n}},\]
_where the expectation is taken over the iid samples \(\mathcal{X}\sim\mu^{\otimes n},\mathcal{Y}\sim\nu^{\otimes n}\)._
By giving an explicit trade-off between regularization and statistical error, this result offers a strong and flexible description of the performance of entropic OT in practice. In this work, we build on Theorem 1 and develop a refined theory of the statistical behavior of entropic OT.
### Effective statistical dimension of entropic OT
To describe our refinements, observe that the estimate in Theorem 1 is _extrinsic_ in the sense that the dimension \(d\) is appearing because the ambient space is \(\mathbb{R}^{d}\), rather than anything to do with the _intrinsic_ dimensions of \(\mu\) and \(\nu\). This extrinsic dimension dependence is fundamental to the proof, and can be very pessimistic. For a toy example, suppose \(\mu\) and \(\nu\) are supported on the same \(k\)-dimensional hyperplane in \(\mathbb{R}^{d}\): then Theorem 1 itself implies that the statistical rate has the potentially much milder dependence \(\varepsilon^{-\lfloor k/2\rfloor}\) on the regularization parameter \(\varepsilon\). In such a case, we can say that \(\mu,\nu\) have extrinsic dimension \(d\) yet have intrinsic dimension (at most) \(k\). Well beyond toy settings, the widely believed manifold hypothesis states that natural data with a high extrinsic dimension is typically near to or on a low-dimensional manifold embedded in Euclidean space, and so has low intrinsic dimension [1, 1]. Given the ubiquity of entropic OT in practice, it is therefore of major interest to determine the effective statistical dimension of entropic OT, and identify how it relates to the extrinsic dimension \(d\) as well as the intrinsic dimensions of \(\mu\) and \(\nu\). In this work, we thus study the following question:
_Suppose \(\mu\) and \(\nu\) are supported on \(\mathbb{R}^{d}\), yet have (informally speaking) intrinsic dimensions \(d_{\mu}\) and \(d_{\nu}\). How does the statistical rate of convergence of entropic OT depend on \(d,d_{\mu}\), and \(d_{\nu}\)?_
Our results will measure intrinsic dimension through the covering numbers of the supports of \(\mu\) and \(\nu\). To introduce covering numbers, write the closed ball centered at \(z_{0}\in\mathbb{R}^{d}\) with radius \(r>0\) as
\[B^{\mathrm{cl}}(z_{0},r):=\big{\{}z\in\mathbb{R}^{d}\colon\|z-z_{0}\|\leqslant r \big{\}}.\]
Then the covering number of a set \(A\subset\mathbb{R}^{d}\) at scale \(\delta>0\) is defined as
\[\mathcal{N}(A,\|\cdot\|,\delta):=\min\Big{\{}K\in\mathbb{N}_{>0}\ \Big{|}\ \exists z_{1},\ldots z_{K}\in\mathbb{R}^{d}\,:\ A\subset\bigcup_{k=1}^{K}B^{ \mathrm{cl}}(z_{k},\delta)\Big{\}}. \tag{1.3}\]
It will be convenient to define
\[\mathcal{N}(\mu,\delta):=\mathcal{N}(\mathrm{supp}(\mu),\|\cdot\|,\delta), \qquad\mathcal{N}(\nu,\delta):=\mathcal{N}(\mathrm{supp}(\nu),\|\cdot\|, \delta),\qquad\delta>0.\]
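To make the single-scale quantities \(\mathcal{N}(\mu,\delta)\) and \(\mathcal{N}(\nu,\delta)\) concrete, a greedy cover of a finite point cloud provides a simple upper bound on the covering number of its support at scale \(\delta\). The Python sketch below is ours and is meant only to illustrate definition (1.3).

```python
import numpy as np

def greedy_covering_number(points, delta):
    """Size of a greedy delta-cover of a finite point cloud.

    Each uncovered point becomes a new ball center, so the returned count is
    an upper bound on the covering number of the cloud at scale delta (and,
    since the centers are delta-separated, a lower bound on it at scale delta/2).
    """
    remaining = np.asarray(points, dtype=float)
    n_centers = 0
    while len(remaining) > 0:
        center = remaining[0]
        dist = np.linalg.norm(remaining - center, axis=1)
        remaining = remaining[dist > delta]   # drop everything this ball covers
        n_centers += 1
    return n_centers

# Example: a sample from a circle (intrinsic dimension 1) embedded in R^10
rng = np.random.default_rng(0)
theta = rng.uniform(0.0, 2.0 * np.pi, size=2000)
X = np.zeros((2000, 10))
X[:, 0], X[:, 1] = np.cos(theta), np.sin(theta)
print(greedy_covering_number(X, 0.1))   # grows like 1/delta, not delta**(-10)
```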
Contributions.Using this notation, we can give our answer to the above question:
**MID scaling:**: _the statistical rate of convergence of entropic OT can be upper bounded by quantities whose dimension-dependence is purely contained within the minimum covering number at scale \(\varepsilon\), namely \(\mathcal{N}(\mu,\varepsilon)\wedge\mathcal{N}(\nu,\varepsilon)\)._
We call this the minimum intrinsic dimension scaling (MID scaling) phenomenon. We emphasize that the MID scaling phenomenon encapsulates two related yet distinct phenomena:
1. **Minimum:** only the _minimum_ of the intrinsic dimensions governs the convergence
2. **Intrinsic Dimension:** the dimension-dependence is intrinsic to \(\mu\) and \(\nu\), and in fact is intrinsic _at a single distance scale depending on the regularization parameter._
As we elaborate on in section 3, the MID scaling phenomenon provides a strong statistical justification for entropic OT in the context of the manifold hypothesis. It shows that distributions with support which may not be smooth or even locally low-dimensional can still have significantly improved dimension-dependence in their statistical rates, and clearly describes the statistical role of the regularization parameter \(\varepsilon\) as a distance scale. Moreover, it identifies the surprising fact that the dimension-dependence of entropic OT is driven by the lower-dimensional (in this single-scale sense) distribution.
The above statement of MID scaling is in the setting of Theorem 1, but our main results establish MID scaling more generally for bounded, Lipschitz costs, and for entropic OT quantities beyond values, including maps and densities. As our main results in this setting, we prove MID scaling for entropic OT values in Theorem 2, for entropic OT maps in empirical norms in Theorem 7, and for entropic OT densities in empirical norms in Theorem 8. Under the additional assumption that one distribution is supported on an embedded manifold, we show that MID scaling also holds for entropic dual potentials with fast, population norm convergence in Theorem 9, and apply this result to prove stronger analogs of the previous results on value, map, and density estimation.
Organization of the paper.We conclude this section with a discussion of related work and a summary of our notation. In section 2, we state our assumption on the cost function and give background and more notation for the entropic OT problem. In section 3, we state our main results and give examples and discussion. In section 4, we describe some preliminary observations that form the foundation of our approach. In section 5, we prove our main results on MID scaling with a thorough exposition of our proof strategy in the
case of value estimation. In section 6, we prove our main results in the case where one distribution is supported on an embedded manifold. In Appendix A, we collect some background on embedded manifolds that we use in section 6, and in Appendix B we include some deferred proofs.
Related work.Taken as a whole, our results provide strong evidence of the MID scaling phenomenon for entropic OT. In fact, there is additional evidence for MID scaling and related phenomena in the literature. Recent work studied the continuous to discrete case, and established that entropic OT maps achieve dimension-free rates of convergence, consistent with MID scaling [23]. And in the un-regularized setting, a similar phenomenon was recently established for value estimation, where it was dubbed "lower complexity adaptation" [13]. The lower complexity adaptation phenomenon also states that the minimum of the intrinsic dimensions of \(\mu\) and \(\nu\) dictates the rate of convergence of un-regularized OT, but their notion of intrinsic dimension is different from ours. Indeed, as is common in the un-regularized literature, the intrinsic dimensions \(d_{\mu}\) and \(d_{\nu}\) in lower complexity adaptation are defined using the covering numbers at many distance scales, and so are distinct from the single-scale intrinsic dimensions in MID scaling. Also, their results require some structural assumptions, whereas our results show that MID scaling in the entropic setting is quite generic. In light of these results, it is natural to expect that the minimum part of MID scaling holds in greater generality, potentially including un-regularized maps and plans, and alternate forms of regularization.
To the best of our knowledge, there is only one prior work which considers the sample complexity of regularized OT and intrinsic dimension [1]. Those results apply to more general forms of regularization, but are only for value estimation and incur worse \(n\) dependence without identifying MID scaling. Concurrently with this work, [12] extended the lower complexity adaptation phenomenon to entropic OT values and Gromov-Wasserstein distances, yielding bounds which are similar to those of the un-regularized problem, with the curse of dimensionality residing primarily in the sample size, and which are mostly incomparable to our own.
As we discuss in detail in section 5, our technical approach is an intrinsic dimension-sensitive refinement of [10] where fully dimension-free bounds were established for entropic OT values, maps, and densities, but with exponential factors in \(1/\varepsilon\). An alternative estimator of the entropic OT map which achieves sub-exponential dependence on \(1/\varepsilon\) was proposed in [11]. Other works have proven convergence of the dual potentials [11], used entropic OT quantities as computationally efficient estimators for their un-regularized counterparts [12, 23], considered the convergence of entropic OT maps [22, 21], and studied the sample complexity of entropically regularized Gromov-Wasserstein distances [10]. Significant recent effort has been devoted to developing central limit theorems for entropic OT [1, 20, 13, 14, 15, 16]. Finally, a related form of minimum dimension-dependence appears in the study of asymptotics for entropic OT as \(\varepsilon\to 0^{+}\)[17].
Regarding the provenance of Theorem 1, we remark that it was originally proven for smooth, Lipschitz costs beyond \(\ell_{2}^{2}\), but with an exponential dependence on \(1/\varepsilon\)[18]. The follow-up work [22] showed how to remove this exponential factor in the compactly supported case that we study here, but primarily concentrated on extending Theorem 1 to un-bounded distributions.
Finally, our work can be seen as an entropic analog to the long line of work on the convergence of the un-regularized OT problem, which dates back to Dudley [1] and encompasses precise asymptotics [1] as well as finer behavior in lower dimensions [1, 2]. In fact, the analysis of un-regularized OT is naturally sensitive to intrinsic dimension, at least when intrinsic dimension is measured through covering numbers at many scales [1, 1, 2]. The discrepancy between Theorem 1 and the natural appearance of intrinsic dimension in the un-regularized OT problem provides part of the motivation for this work. More discussion of the similarities and differences between this line of work and our results is provided in section 3.
Notation.Given \(a,b\in\mathbb{R}\), we write the minimum \(a\wedge b:=\min(a,b)\), and the maximum \(a\lor b:=\max(a,b)\). For a positive integer \(K\in\mathbb{N}_{>0}\), we write \([K]=\{1,\ldots,K\}\). We always work with Borel probability distributions. Given probability distributions \(P,Q\) on \(\mathbb{R}^{d}\), we denote their trivial coupling \(P\otimes Q\), which is uniquely defined by taking Borel sets \(A,B\subset\mathbb{R}^{d}\) to \((P\otimes Q)(A\times B):=P(A)Q(B)\). The support of \(P\) is denoted \(\operatorname{supp}(P)\), and is defined to be the set of all points \(x\in\mathbb{R}^{d}\) such that \(P\) assigns positive value to all open sets containing \(x\). The \(\ell_{2}\) Euclidean norm is always written \(\|\cdot\|\), without a subscript. Given a Borel measurable \(\alpha\colon\mathbb{R}^{d}\to\mathbb{R}^{k}\) and \(p\in[1,\infty)\), we define the \(L^{p}(P)\) norm as
\[\|\alpha\|_{L^{p}(P)}:=\Big{(}\int\|\alpha(x)\|^{p}\mathrm{d}P(x)\Big{)}^{1/p}.\]
The \(\sup\) norm \(\|\alpha\|_{L^{\infty}(P)}\) is defined to be the essential supremum of \(\|\alpha\|\) with respect to \(P\). For a Borel measurable \(\beta\colon\mathbb{R}^{d}\to\mathbb{R}\), we will variously write
\[\int\beta(x)\mathrm{d}P(x)=\mathbb{E}_{P}[\beta]=P(\beta).\]
We will also write
\[\operatorname{Var}_{P}(\beta):=\mathbb{E}_{P}[(\beta-\mathbb{E}_{P}[\beta])^ {2}].\]
The notation \(u\lesssim v\) indicates \(u\leqslant Cv\) for a constant \(C\); whether the constant \(C\) is numerical or problem-dependent is a matter of context. The suppressed constants in our main results on MID scaling, described in sections 3.1 and 3.2, are numerical, while the suppressed constants for our results on embedded manifolds, described in section 3.3, depend on the low-dimensional distribution.
Given a metric \(\operatorname{dist}_{N}\) on a set \(N\subset\mathbb{R}^{d}\), we will write the \(\operatorname{dist}_{N}\)-ball of radius \(r\) around \(z_{0}\in N\) as
\[B_{\operatorname{dist}_{N}}(z_{0},r):=\{z\in N\colon\operatorname{dist}_{N}(z,z_{0})<r\}.\]
If \(\operatorname{dist}_{N}=\|\cdot\|\) and \(N=\mathbb{R}^{d}\), we simply write \(B(z_{0},r)\). The closed ball is written
\[B_{\operatorname{dist}_{N}}^{\mathrm{cl}}(z_{0},r):=\{z\in N\colon\operatorname {dist}_{N}(z,z_{0})\leqslant r\},\]
and again, if \(\operatorname{dist}_{N}=\|\cdot\|\) and \(N=\mathbb{R}^{d}\), we simply write \(B^{\mathrm{cl}}(z_{0},r)\).
Throughout, we will assume that \(\mu,\nu\) are probability measures on \(\mathbb{R}^{d}\), and we have iid samples \(\mathcal{X}:=(x_{1},\ldots,x_{n})\sim\mu^{\otimes n}\) and \(\mathcal{Y}:=(y_{1},\ldots,y_{n})\sim\nu^{\otimes n}\). Expectations \(\mathbb{E}\) without a subscript will always refer to integration over both of these samples.
## 2 Assumptions and background on entropic OT
In this section, we state our assumption on the cost function, and then establish important notation while reviewing duality for the entropic OT problem.
### Assumptions on cost
Our results are most naturally stated for bounded, Lipschitz costs rather than the squared Euclidean cost \(\|\cdot\|^{2}\) and so we will work at this level of generality throughout the remainder of the paper. We make the following formal assumption on our cost function \(c\).
**Assumption 1** (Bounded and uniformly Lipschitz cost).: _We assume the cost \(c\) is Borel measurable and \(c\in L^{\infty}(\mu\otimes\nu)\). By re-scaling the problem we assume without loss of generality that_
\[\|c\|_{L^{\infty}(\mu\otimes\nu)}\leqslant 1.\]
_Also, we assume there exists \(L>0\) such that for all \(y\in\operatorname{supp}(\nu)\),_
\[|c(x,y)-c(x^{\prime},y)|\leqslant L\|x-x^{\prime}\|\qquad\forall x,x^{\prime} \in\operatorname{supp}(\mu),\]
_and, similarly, for all \(x\in\operatorname{supp}(\mu)\),_
\[|c(x,y)-c(x,y^{\prime})|\leqslant L\|y-y^{\prime}\|\qquad\forall y,y^{\prime} \in\operatorname{supp}(\nu).\]
We remark that in the case where \(c\) is the squared Euclidean cost \(\|\cdot\|^{2}\), Assumption 1 just means the supports of \(\mu\) and \(\nu\) are compact.
### Strong duality for entropic OT
Given the cost function \(c\), we define the population entropic OT problem as
\[S_{\varepsilon}(\mu,\nu):=\inf_{\pi\in\Pi(\mu,\nu)}\big{\{}\pi(c)+\varepsilon \operatorname{KL}(\pi\,\|\,\mu\otimes\nu)\big{\}}. \tag{2.1}\]
The empirical entropic OT problem is then defined
\[S_{\varepsilon}(\hat{\mu},\hat{\nu}):=\inf_{\pi\in\Pi(\hat{\mu},\hat{\nu})} \big{\{}\pi(c)+\varepsilon\operatorname{KL}(\pi\,\|\,\hat{\mu}\otimes\hat{\nu })\big{\}}. \tag{2.2}\]
There are unique optimal solutions to the primal problems (2.1) and (2.2), which we write as \(\pi_{\varepsilon}\) and \(\hat{\pi}_{\varepsilon}\), respectively. We distinguish between \(c\) and \(S_{\varepsilon}(\mu,\nu)\) by referring to the former as the _cost function_, and the latter as the _entropic OT value_.
Strong duality.Under Assumption 1 on the cost \(c\), a form of strong duality holds for both the population and empirical entropic OT problems. We refer to [10] for these results as well as a thorough review of the literature on this topic. For the population entropic OT problem, strong duality is
\[S_{\varepsilon}(\mu,\nu)=\sup_{(f,g)\in L^{\infty}(\mu)\times L^{\infty}(\nu) }\Phi_{\varepsilon}(f,g):=\mu(f)+\nu(g)-\varepsilon(\mu\otimes\nu)(e^{- \varepsilon^{-1}(c-f-g)}-1).\]
The function \(\Phi_{\varepsilon}\colon L^{\infty}(\mu)\times L^{\infty}(\nu)\to\mathbb{R}\) is the population entropic OT dual function. Similarly for the empirical entropic OT problem, strong duality is stated
\[S_{\varepsilon}(\hat{\mu},\hat{\nu})=\sup_{(f,g)\in L^{\infty}(\hat{\mu})\times L ^{\infty}(\hat{\nu})}\hat{\Phi}_{\varepsilon}(f,g):=\hat{\mu}(f)+\hat{\nu}(g)- \varepsilon(\hat{\mu}\otimes\hat{\nu})(e^{-\varepsilon^{-1}(c-f-g)}-1).\]
And this \(\hat{\Phi}_{\varepsilon}\colon L^{\infty}(\hat{\mu})\times L^{\infty}(\hat{ \nu})\to\mathbb{R}\) is the empirical entropic OT dual function.
Population and empirical dual potentials.The population and empirical dual problems have solutions \((f_{\varepsilon},g_{\varepsilon})\in L^{\infty}(\mu)\times L^{\infty}(\nu)\) and \((\hat{f}_{\varepsilon},\hat{g}_{\varepsilon})\in L^{\infty}(\hat{\mu})\times L ^{\infty}(\hat{\nu})\), respectively. These solutions are unique up to the translation \((f,g)\mapsto(f-a,g+a)\) for \(a\in\mathbb{R}\), and we thus specify the solutions we consider by assuming \(\nu(g_{\varepsilon})=0\) and \(\hat{\nu}(\hat{g}_{\varepsilon})=0\).
Population and empirical densities.For notational convenience, we also define the population density, for \(\mu\)-almost every \(x\) and \(\nu\)-almost every \(y\), as
\[p_{\varepsilon}(x,y):=\frac{\mathrm{d}\pi_{\varepsilon}}{\mathrm{d}(\mu\otimes \nu)}(x,y)=e^{-\varepsilon^{-1}(c(x,y)-f_{\varepsilon}(x)-g_{\varepsilon}(y))}. \tag{2.3}\]
And we similarly define, for all \(x\in\mathcal{X}\) and \(y\in\mathcal{Y}\), the empirical density
\[\hat{p}_{\varepsilon}(x,y):=\frac{\mathrm{d}\hat{\pi}_{\varepsilon}}{\mathrm{ d}(\hat{\mu}\otimes\hat{\nu})}(x,y)=e^{-\varepsilon^{-1}(c(x,y)-\hat{f}_{ \varepsilon}(x)-\hat{g}_{\varepsilon}(y))} \tag{2.4}\]
Marginal constraints for the dual potentials.The marginal constraints \(\pi_{\varepsilon}\in\Pi(\mu,\nu)\) and \(\hat{\pi}_{\varepsilon}\in\Pi(\hat{\mu},\hat{\nu})\) in fact define necessary and sufficient optimality conditions for the corresponding dual potentials through equations (2.3) and (2.4), respectively. The resulting system of equations is sometimes known as the _Schrodinger system_, and is fundamental to entropic optimal transport. For the population potentials, the marginal constraints imply that for \(\mu\)-almost every \(x_{0}\) and \(\nu\)-almost every \(y_{0}\),
\[1=\int e^{-\varepsilon^{-1}(c(x,y)-f_{\varepsilon}(x_{0})-g_{\varepsilon}(y)) }\mathrm{d}\nu(y),\qquad 1=\int e^{-\varepsilon^{-1}(c(x,y)-f_{\varepsilon}(x)-g_{ \varepsilon}(y_{0}))}\mathrm{d}\mu(x). \tag{2.5}\]
The empirical potentials \(\hat{f}_{\varepsilon},\hat{g}_{\varepsilon}\) satisfy the analogous marginal equations: for all \(x\in\mathcal{X}\) and \(y\in\mathcal{Y}\),
\[1=\frac{1}{n}\sum_{j=1}^{n}e^{-\varepsilon^{-1}(c(x,y_{j})-\hat{f}_{ \varepsilon}(x)-\hat{g}_{\varepsilon}(y_{j}))},\qquad 1=\frac{1}{n}\sum_{i=1}^{n}e^{- \varepsilon^{-1}(c(x_{i},y)-\hat{f}_{\varepsilon}(x_{i})-\hat{g}_{\varepsilon }(y))}. \tag{2.6}\]
Canonical extensions of the dual potentials.In fact, the marginal equations (2.6) yield a canonical means of extending the empirical entropic dual potentials to functions on all of \(\mathbb{R}^{d}\)[1, 20, 21]. We observe that solving for \(\hat{f}_{\varepsilon}\) in (2.6) yields, for \(x\in\mathcal{X}\),
\[\hat{f}_{\varepsilon}(x)=-\varepsilon\ln\big{(}\frac{1}{n}\sum_{j=1}^{n}e^{- \varepsilon^{-1}(c(x,y_{j})-\hat{g}_{\varepsilon}(y_{j}))}\big{)}. \tag{2.7}\]
Since \(c\) is defined everywhere, this equation actually makes sense for all \(x\in\operatorname{supp}(\mu)\), and we thus _define_ \(\hat{f}_{\varepsilon}\) on all of \(\operatorname{supp}(\mu)\) with this formula. Similarly, we put,
\[\hat{g}_{\varepsilon}(y):=-\varepsilon\ln\big{(}\frac{1}{n}\sum_{i=1}^{n}e^{- \varepsilon^{-1}(c(x_{i},y)-\hat{f}_{\varepsilon}(x_{i}))}\big{)}\qquad y\in \operatorname{supp}(\nu). \tag{2.8}\]
Using Equation (2.4), we extend the empirical density \(\hat{p}_{\varepsilon}\) to all of \(\operatorname{supp}(\mu)\times\operatorname{supp}(\nu)\) as well.
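Computationally, the canonical extension (2.7) is a log-sum-exp over the sample \(\mathcal{Y}\). A brief Python sketch (our code; it assumes the fitted in-sample potentials \(\hat{g}_{\varepsilon}(y_{j})\) are available, e.g. from a Sinkhorn solver) is given below; the extension (2.8) is symmetric.

```python
import numpy as np

def extend_f_hat(x, Y, g_hat, cost, eps):
    """Canonical out-of-sample extension (2.7) of the empirical potential.

    x     : a new point in supp(mu)
    Y     : the n sample points y_1, ..., y_n (iterable)
    g_hat : array of fitted empirical potentials g_hat_eps(y_j)
    cost  : function c(x, y) satisfying Assumption 1
    eps   : regularization parameter
    """
    c_row = np.array([cost(x, y) for y in Y])
    # f_hat(x) = -eps * log( (1/n) * sum_j exp(-(c(x, y_j) - g_hat_j)/eps) )
    return -eps * (np.logaddexp.reduce((g_hat - c_row) / eps) - np.log(len(c_row)))
```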
## 3 Main results
In this section we describe our main results on the effective statistical dimension of entropic optimal transport. In section 3.1, we introduce and discuss MID scaling in the context of entropic OT value estimation. In section 3.2, we state our results which establish MID scaling for entropic OT map and density estimation. And in section 3.3, we show how these results can be strengthened in the case where one of the measures is supported on an embedded manifold. Throughout, we work under Assumption 1 on the cost function.
### Introduction to MID scaling with value convergence
In this section, we introduce the reader to MID scaling in the context of the convergence of entropic OT values. We state our main result on MID scaling for values, Theorem 2, give several examples, and then conclude with discussion of its statistical significance and tightness.
MID scaling for value estimation.Recall the definition of covering numbers from (1.3), and that we define
\[\mathcal{N}(\mu,\delta):=\mathcal{N}(\operatorname{supp}(\mu),\|\cdot\|, \delta),\qquad\mathcal{N}(\nu,\delta):=\mathcal{N}(\operatorname{supp}(\nu), \|\cdot\|,\delta),\qquad\delta>0.\]
Our main result on entropic OT value estimation follows.
**Theorem 2** (MID scaling for values).: _For numerical constants independent of all problem parameters,_
\[\mathbb{E}[|S_{\varepsilon}(\hat{\mu},\hat{\nu})-S_{\varepsilon}(\mu,\nu)|] \lesssim(1+\varepsilon)\sqrt{\frac{\mathcal{N}(\mu,\frac{\varepsilon}{L}) \wedge\mathcal{N}(\nu,\frac{\varepsilon}{L})}{n}}.\]
The only dimensional quantity in this estimate is contained in the minimum covering numbers at scale \(\varepsilon\), demonstrating the MID scaling phenomenon. We emphasize that this result only requires that the cost \(c\) is bounded and Lipschitz (Assumption 1), in contrast to the smoothness assumptions in most of the previous literature.
Examples of MID scaling.To gain a feeling for Theorem 2, let us consider some examples.
**Example 3** (Generic distributions in \(B(0,1)\)).: For generic probability distributions, we can apply Theorem 2 with standard upper bounds on covering numbers in Euclidean space [23, Proposition 4.2.12]. We find that if \(\mu,\nu\) are supported in \(B(0,1)\subset\mathbb{R}^{d}\), then for numerical constants independent of all problem parameters, and for all \(\varepsilon>0\),
\[\mathbb{E}[|S_{\varepsilon}(\hat{\mu},\hat{\nu})-S_{\varepsilon}(\mu,\nu)|] \lesssim(1+\varepsilon)\cdot\left(1+\frac{2L}{\varepsilon}\right)^{d/2}\cdot \frac{1}{\sqrt{n}}.\]
Specializing to the case of the \(\ell_{2}^{2}\) cost, this bound nearly recovers Theorem 1, being worse by a factor of \(\varepsilon^{-1/2}\) in the case where \(d\) is odd. In fact, Theorem 1 applies more generally to costs which are both Lipschitz and smooth to degree \(\lceil d/2\rceil\)[1]. The authors of that work observed empirically that their smoothness assumption seemed unnecessary and left it as an open problem to remove that assumption. Theorem 2 therefore resolves this problem.1
Footnote 1: For large \(\varepsilon\), our bound diverges while Theorem 1 becomes \(O(1/\sqrt{n})\), but it is straightforward to modify our proofs to fully recover Theorem 1 in this case.
In cases where just one of the measures is assumed to have low intrinsic dimension, we can obtain bounds which only depend on that measure.
**Example 4** (Semi-discrete).: Suppose \(\nu\) is supported on \(K\) points. Then for numerical constants independent of all problem parameters, and for all \(\varepsilon>0\),
\[\mathbb{E}[|S_{\varepsilon}(\hat{\mu},\hat{\nu})-S_{\varepsilon}(\mu,\nu)|] \lesssim(1+\varepsilon)\cdot\sqrt{\frac{K}{n}}.\]
**Example 5** (Embedded manifold).: When \(\nu\) is supported on a \(d_{\nu}\)-dimensional compact, smooth, embedded Riemannian manifold without boundary, we can apply Theorem 2 with the bound \(\mathcal{N}(\nu,\delta)\leqslant C_{\nu}\delta^{-d_{\nu}}\), valid for some \(C_{\nu}>0\) and \(\delta\) sufficiently small (Proposition 43 in Appendix A formally verifies this fact). In this case, Theorem 2 implies that for \(\varepsilon>0\) sufficiently small and numerical constants independent of all problem parameters,
\[\mathbb{E}[|S_{\varepsilon}(\hat{\mu},\hat{\nu})-S_{\varepsilon}(\mu,\nu)|] \lesssim\sqrt{C_{\nu}}(1+\varepsilon)\cdot\left(\frac{L}{\varepsilon}\right) ^{d_{\nu}/2}\cdot\frac{1}{\sqrt{n}}.\]
Because MID scaling involves only a single distance scale, the above examples can be generalized to sets which are only low dimensional at some scales.
**Example 6** (\(\delta\)-fattening of sets [24]).: For \(\delta\geqslant 0\) and \(A\subset\mathbb{R}^{d}\), the \(\delta\)-fattening of \(A\) is
\[A_{\delta}:=\bigcup_{a\in A}B^{\mathrm{cl}}(a,\delta).\]
Suppose \(\mathrm{supp}(\nu)\subset A_{\delta}\) for some \(\delta>0\) and \(A\) such that \(\mathcal{N}(A,\|\cdot\|,\tau)\leqslant C_{A}\tau^{-k}\) for all \(\tau\) sufficiently small and some constants \(C_{A},k\geqslant 0\). Note that covering numbers on \(A_{\delta}\) can be compared to those on \(A\) itself, since for \(\tau\geqslant\delta\) we have \(\mathcal{N}(A_{\delta},\|\cdot\|,\tau)\leqslant\mathcal{N}(A,\|\cdot\|,\tau-\delta)\). Hence for \(\varepsilon>L\delta\) sufficiently small,
\[\mathbb{E}[|S_{\varepsilon}(\hat{\mu},\hat{\nu})-S_{\varepsilon}(\mu,\nu)|] \lesssim\sqrt{C_{A}}(1+\varepsilon)\cdot\left(\frac{L}{\varepsilon-L\delta} \right)^{k/2}\cdot\frac{1}{\sqrt{n}}.\]
The above example illustrates that, so long as the ratio \(\varepsilon/L\) is significantly larger than the fattening scale \(\delta\), the rates are essentially the same as if \(\operatorname{supp}(\nu)\) were actually contained in \(A\), despite the fact that \(\operatorname{supp}(\nu)\) may be full-dimensional at scale \(\delta\). In this way, the dimension-dependence of MID scaling is oblivious to features below scale \(\varepsilon/L\).
Discussion of MID scaling.We emphasize that MID scaling is similar to, but distinct from, the intrinsic dimension-dependence in the un-regularized OT literature [11, 12, 13]. While in both settings the minimum of the intrinsic dimensions governs the rate of convergence, the notion of intrinsic dimension is different. In the un-regularized setting, intrinsic dimension is characterized through uniform covering number control at small scales, whereas in MID scaling the covering numbers play a role, but only at a single distance scale. The convergence of un-regularized OT is, in fact, adaptive to multi-scale behavior, but the relevant scales are determined by the sample size \(n\), meaning that milder covering numbers at some scales only translate to improved rates while \(n\) is not too large; in fact, such rates are known to be tight [12]. In contrast, MID scaling shows that entropic OT benefits from better covering number control at some scales _for all sample sizes_\(n\). Essentially, entropic regularization decouples the sample size and the distance scale of the problem, allowing for a flexible trade-off between the intrinsic dimension of the data distribution and the amount of regularization.2
Footnote 2: Thanks to Yann Ollivier for stimulating comments on this point.
MID scaling helps clarify the statistical role of entropic regularization, showing that beyond its well-known computational virtues, entropy also provides statistical regularization by specifying a distance scale, allowing the user to balance the intrinsic curse of dimensionality of the data with the statistical difficulty of the problem. Because of the manifold hypothesis, which states that natural data is typically supported on or near a low-dimensional embedded manifold, we expect data to have significantly smaller intrinsic dimension than extrinsic dimension. And so a means to flexibly adapt optimal transport to such intrinsic low-dimensional structure, particularly approximate low-dimensional structure, is of major interest. MID scaling demonstrates that entropic regularization provides this benefit for optimal transport.
**Remarks on tightness.** While the upper bounds in this work substantiate MID scaling and the statistical benefits of entropic regularization, a complete statistical account of entropic OT requires lower bounds. We leave a thorough study of this interesting direction to future work, but do give some indications on the tightness of Theorem 2 here. To this end, we note that the CLT for entropic OT states that the asymptotic variance of \(\sqrt{n}(S_{\varepsilon}(\hat{\mu},\hat{\nu})-S_{\varepsilon}(\mu,\nu))\) is \(\operatorname{Var}_{\mu}(f_{\varepsilon})+\operatorname{Var}_{\nu}(g_{\varepsilon})\) [10]. In particular, it follows that the dependence on \(n\) in Theorem 2 is optimal, and moreover that Theorem 2 is tight in the case where \(\nu\) is supported on two points (see Example 4).
However, this example doesn't rule out the possibility that the dependence on the covering numbers of \(\mu\) and \(\nu\) can be improved. To reason about such issues, we can reduce to lower bounds for the un-regularized OT problem. For example, it is known that the approximation error \(|S_{\varepsilon}-S_{0}|\) is \(O(\varepsilon)\), up to logarithmic factors [10]. Combining this approximation error with minimax lower bounds for estimating un-regularized OT
values then implies a firm speed limit on statistical bounds for entropic OT values. Arguing along these lines, we show in Appendix B.1 that Theorem 2 implies that entropic estimators can estimate \(W_{1}\) distances at the rate \(n^{-1/(d+2)}\), close to the minimax optimal \(n^{-1/d}\) rate [13], and that moreover the covering number dependence in the bound of Theorem 2 cannot be improved in general.
### MID scaling for maps and densities
Theorem 2 gives a strong instance of the MID scaling phenomenon for entropic optimal transport, yet in many applications of entropic OT, such as trajectory reconstruction [16] and domain adaptation [17, 18], it is of greater interest to estimate the solution to the entropic OT problem than the value of the entropic OT objective alone. Motivated by this fact, we also study the performance of empirical plug-in estimators for the problems of entropic OT map estimation as well as density estimation, and develop results analogous to Theorem 2, showing that natural plug-in estimators also sport MID scaling.
We define the entropic OT map
\[T_{\varepsilon}(x):=\mathbb{E}_{\pi_{\varepsilon}}[y\,|\,x].\]
Note that the analogous map defined over \(y\) is symmetric with this one, and so we study only this case without loss of generality. The map \(T_{\varepsilon}\) is an entropic analog of the OT map, and has been the subject of much recent work as it offers greater computational and statistical efficiency than its un-regularized counterpart [15, 16, 17, 18]. We study the empirical analog of \(T_{\varepsilon}(x)\):
\[\hat{T}_{\varepsilon}(x):=\mathbb{E}_{\hat{\pi}_{\varepsilon}}[y\,|\,x]= \frac{1}{n}\sum_{j=1}^{n}y_{j}\hat{p}_{\varepsilon}(x,y_{j}). \tag{3.1}\]
We consider \(\hat{T}_{\varepsilon}\) as an estimator for its population analog \(T_{\varepsilon}\) and show it enjoys a \(1/\sqrt{n}\) rate of convergence with MID scaling.
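To make the estimator concrete, the following is a minimal numerical sketch (purely illustrative; all names are ours, and a squared-Euclidean cost is used only for the example). It computes empirical dual potentials by log-domain Sinkhorn iterations, forms the empirical density \(\hat{p}_{\varepsilon}\) from them, and evaluates \(\hat{T}_{\varepsilon}\) at the source samples as in (3.1).

```python
import numpy as np
from scipy.special import logsumexp

def entropic_map_estimator(X, Y, eps, n_iter=500):
    """Plug-in entropic map T_hat(x_i) = (1/n) * sum_j y_j * p_hat(x_i, y_j), as in (3.1).

    X, Y: (n, d) arrays of samples from mu and nu; eps > 0: regularization parameter.
    Minimal log-domain Sinkhorn with uniform weights and a squared-Euclidean cost.
    """
    n = X.shape[0]
    C = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)  # cost matrix c(x_i, y_j)
    f = np.zeros(n)        # empirical dual potential on the x-samples
    g = np.zeros(n)        # empirical dual potential on the y-samples
    log_unif = -np.log(n)  # log of the uniform weight 1/n
    for _ in range(n_iter):
        # alternating marginal rounding (Sinkhorn updates) in the log domain
        f = -eps * logsumexp((g[None, :] - C) / eps + log_unif, axis=1)
        g = -eps * logsumexp((f[:, None] - C) / eps + log_unif, axis=0)
    p_hat = np.exp((f[:, None] + g[None, :] - C) / eps)  # empirical density p_hat(x_i, y_j)
    T_hat = (p_hat @ Y) / n                               # conditional expectation of y given x_i
    return T_hat, f, g, p_hat

# toy usage: two clouds of n = 200 points in R^2
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
Y = rng.normal(size=(200, 2)) + 3.0
T_hat, f, g, p_hat = entropic_map_estimator(X, Y, eps=0.5)
print(p_hat.mean(axis=1)[:3])  # close to 1: the empirical marginal constraints hold at the Sinkhorn optimum
```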
**Theorem 7**.: _Suppose the diameter of \(\operatorname{supp}(\nu)\) is at most \(R\). Then for numerical constants independent of all problem parameters,_
\[\mathbb{E}[\|\hat{T}_{\varepsilon}-T_{\varepsilon}\|_{L^{2}(\hat{\mu})}^{2}] \lesssim R^{2}\big{(}1+\frac{1}{\varepsilon}\big{)}\sqrt{\frac{\mathcal{N}( \mu,\frac{\varepsilon}{L})\wedge\mathcal{N}(\nu,\frac{\varepsilon}{L})}{n}}.\]
We note that this result is measured with respect to the empirical norm \(\|\cdot\|_{L^{2}(\hat{\mu})}\), rather than the population norm \(\|\cdot\|_{L^{2}(\mu)}\). In the next section we assume that one distribution is supported on an embedded manifold and show that this result can be strengthened to population norm convergence with fast \(1/n\) rates. As before, we may derive bounds from this result in cases where we have _a priori_ control on the relevant covering numbers. Previous work on the convergence of \(\hat{T}_{\varepsilon}\) to \(T_{\varepsilon}\) for generic distributions incurred an exponential dependence on \(1/\varepsilon\)[14, 18]. And an alternative estimator which achieves sub-exponential dependence on \(1/\varepsilon\) was considered in [19], but at the cost of worse rates in \(n\). Theorem 7 avoids exponential factors and achieves MID scaling with a \(1/\sqrt{n}\) rate.
We finally remark that Theorem 7 only uses the boundedness of \(y\), and can be extended to bound the convergence of other conditional expectations \(\mathbb{E}_{\pi_{\varepsilon}}[\alpha(x,y)\,|\,x]\) for bounded \(\alpha\colon\mathbb{R}^{d}\times\mathbb{R}^{d}\to\mathbb{R}^{k}\). Such generality is of interest, since some researchers study alternative definitions of entropic OT maps when the cost is not \(\ell_{2}^{2}\). For example, when \(c(x,y)=h(x-y)\) for strictly convex \(h\), a very recent work suggested a different definition of \(T_{\varepsilon}\) based on analogy to the un-regularized OT map in this case [11]. If \(h\) satisfies Assumption 1 and is additionally strongly convex and differentiable everywhere, it is not hard to check that Theorem 7 also holds for the maps introduced in that work.
**MID scaling for entropic OT density estimation.** We can also consider estimating the full entropic OT density \(p_{\varepsilon}\), defined in (2.3). A natural way to estimate \(p_{\varepsilon}\) from samples is with its plug-in counterpart, \(\hat{p}_{\varepsilon}\) from (2.4). We obtain the following result.
**Theorem 8**.: _For numerical constants independent of all problem parameters,_
\[\mathbb{E}[\|\hat{p}_{\varepsilon}-p_{\varepsilon}\|_{L^{1}(\hat{\mu}\otimes \hat{\nu})}]\lesssim\big{(}1+\frac{1}{\sqrt{\varepsilon}}\big{)}\Big{(}\frac{ \mathcal{N}(\mu,\frac{\varepsilon}{L})\wedge\mathcal{N}(\nu,\frac{\varepsilon }{L})}{n}\Big{)}^{1/4}.\]
This result shows that MID scaling even applies when estimating the full density \(p_{\varepsilon}\), albeit in the empirical \(L^{1}(\hat{\mu}\otimes\hat{\nu})\) norm. Previously known bounds for the convergence of \(\hat{p}_{\varepsilon}\) to \(p_{\varepsilon}\) incurred an exponential dependence on \(1/\varepsilon\)[12], and Theorem 8 removes this dependence while only incurring MID scaling.
We remark that, unlike Theorem 2 on MID scaling for values, MID scaling for maps and densities doesn't yet have analogs in the un-regularized setting, to the best of our knowledge. Developing such analogs is an interesting direction for future work. In the next section, we demonstrate a stronger form of MID scaling for these problems when the low-dimensional distribution is supported on an embedded manifold.
### Fast rates with MID scaling on embedded manifolds
In this section, we describe how we can strengthen the results from the previous section by assuming the low-dimensional measure is supported on an embedded manifold.
**Assumptions.** We work under the following assumptions, and for simplicity assume that \(\nu\) is the low-dimensional measure. See Appendix A for a review of embedded Riemannian manifolds, as well as a discussion of the tools from the theory of random geometric graphs on embedded manifolds that we use in our proofs.
**Assumption 2** (\(\nu\) is supported on an embedded manifold).: _Assume \(\nu\) is supported on a compact, smooth, connected, Riemannian manifold \((N,h)\) of dimension \(d_{\nu}\geqslant 3\) without boundary, where \(N\) is endowed with the submanifold geometry from its inclusion in \(\mathbb{R}^{d}\)._
We also assume that \(\nu\) is compatible with \(N\) in the following sense.
**Assumption 3** (\(\nu\) is Lipschitz and non-vanishing).: _Assume that \(\nu\) has a Lipschitz, non-vanishing density with respect to the Riemannian volume on \(N\)._
These assumptions represent a natural instantiation of the manifold hypothesis while still furnishing enough analytical structure to yield stronger results than in the previous section. In this section and associated proofs, Assumptions 2 and 3 are made throughout, in addition to Assumption 1.
**Suppressing constants depending on \(N\) and \(\nu\).** In contrast to the previous section, in this section and associated proofs we will generally suppress constants depending on \(N\) and \(\nu\). The suppressed constants are strictly a function of the intrinsic dimension \(d_{\nu}\) (and not the extrinsic dimension \(d\)), the geometry of \(N\), and uniform bounds on the density of \(\nu\) and its Lipschitz constant. In Appendix A, where we collect all the background and necessary results on embedded manifolds and random geometric graphs, we describe the relevant geometric quantities from the manifold \(N\).
**Main result on dual potential convergence.** The purpose of this section is to show that under these assumptions on \(\nu\) and \(N\), we can derive significantly stronger instances of MID scaling than in the previous section. Our main result concerns the convergence of the empirical dual potentials \(\hat{f}_{\varepsilon},\hat{g}_{\varepsilon}\) to their population counterparts \(f_{\varepsilon},g_{\varepsilon}\). Such convergence is essentially stronger than that of entropic OT values, maps, and densities, since each of those quantities is defined in terms of the dual potentials. Since this result measures convergence in population norms, we remind the reader that the out of sample extensions of \(\hat{f}_{\varepsilon},\hat{g}_{\varepsilon}\), as defined in (2.7) and (2.8), respectively, are in full effect.
**Theorem 9**.: _If \(\varepsilon/L\) is sufficiently small, then_
\[\mathbb{E}\big{[}\|\hat{f}_{\varepsilon}-f_{\varepsilon}\|_{L^{2}(\mu)}^{2}+ \|\hat{g}_{\varepsilon}-g_{\varepsilon}\|_{L^{2}(\nu)}^{2}\big{]}\lesssim \big{(}\varepsilon^{2}+\frac{1}{\varepsilon^{2}}\big{)}\cdot\Big{(}\frac{L}{ \varepsilon}\Big{)}^{13d_{\nu}+8}\cdot\frac{1}{n}.\]
_We also have convergence in empirical norms_
\[\mathbb{E}\big{[}\|\hat{f}_{\varepsilon}-f_{\varepsilon}\|_{L^{2}(\hat{\mu})}^ {2}+\|\hat{g}_{\varepsilon}-g_{\varepsilon}\|_{L^{2}(\hat{\nu})}^{2}\big{]} \lesssim\big{(}\varepsilon^{2}+\frac{1}{\varepsilon^{2}}\big{)}\cdot\Big{(} \frac{L}{\varepsilon}\Big{)}^{9d_{\nu}+4}\cdot\frac{1}{n}.\]
Some remarks on this result are in order. First, as in all the results of this section, we require \(\varepsilon/L\) to be sufficiently small; the precise size is given in Equation (A.1). Second, the result concerns the sum of the squared norms, but this is merely for a convenient statement, and the proofs for each term are separate and different. Third, the empirical norm convergence is useful for some of our applications below, while also being an important preliminary step to the full population norm results. Fourth, we remark that, to the best of our knowledge, all previously known bounds on entropic OT dual potentials incurred an exponential dependence on \(1/\varepsilon\) [1, 2, 3]. The significance of Theorem 9 is that it avoids exponential dependence on \(1/\varepsilon\) while only incurring minimum intrinsic dimension-dependence, in accordance with MID scaling.
**MID scaling in the embedded manifold setting.** As do all the results in this section, Theorem 9 evinces a slightly modified form of MID scaling as compared to the previous sections. In our embedded manifold setting, the minimum part of MID scaling simply means that the intrinsic dimension of \(\mu\) doesn't appear in the bounds (remarkably, nothing about
\(\mu\) appears in the bound at all); if \(\mu\) were also assumed to satisfy the above assumptions on a manifold of dimension \(d_{\mu}\), we could write bounds with an appropriate minimum. In terms of the intrinsic dimension part of MID scaling, there is only a superficial difference, since although the rates in this section do not explicitly involve covering numbers at scale \(\varepsilon/L\) but instead the power \((L/\varepsilon)^{d_{\nu}}\), the two are, in fact, comparable (Proposition 43). And finally, as we mentioned above, there are hidden constants depending on \(N\) and \(\nu\), and especially the intrinsic dimension \(d_{\nu}\), as opposed to the previous results which had only numerical constants.
**Applications of dual convergence.** We demonstrate the power of Theorem 9 with a few applications. Our first two results improve upon the map and density estimation results from the previous section. For each of these applications, we again require \(\varepsilon/L\) to be sufficiently small, as specified in Equation (A.1).
Recall the setting of Theorem 7: we wish to estimate \(T_{\varepsilon}(x):=\mathbb{E}_{\pi_{\varepsilon}}[y\,|\,x]\) from samples, and we use the plug-in estimator \(\hat{T}_{\varepsilon}(x):=\mathbb{E}_{\hat{\pi}_{\varepsilon}}[y\,|\,x]\). Using the canonical extensions of the empirical dual potentials \(\hat{f}_{\varepsilon},\hat{g}_{\varepsilon}\) and correspondingly the empirical density \(\hat{p}_{\varepsilon}\) as in equations (2.7), (2.8), and (2.4), we can extend \(\hat{T}_{\varepsilon}\) to a map in \(L^{2}(\mu)\), and consider convergence in this population norm. This extended map \(\hat{T}_{\varepsilon}\) can be evaluated in linear time once the empirical dual potentials are known, and was originally proposed as a computationally efficient estimator for the un-regularized OT map [13]. We obtain the following corollary of Theorem 9.
**Corollary 10**.: _If \(\varepsilon/L\) is sufficiently small, and the diameter of \(\operatorname{supp}(\nu)\) is at most \(R\), then_
\[\mathbb{E}[\|\hat{T}_{\varepsilon}-T_{\varepsilon}\|_{L^{2}(\mu)}^{2}]\lesssim R ^{2}\cdot\left(1+\frac{1}{\varepsilon^{4}}\right)\cdot\left(\frac{L}{ \varepsilon}\right)^{11d_{\nu}+4}\cdot\frac{1}{n}\]
Corollary 10 improves over Theorem 7 in that it achieves a fast \(1/n\) rate of convergence in the full population norm \(L^{2}(\mu)\).
We can also strengthen our result on density estimation from the previous section, Theorem 8. As in the case of map estimation, the canonical extensions of the empirical density \(\hat{p}_{\varepsilon}\) described in section 2 allow us to consider \(\hat{p}_{\varepsilon}\) as an out of sample estimator for \(p_{\varepsilon}\).
**Corollary 11**.: _If \(\varepsilon/L\) is sufficiently small, then_
\[\mathbb{E}[\|\hat{p}_{\varepsilon}-p_{\varepsilon}\|_{L^{2}(\mu\otimes\nu)}^{2 }]\lesssim\left(1+\frac{1}{\varepsilon^{4}}\right)\cdot\left(\frac{L}{ \varepsilon}\right)^{15d_{\nu}+8}\cdot\frac{1}{n}.\]
This result shows that, in the embedded manifold setting, Theorem 8 can be strengthened to full population norm convergence, \(L^{2}\) rather than \(L^{1}\), and to faster rates.
Our final application concerns the _bias_ of entropic OT, namely the quantity \(\mathbb{E}[S_{\varepsilon}(\hat{\mu},\hat{\nu})]-S_{\varepsilon}(\mu,\nu)\). Recent work established that the bias converges at a fast \(1/n\) rate, yet with an exponential dependence on the regularization parameter \(\varepsilon\)[12, 13]. The following result removes this exponential dependence and achieves MID scaling. We state this result as a Theorem since, unlike the previous Corollaries, it doesn't directly follow from Theorem 9.
**Theorem 12**.: _If \(\varepsilon/L\) is sufficiently small, then_
\[|\mathbb{E}[S_{\varepsilon}(\hat{\mu},\hat{\nu})]-S_{\varepsilon}(\mu,\nu)| \lesssim\left(\varepsilon^{2}+\frac{1}{\varepsilon^{2}}\right)\cdot\left(\frac{ L}{\varepsilon}\right)^{9d_{\nu}+4}\cdot\frac{1}{n}.\]
In sum, the results in this section demonstrate a strong form of the MID scaling phenomenon in the relatively general setting of Assumptions 2 and 3. These results underscore the main message of our work: MID scaling is a generic phenomenon for entropic OT. They demonstrate that MID scaling is certainly not limited to value estimation or map and density estimation in empirical norms with slow \(1/\sqrt{n}\) rates, but in the embedded manifold setting even holds for dual potentials, maps, densities, and biases, all with fast \(1/n\) rates in population norms.
## 4 Preliminary results
In this section, we fix final pieces of notation and describe some simple observations that form the foundation of our approach.
### Concavity for the empirical dual
Given probability measures \(P,Q\) on \(\mathbb{R}^{d}\), we define the entropic OT dual function \(\Phi_{\varepsilon}^{PQ}\colon L^{\infty}(P)\times L^{\infty}(Q)\to\mathbb{R}\) as
\[\Phi_{\varepsilon}^{PQ}(f,g):=P(f)+Q(g)-\varepsilon(P\otimes Q)(e^{- \varepsilon^{-1}(c-f-g)}-1).\]
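As a small illustration (ours, not part of the analysis), the dual objective can be evaluated directly from this formula when \(P\) and \(Q\) are discrete:

```python
import numpy as np

def dual_objective(f, g, C, p, q, eps):
    """Entropic dual Phi_eps^{PQ}(f, g) for discrete P = sum_i p_i delta_{x_i} and
    Q = sum_j q_j delta_{y_j}, with cost matrix C[i, j] = c(x_i, y_j).

    Implements P(f) + Q(g) - eps * (P x Q)(exp(-(c - f - g)/eps) - 1).
    """
    density = np.exp((f[:, None] + g[None, :] - C) / eps)
    return p @ f + q @ g - eps * (p @ (density - 1.0) @ q)
```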
With this notation, we can express the population dual \(\Phi_{\varepsilon}=\Phi_{\varepsilon}^{\mu\nu}\) and the empirical dual \(\hat{\Phi}_{\varepsilon}=\Phi_{\varepsilon}^{\hat{\mu}\hat{\nu}}\). We collect some basic facts about entropic dual functions below.
**Proposition 13**.: _Let \(P,Q\) be probability measures on \(\mathbb{R}^{d}\). The corresponding entropic OT dual function \(\Phi_{\varepsilon}^{PQ}\) has the following properties:_
1. **(Definition of gradient.)** _For every pair_ \(h_{1}=(f_{1},g_{1})\in L^{\infty}(P)\times L^{\infty}(Q)\)_, there exists an element of_ \(L^{\infty}(P)\times L^{\infty}(Q)\) _which we denote by_ \(\nabla\Phi_{\varepsilon}^{PQ}(f_{1},g_{1})\) _such that for all_ \(h_{0}=(f_{0},g_{0})\in L^{\infty}(P)\times L^{\infty}(Q)\)_,_ \[\langle\nabla\Phi_{\varepsilon}^{PQ}(h_{1}),h_{0}\rangle_{L^{2}(P)\times L^{2}(Q)} =\int f_{0}(x)\Big{(}1-\int e^{-\varepsilon^{-1}(c(x,y)-f_{1}(x)-g_{1}(y))}\mathrm{d}Q(y)\Big{)}\mathrm{d}P(x)\] \[+\int g_{0}(y)\Big{(}1-\int e^{-\varepsilon^{-1}(c(x,y)-f_{1}(x)-g_{1}(y))}\mathrm{d}P(x)\Big{)}\mathrm{d}Q(y).\] _In other words, the gradient of_ \(\Phi_{\varepsilon}^{PQ}\) _at_ \((f_{1},g_{1})\) _is the marginal error corresponding to_ \((f_{1},g_{1})\)_._
2. **(Concavity.)** _For any two pairs of dual potentials_ \(h_{0},h_{1}\in L^{\infty}(P)\times L^{\infty}(Q)\)_, we have the inequalities_ \[\Phi_{\varepsilon}^{PQ}(h_{0})-\Phi_{\varepsilon}^{PQ}(h_{1})\leqslant \langle\nabla\Phi_{\varepsilon}^{PQ}(h_{1}),h_{0}-h_{1}\rangle_{L^{2}(P) \times L^{2}(Q)},\] (4.1) _and_ \[\langle\nabla\Phi_{\varepsilon}^{PQ}(h_{0}),h_{0}-h_{1}\rangle_{L^{2}(P) \times L^{2}(Q)}\leqslant\Phi_{\varepsilon}^{PQ}(h_{0})-\Phi_{\varepsilon}^{ PQ}(h_{1}).\] (4.2)
3. **(Marginal rounding improves the dual objective.)** _Let \((f,g)\in L^{\infty}(P)\times L^{\infty}(Q)\), and set_ \[\tilde{f}(x):=-\varepsilon\log\Big{(}\int e^{-\varepsilon^{-1}(c(x,y)-g(y))}\mathrm{d}Q(y)\Big{)}.\] _Then_ \[\Phi_{\varepsilon}^{PQ}(f,g)\leqslant\Phi_{\varepsilon}^{PQ}(\tilde{f},g).\] _An analogous statement holds when the marginal rounding is performed in the \(g\) variable._
Proof of Proposition 13.: The first item is a definition. For the second item, write
\[\Phi_{\varepsilon}^{PQ}(f,g)=(P\otimes Q)\big{(}f(x)+g(y)-\varepsilon e^{- \varepsilon^{-1}(c(x,y)-f(x)-g(y))}\big{)}+\varepsilon,\]
and observe that the function \(t\mapsto t-\varepsilon e^{-\varepsilon^{-1}(c-t)}\) is concave. The result follows by applying the concavity of the integrand pointwise and collecting terms. The third item follows from the first two.
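For completeness, the concavity invoked in the proof of the second item can be verified directly: for any fixed value of \(c\),

\[\frac{\mathrm{d}^{2}}{\mathrm{d}t^{2}}\Big{(}t-\varepsilon e^{-\varepsilon^{-1}(c-t)}\Big{)}=-\frac{1}{\varepsilon}e^{-\varepsilon^{-1}(c-t)}\leqslant 0.\]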
### Pointwise control and log-Lipschitz bounds
In this section we describe some simple yet powerful implications of Assumption 1.
The following pointwise bounds appear in a number of places [10, 11, 12], and so their proof is omitted.
**Proposition 14** (Pointwise control on dual potentials).: _Under our normalization conventions that \(\nu(g_{\varepsilon})=0\) and \(\hat{\nu}(\hat{g}_{\varepsilon})=0\), we have the uniform bounds_
\[\|\hat{f}_{\varepsilon}\|_{L^{\infty}(\mu)},\ \|\hat{g}_{\varepsilon}\|_{L^{ \infty}(\nu)}\leqslant 2,\qquad\|f_{\varepsilon}\|_{L^{\infty}(\mu)},\ \|g_{ \varepsilon}\|_{L^{\infty}(\nu)}\leqslant 1.\]
The following Lipschitz bounds are also well-known [11], but since they are the foundation of our approach we include their proof.
**Proposition 15** (Lipschitz bounds).: _The population dual potentials \(f_{\varepsilon}\) and \(g_{\varepsilon}\) are \(L\)-Lipschitz over \(\mathrm{supp}(\mu)\) and \(\mathrm{supp}(\nu)\), respectively. The extended empirical dual potentials \(\hat{f}_{\varepsilon}\) and \(\hat{g}_{\varepsilon}\) are also \(L\)-Lipschitz over \(\mathrm{supp}(\mu)\) and \(\mathrm{supp}(\nu)\), respectively. In particular, the population density \(p_{\varepsilon}\) and the extended empirical density \(\hat{p}_{\varepsilon}\) are each \(\frac{2L}{\varepsilon}\)-log-Lipschitz in each of their variables over \(\mathrm{supp}(\mu)\times\mathrm{supp}(\nu)\)._
Proof.: By the marginal constraints (2.5), we have that for all \(x\in\mathrm{supp}(\mu)\),
\[e^{-\frac{1}{\varepsilon}f_{\varepsilon}(x)}=\int e^{-\frac{1}{\varepsilon}(c(x,y)-g_{\varepsilon}(y))}\mathrm{d}\nu(y).\]
Using Assumption 1 on the cost \(c\) being Lipschitz, we see that for \(x,x^{\prime}\in\mathrm{supp}(\mu)\)
\[e^{-\frac{1}{\varepsilon}f_{\varepsilon}(x)}\leqslant e^{-\frac{1}{ \varepsilon}f_{\varepsilon}(x^{\prime})+\frac{L}{\varepsilon}\|x-x^{\prime}\|}\]
By symmetry it follows that
\[|f_{\varepsilon}(x)-f_{\varepsilon}(x^{\prime})|\leqslant L\|x-x^{\prime}\|.\]
The analogous argument establishes the claim for \(g_{\varepsilon}\), and also proves the claims for the extended empirical potentials \(\hat{f}_{\varepsilon}\) and \(\hat{g}_{\varepsilon}\).
For the claims about the densities, note that
\[\log p_{\varepsilon}(x,y)=-\frac{1}{\varepsilon}\big{(}c(x,y)-f_{\varepsilon}( x)-g_{\varepsilon}(y)\big{)},\]
and so the claim follows from the previous ones for \(f_{\varepsilon}\) and \(g_{\varepsilon}\), as well as Assumption 1. Similarly for \(\log\hat{p}_{\varepsilon}\).
## 5 Proofs of MID scaling with slow rates
In this section, we give our proofs of MID scaling for values, Theorem 2, MID scaling for maps in empirical norms, Theorem 7, and MID scaling for densities in empirical norms, Theorem 8. The proofs are organized in the following manner. In section 5.1, we give a detailed exposition of our approach in the case of value estimation. In section 5.2, we give the proof of one of our main technical lemmas, and the source of MID scaling, on sub-exponential bounds for \(\|p_{\varepsilon}\|_{L^{2}(\mu\otimes\nu)}^{2}\). In section 5.3, we show how to reduce map estimation to the estimates from the previous sections, and do the same for density estimation in section 5.4.
### Proof of value rate and overview of technical approach
In this section, we illustrate the main ideas of our technical approach by describing our proof of Theorem 2 on the convergence of entropic OT values. To situate our approach, note that most existing statistical work on entropic OT studies smooth cost functions and shows that the smoothness of the cost functions implies smoothness for the dual potentials, leading to small function classes over which empirical processes can be well-controlled [1, 2, 3, 19, 18, 17]. The present work builds on a different approach, developed in [10], which entirely avoids empirical process theory by using strong concavity of the empirical dual objective. The benefit of this approach is that it is simple, requires no smoothness assumptions on the cost, leads to dimension-free bounds, and can be used to prove fast rates for most entropic OT quantities [10]. However, because it is dimension-free, it is not suited to a fine understanding of the dimension-dependence of entropic OT, and in particular it incurs exponential factors of \(1/\varepsilon\) at many points in the arguments. The main technical goal of this work is to refine this strong concavity of the empirical dual approach by replacing the exponential dependence on \(1/\varepsilon\) with a very fine dependence on the dimension of the problem: MID scaling.
To this end, note that the exponential dependence of [10] primarily arises from two separate sources: first, the strong concavity of the empirical dual objective, and second, pointwise bounds on the entropic density \(p_{\varepsilon}\). For the results in this section, we avoid the exponential factors arising from strong concavity by using more delicate arguments which rely on only concavity, and remove the exponential factors from the pointwise bounds on the entropic density \(p_{\varepsilon}\) with a novel estimate (Lemma 16 below) which is itself the source of MID scaling.
To present the ideas as clearly as possible, we will suppress numerical constants with the notation \(u\lesssim v\); they can easily be extracted from the proof. Recall that \(\Phi_{\varepsilon}\) is
the population entropic OT dual function, and \(\hat{\Phi}_{\varepsilon}\) is its empirical counterpart. Then Theorem 2 concerns the quantity
\[\mathbb{E}[|S_{\varepsilon}(\hat{\mu},\hat{\nu})-S_{\varepsilon}(\mu,\nu)|]= \mathbb{E}[|\hat{\Phi}_{\varepsilon}(\hat{f}_{\varepsilon},\hat{g}_{ \varepsilon})-\Phi_{\varepsilon}(f_{\varepsilon},g_{\varepsilon})|].\]
To bound this quantity, we first decompose it in the following manner:
\[\mathbb{E}[|\hat{\Phi}_{\varepsilon}(\hat{f}_{\varepsilon},\hat{ g}_{\varepsilon})-\Phi_{\varepsilon}(f_{\varepsilon},g_{\varepsilon})|] \leqslant\mathbb{E}[|\hat{\Phi}_{\varepsilon}(\hat{f}_{ \varepsilon},\hat{g}_{\varepsilon})-\hat{\Phi}_{\varepsilon}(f_{\varepsilon}, g_{\varepsilon})|]+\mathbb{E}[|\hat{\Phi}_{\varepsilon}(f_{\varepsilon},g_{ \varepsilon})-\Phi_{\varepsilon}(f_{\varepsilon},g_{\varepsilon})|]\] \[=\mathbb{E}[\hat{\Phi}_{\varepsilon}(\hat{f}_{\varepsilon},\hat{ g}_{\varepsilon})-\hat{\Phi}_{\varepsilon}(f_{\varepsilon},g_{\varepsilon})]+ \mathbb{E}[|\hat{\Phi}_{\varepsilon}(f_{\varepsilon},g_{\varepsilon})-\Phi_{ \varepsilon}(f_{\varepsilon},g_{\varepsilon})|].\]
For convenience, we refer to the first term as a bias term, and the second as a variance term. Note that our usage of "bias" and "variance" in this context is not intended to be standard, merely suggestive.
**Variance term.** If we just examine this variance term, we can compute:
\[\mathbb{E}[|\hat{\Phi}_{\varepsilon}(f_{\varepsilon},g_{\varepsilon})-\Phi_{\varepsilon}(f_{\varepsilon},g_{\varepsilon})|]\] \[\quad=\mathbb{E}[|(\hat{\mu}-\mu)(f_{\varepsilon})+(\hat{\nu}-\nu)(g_{\varepsilon})-\varepsilon(\hat{\mu}\otimes\hat{\nu}-\mu\otimes\nu)(p_{\varepsilon})|]\] \[\quad\leqslant\mathbb{E}[\big{(}(\hat{\mu}-\mu)(f_{\varepsilon})+(\hat{\nu}-\nu)(g_{\varepsilon})-\varepsilon(\hat{\mu}\otimes\hat{\nu}-\mu\otimes\nu)(p_{\varepsilon})\big{)}^{2}]^{1/2}\] \[\quad\leqslant\big{(}\mathbb{E}[\big{(}(\hat{\mu}-\mu)(f_{\varepsilon})\big{)}^{2}]+\mathbb{E}[\big{(}(\hat{\nu}-\nu)(g_{\varepsilon})\big{)}^{2}]+\varepsilon^{2}\mathbb{E}[\big{(}(\hat{\mu}\otimes\hat{\nu}-\mu\otimes\nu)(p_{\varepsilon})\big{)}^{2}]\big{)}^{1/2}\] \[\quad\leqslant\mathbb{E}[\big{(}(\hat{\mu}-\mu)(f_{\varepsilon})\big{)}^{2}]^{1/2}+\mathbb{E}[\big{(}(\hat{\nu}-\nu)(g_{\varepsilon})\big{)}^{2}]^{1/2}+\varepsilon\mathbb{E}[\big{(}(\hat{\mu}\otimes\hat{\nu}-\mu\otimes\nu)(p_{\varepsilon})\big{)}^{2}]^{1/2}.\]
For the first two terms, we use the pointwise boundedness of the dual potentials from Proposition 14 to yield
\[\mathbb{E}[\big{(}(\hat{\mu}-\mu)(f_{\varepsilon})\big{)}^{2}]^{1/2}+\mathbb{ E}[\big{(}(\hat{\nu}-\nu)(g_{\varepsilon})\big{)}^{2}]^{1/2}=\sqrt{\frac{\text{Var}_{ \mu}(f_{\varepsilon})}{n}}+\sqrt{\frac{\text{Var}_{\nu}(g_{\varepsilon})}{n}} \lesssim\frac{1}{\sqrt{n}}.\]
For the third term, the marginal constraints for \(p_{\varepsilon}\) from (2.5) allow us to cancel cross-terms and compute
\[\mathbb{E}[\big{(}(\hat{\mu}\otimes\hat{\nu})(p_{\varepsilon})-(\mu\otimes\nu)(p_{\varepsilon})\big{)}^{2}]^{1/2} =\mathbb{E}[\big{(}(\hat{\mu}\otimes\hat{\nu})(p_{\varepsilon}-1)\big{)}^{2}]^{1/2}\] \[=\Big{(}\frac{1}{n^{4}}\sum_{i,j,k,l=1}^{n}\mathbb{E}[(p_{\varepsilon}(x_{i},y_{j})-1)(p_{\varepsilon}(x_{k},y_{l})-1)]\Big{)}^{1/2}\] \[=\frac{1}{n}\sqrt{\text{Var}_{\mu\otimes\nu}(p_{\varepsilon})}\leqslant\frac{1}{n}\|p_{\varepsilon}\|_{L^{2}(\mu\otimes\nu)}.\]
We note that this kind of calculation is used repeatedly throughout the paper. We finally arrive at the bound
\[\mathbb{E}[|\hat{\Phi}_{\varepsilon}(f_{\varepsilon},g_{ \varepsilon})-\Phi_{\varepsilon}(f_{\varepsilon},g_{\varepsilon})|] \lesssim\frac{1}{\sqrt{n}}+\frac{\varepsilon\|p_{\varepsilon}\|_{ L^{2}(\mu\otimes\nu)}}{n}\] \[\lesssim(1+\varepsilon)\frac{\|p_{\varepsilon}\|_{L^{2}(\mu \otimes\nu)}}{\sqrt{n}}, \tag{5.1}\]
where the last inequality follows because \((\mu\otimes\nu)(p_{\varepsilon})=1\) implies \(1\leqslant\|p_{\varepsilon}\|_{L^{2}(\mu\otimes\nu)}\) by Cauchy-Schwarz.
**Bias term.** We use concavity of the empirical dual to control the bias term \(\mathbb{E}[\hat{\Phi}_{\varepsilon}(\hat{f}_{\varepsilon},\hat{g}_{\varepsilon})-\hat{\Phi}_{\varepsilon}(f_{\varepsilon},g_{\varepsilon})]\) by a quantity involving \(\|p_{\varepsilon}\|_{L^{2}(\mu\otimes\nu)}\) again. Indeed, by Proposition 13 on concavity of \(\hat{\Phi}_{\varepsilon}\) and Proposition 14 on pointwise control for the dual potentials, we have
\[\mathbb{E}[\hat{\Phi}_{\varepsilon}(\hat{f}_{\varepsilon},\hat{g} _{\varepsilon})-\hat{\Phi}_{\varepsilon}(f_{\varepsilon},g_{\varepsilon})] \leqslant\mathbb{E}[\langle\nabla\hat{\Phi}_{\varepsilon}(f_{ \varepsilon},g_{\varepsilon}),(\hat{f}_{\varepsilon}-f_{\varepsilon},\hat{g}_ {\varepsilon}-g_{\varepsilon})\rangle_{L^{2}(\hat{\mu})\times L^{2}(\hat{\nu} )}]\] \[\leqslant\mathbb{E}\big{[}\|\nabla\hat{\Phi}_{\varepsilon}(f_{ \varepsilon},g_{\varepsilon})\|_{L^{2}(\hat{\mu})\times L^{2}(\hat{\nu})} \cdot\|(\hat{f}_{\varepsilon}-f_{\varepsilon},\hat{g}_{\varepsilon}-g_{ \varepsilon})\|_{L^{2}(\hat{\mu})\times L^{2}(\hat{\nu})}\big{]}\] \[\lesssim\mathbb{E}\big{[}\|\nabla\hat{\Phi}_{\varepsilon}(f_{ \varepsilon},g_{\varepsilon})\|_{L^{2}(\hat{\mu})\times L^{2}(\hat{\nu})}^{2} \big{]}^{1/2}.\]
Using the marginal constraints (2.5) to cancel cross terms once more, we find
\[\mathbb{E}[\|\nabla\hat{\Phi}_{\varepsilon}(f_{\varepsilon},g_{ \varepsilon})\|_{L^{2}(\hat{\mu})\times L^{2}(\hat{\nu})}^{2}] =\mathbb{E}\Big{[}\frac{1}{n}\sum_{i=1}^{n}\Big{(}\frac{1}{n}\sum _{j=1}^{n}p_{\varepsilon}(x_{i},y_{j})-1\Big{)}^{2}\] \[+\frac{1}{n}\sum_{j=1}^{n}\Big{(}\frac{1}{n}\sum_{i=1}^{n}p_{ \varepsilon}(x_{i},y_{j})-1\Big{)}^{2}\Big{]}\] \[=\frac{1}{n^{3}}\sum_{i,j_{0},j_{1}=1}^{n}\mathbb{E}[(p_{ \varepsilon}(x_{i},y_{j_{0}})-1)(p_{\varepsilon}(x_{i},y_{j_{1}})-1)]\] \[+\frac{1}{n^{3}}\sum_{i_{0},i_{1},j=1}^{n}\mathbb{E}[(p_{ \varepsilon}(x_{i_{0}},y_{j})-1)(p_{\varepsilon}(x_{i_{1}},y_{j})-1)]\] \[=\frac{2}{n}\operatorname{Var}_{\mu\otimes\nu}(p_{\varepsilon}) \lesssim\frac{\|p_{\varepsilon}\|_{L^{2}(\mu\otimes\nu)}^{2}}{n}.\]
We arrive at an important, if elementary, bound that we will use frequently throughout this work,
\[\mathbb{E}[\|\nabla\hat{\Phi}_{\varepsilon}(f_{\varepsilon},g_{ \varepsilon})\|_{L^{2}(\hat{\mu})\times L^{2}(\hat{\nu})}^{2}]\lesssim\frac{ \|p_{\varepsilon}\|_{L^{2}(\mu\otimes\nu)}^{2}}{n}. \tag{5.2}\]
In particular, the bias is bounded as
\[\mathbb{E}[\hat{\Phi}_{\varepsilon}(\hat{f}_{\varepsilon},\hat{g} _{\varepsilon})-\hat{\Phi}_{\varepsilon}(f_{\varepsilon},g_{\varepsilon})] \lesssim\frac{\|p_{\varepsilon}\|_{L^{2}(\mu\otimes\nu)}}{\sqrt{n}}. \tag{5.3}\]
**Controlling \(\|p_{\varepsilon}\|_{L^{2}(\mu\otimes\nu)}\).** We can combine (5.1) and (5.3) to obtain
\[\mathbb{E}[|S_{\varepsilon}(\hat{\mu},\hat{\nu})-S_{\varepsilon} (\mu,\nu)|]\lesssim(1+\varepsilon)\frac{\|p_{\varepsilon}\|_{L^{2}(\mu\otimes \nu)}}{\sqrt{n}}. \tag{5.4}\]
We have thus reduced the problem to terms with a dimension-free dependence on the sample size \(n\), and with the dimension and \(\varepsilon\)-dependence residing fully in the term \((1+\varepsilon)\|p_{\varepsilon}\|_{L^{2}(\mu\otimes\nu)}\). Up to this point, the differences between the techniques introduced in [11] and our approach are essentially contained in two steps, each taken to avoid factors of \(e^{1/\varepsilon}\). First, we control the bias term with concavity rather than strong concavity. And second, we refrain from using the pointwise control of Proposition 14 to bound the \(p_{\varepsilon}\) terms, which would result in the bound \(\|p_{\varepsilon}\|_{L^{\infty}(\mu\otimes\nu)}\leqslant e^{C/\varepsilon}\) for a constant \(C\). Such pointwise and exponential bounds were, to the best of our knowledge, all that was known about \(p_{\varepsilon}\) until this work.
One of the core technical innovations in this work, and indeed the source of MID scaling, is to provide the following sub-exponential, dimension-dependent bound on \(p_{\varepsilon}\).
**Lemma 16** (MID scaling for the norm of entropic densities).: _We have_
\[\|p_{\varepsilon}\|_{L^{2}(\mu\otimes\nu)}^{2}\lesssim\mathcal{N}(\mu,\frac{ \varepsilon}{L})\wedge\mathcal{N}(\nu,\frac{\varepsilon}{L}).\]
This result is proven, in the following section, by using the marginal constraints that \(p_{\varepsilon}\) satisfies in Equation (2.5) to gain pointwise control on \(p_{\varepsilon}\) that, when integrated, yields the covering numbers. In the embedded manifold setting of section 6, the \(L^{2}\) bound of Lemma 16 can be strengthened to pointwise \(L^{\infty}\) control, using the same technique (Lemma 26). The proof technique is used again in section 6.2, and we indeed expect that it will be of some broader utility. For example, it may be helpful for calculating quantities arising in the asymptotic distributions of entropic OT quantities [10].
For our subsequent results, it is convenient to record the following Lemma, which results from combining Equation (5.3) with Lemma 16.
**Lemma 17** (MID scaling for the bias).: _We have_
\[\mathbb{E}[S_{\varepsilon}(\hat{\mu},\hat{\nu})-S_{\varepsilon}(\mu,\nu)] \lesssim\sqrt{\frac{\mathcal{N}(\mu,\frac{\varepsilon}{L})\wedge\mathcal{N}( \nu,\frac{\varepsilon}{L})}{n}}.\]
**Summary of the technical approach.** In summary, we decomposed the difference \(|S_{\varepsilon}(\hat{\mu},\hat{\nu})-S_{\varepsilon}(\mu,\nu)|\) into a bias term and a variance term. For the variance term, an elementary calculation allows us to control it by something with the dimension of the problem residing purely in the \(\|p_{\varepsilon}\|_{L^{2}(\mu\otimes\nu)}\) term. And for the bias term, a judicious use of concavity allows us to control it by another expression with the dimension of the problem purely in the \(\|p_{\varepsilon}\|_{L^{2}(\mu\otimes\nu)}\) term. We then conclude with Lemma 16. The proofs of MID scaling with slow rates for maps and densities, Theorem 7 and Theorem 8, use additional estimates that also follow the pattern of splitting into bias and variance terms (loosely construed) to then reduce to Lemma 16, and are included in sections 5.3 and 5.4, respectively.
### MID scaling for the norm of entropic densities
In this section, we will prove Lemma 16, which gives sub-exponential, MID scaling-type bounds on the entropic density \(p_{\varepsilon}\). To this end, the following fact will be useful. Recall that we write the closed ball of size \(\delta\) at \(z\in\mathbb{R}^{d}\) as
\[B^{\mathrm{cl}}(z,\delta):=\big{\{}z^{\prime}\in\mathbb{R}^{d}\colon\|z-z^{ \prime}\|\leqslant\delta\big{\}}.\]
**Proposition 18** (Average inverse mass is bounded by the covering number).: _Suppose \(P\) is a compactly supported probability measure on \(\mathbb{R}^{d}\). Then for all \(\delta>0\),_
\[\int P(B^{\mathrm{cl}}(z,\delta))^{-1}\mathrm{d}P(z)\leqslant\mathcal{N}(P, \delta/4).\]
To give the proof of this Proposition, recall that for \(\tau>0\), a _proper_ \(\tau\)-covering of a set \(A\subset\mathbb{R}^{d}\) is a covering of \(A\) at scale \(\tau\) with centers contained within \(A\). We write the minimal size of a proper covering with respect to the Euclidean norm at scale \(\tau>0\) as \(\mathcal{N}^{\mathrm{pr}}(A,\|\cdot\|,\tau)\). Note that proper covering numbers are comparable to vanilla covering numbers since \(\mathcal{N}^{\mathrm{pr}}(A,\|\cdot\|,\tau)\leqslant\mathcal{N}(A,\|\cdot\|,\tau/2)\).
Proof of Proposition 18.: Let \(z_{1},\ldots,z_{K}\in\mathrm{supp}(P)\) be a proper \(\delta/2\) covering of \(\mathrm{supp}(P)\) achieving \(K=\mathcal{N}^{\mathrm{pr}}(\mathrm{supp}(P),\|\cdot\|,\delta/2)\). Since this is a proper covering, we know that for all \(k\in[K]\), \(P(B^{\mathrm{cl}}(z_{k},\delta/2))>0\). The triangle inequality then implies
\[z\in B^{\mathrm{cl}}(z_{k},\delta/2)\implies 0<P(B^{\mathrm{cl}}(z_{k},\delta/2)) \leqslant P(B^{\mathrm{cl}}(z,\delta)).\]
Thus
\[\int P(B^{\mathrm{cl}}(z,\delta))^{-1}\mathrm{d}P(z) \leqslant\sum_{k=1}^{K}\int_{B^{\mathrm{cl}}(z_{k},\delta/2)}P( B^{\mathrm{cl}}(z,\delta))^{-1}\mathrm{d}P(z)\] \[\leqslant\sum_{k=1}^{K}\int_{B^{\mathrm{cl}}(z_{k},\delta/2)}P( B^{\mathrm{cl}}(z_{k},\delta/2))^{-1}\mathrm{d}P(z)\] \[=K=\mathcal{N}^{\mathrm{pr}}(P,\delta/2)\leqslant\mathcal{N}(P, \delta/4).\]
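As a purely illustrative numerical sanity check (our own sketch; all names and the toy data are ours), the intermediate bound in this proof, namely that the average inverse ball mass is at most the size of any proper \(\delta/2\) covering, can be observed on an empirical measure concentrated near a circle:

```python
import numpy as np

rng = np.random.default_rng(1)
# empirical measure P: uniform weights on 2000 points near the unit circle in R^2
theta = rng.uniform(0.0, 2.0 * np.pi, size=2000)
Z = np.c_[np.cos(theta), np.sin(theta)] + 0.01 * rng.normal(size=(2000, 2))
delta = 0.2

D = np.linalg.norm(Z[:, None, :] - Z[None, :, :], axis=-1)  # pairwise distances
ball_mass = (D <= delta).mean(axis=1)        # P(B_cl(z, delta)) for each atom z
avg_inv_mass = np.mean(1.0 / ball_mass)      # left-hand side of Proposition 18

def greedy_proper_covering_size(D, tau):
    """Size of a greedy proper tau-covering of the atoms; any proper covering
    works in the argument above, so this gives a valid comparison value K."""
    uncovered = np.ones(D.shape[0], dtype=bool)
    count = 0
    while uncovered.any():
        center = int(np.argmax(uncovered))   # pick any still-uncovered point as a center
        uncovered &= D[center] > tau         # everything within tau of it is now covered
        count += 1
    return count

K = greedy_proper_covering_size(D, delta / 2)
print(avg_inv_mass, K)  # the proof's argument gives avg_inv_mass <= K for any proper delta/2 covering
```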
Proof of Lemma 16.: We show the bound for \(\mathcal{N}(\nu,\frac{\varepsilon}{L})\); the bound for \(\mathcal{N}(\mu,\frac{\varepsilon}{L})\) follows by symmetry. We proceed via the marginal constraints on \(p_{\varepsilon}\) from (2.5): for all \(x\in\mathrm{supp}(\mu)\) and \(y\in\mathrm{supp}(\nu)\),
\[1=p_{\varepsilon}(x,y)\int\frac{p_{\varepsilon}(x,y^{\prime})}{p_{\varepsilon }(x,y)}\mathrm{d}\nu(y^{\prime}). \tag{5.5}\]
Applying Proposition 15 to Equation (5.5) yields,
\[1 \geqslant p_{\varepsilon}(x,y)\int_{B^{\mathrm{cl}}(y,\frac{4 \varepsilon}{L})}\frac{p_{\varepsilon}(x,y^{\prime})}{p_{\varepsilon}(x,y)} \mathrm{d}\nu(y^{\prime})\] \[\geqslant p_{\varepsilon}(x,y)\int_{B^{\mathrm{cl}}(y,\frac{4 \varepsilon}{L})}e^{-\frac{2L}{\varepsilon}\|y^{\prime}-y\|}\mathrm{d}\nu(y^{ \prime})\] \[\geqslant e^{-8}\cdot p_{\varepsilon}(x,y)\cdot\nu(B^{\mathrm{ cl}}(y,\frac{4\varepsilon}{L})).\]
Since \(y\in\mathrm{supp}(\nu)\), \(\nu(B^{\mathrm{cl}}(y,\frac{4\varepsilon}{L}))>0\), so we can rearrange this inequality and apply the marginal constraints (2.5) once more to yield
\[\|p_{\varepsilon}\|_{L^{2}(\mu\otimes\nu)}^{2} =\int p_{\varepsilon}(x,y)^{2}\mathrm{d}(\mu\otimes\nu)(x,y)\] \[\lesssim\int\nu(B^{\mathrm{cl}}(y,\frac{4\varepsilon}{L}))^{-1}p_ {\varepsilon}(x,y)\mathrm{d}(\mu\otimes\nu)(x,y)\] \[=\int\nu(B^{\mathrm{cl}}(y,\frac{4\varepsilon}{L}))^{-1}\mathrm{ d}\nu(y).\]
The result follows from Proposition 18.
### MID scaling for maps in empirical norm
As discussed in the previous section, to prove Theorem 7 and Theorem 8 we reduce to the bound on the bias from Lemma 17. We emphasize that bounds on entropic maps and densities with exponential dependence on \(1/\varepsilon\) are known from [10]. To prove such bounds without incurring exponential factors requires further work beyond the previous sections. As in some previous works, we reduce from the error of map estimation to the error of value estimation; the differences in our approach are that we avoid factors of the KL-divergence [12] and that it applies to the empirical entropic OT map rather than to a modified estimator [13].
The idea is to consider the following rounded dual potential, defined for all \(x\in\operatorname{supp}(\mu)\) as
\[\tilde{f}_{\varepsilon}(x):=-\varepsilon\log\Big{(}\frac{1}{n}\sum_{j=1}^{n}e ^{-\varepsilon^{-1}(c(x,y_{j})-g_{\varepsilon}(y_{j}))}\Big{)}. \tag{5.6}\]
Let the corresponding density be defined for all \(x\in\operatorname{supp}(\mu)\) and \(y\in\operatorname{supp}(\nu)\) as \(\tilde{p}_{\varepsilon}(x,y):=e^{-\varepsilon^{-1}(c(x,y)-\tilde{f}_{ \varepsilon}(x)-g_{\varepsilon}(y))}\); note that \(\tilde{p}_{\varepsilon}\) is such that for all \(x\in\operatorname{supp}(\mu)\),
\[1=\frac{1}{n}\sum_{j=1}^{n}\tilde{p}_{\varepsilon}(x,y_{j}).\]
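In code, the rounding step (5.6) amounts to a single log-sum-exp, and the displayed marginal identity holds exactly for any potential \(g\) on the \(y\)-samples, not only for \(g_{\varepsilon}\). A minimal sketch (ours, with arbitrary toy inputs and hypothetical names):

```python
import numpy as np
from scipy.special import logsumexp

def round_f(g, C, eps):
    """f_tilde(x_i) = -eps * log((1/n) * sum_j exp(-(C[i, j] - g[j]) / eps)), as in (5.6)."""
    n = C.shape[1]
    return -eps * (logsumexp((g[None, :] - C) / eps, axis=1) - np.log(n))

rng = np.random.default_rng(2)
C = rng.random((6, 6))      # toy cost matrix c(x_i, y_j)
g = rng.normal(size=6)      # any potential on the y-samples
eps = 0.3
f_t = round_f(g, C, eps)
p_t = np.exp((f_t[:, None] + g[None, :] - C) / eps)  # rounded density p_tilde(x_i, y_j)
print(p_t.mean(axis=1))  # identically 1 (up to floating point): the displayed marginal identity
```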
This fact allows us to compare \(\frac{1}{n}\hat{p}_{\varepsilon}(x,\cdot)\) with \(\frac{1}{n}\tilde{p}_{\varepsilon}(x,\cdot)\) as probability distributions on \(\mathcal{Y}\). The following Lemma is then a consequence of Pinsker's inequality, cancellation of some terms, and a crucial use of the monotonicity of the empirical dual under marginal rounding from Proposition 13 to replace \(\tilde{f}_{\varepsilon}\) with its un-rounded counterpart \(f_{\varepsilon}\).
**Lemma 19**.: _Let \(\tilde{f}_{\varepsilon}\) and \(\tilde{p}_{\varepsilon}\) be as above, and suppose the diameter of \(\operatorname{supp}(\nu)\) is at most \(R\). Then_
\[\mathbb{E}\Big{[}\Big{\|}\hat{T}_{\varepsilon}-\frac{1}{n}\sum_{j=1}^{n}y_{j} \tilde{p}_{\varepsilon}(x,y_{j})\Big{\|}_{L^{2}(\hat{\mu})}^{2}\Big{]}\lesssim \frac{R^{2}}{\varepsilon}\mathbb{E}[S_{\varepsilon}(\hat{\mu},\hat{\nu})-S_{ \varepsilon}(\mu,\nu)].\]
However, this lemma does not yet yield Theorem 7 as it involves the rounded density \(\tilde{p}_{\varepsilon}\). The following calculation addresses this issue.
**Lemma 20**.: _Let \(\tilde{f}_{\varepsilon}\) and \(\tilde{p}_{\varepsilon}\) be as above. Then_
\[\mathbb{E}[\|\hat{T}_{\varepsilon}-T_{\varepsilon}\|_{L^{2}(\hat{\mu})}^{2}] \lesssim\mathbb{E}\Big{[}\Big{\|}\hat{T}_{\varepsilon}-\frac{1}{n}\sum_{j=1}^ {n}y_{j}\tilde{p}_{\varepsilon}(x,y_{j})\Big{\|}_{L^{2}(\hat{\mu})}^{2}\Big{]} +R^{2}\cdot\frac{\mathcal{N}(\mu,\frac{\varepsilon}{L})\wedge\mathcal{N}(\nu, \frac{\varepsilon}{L})}{n}.\]
We first dispatch Theorem 7, and then prove these lemmas.
Proof of Theorem 7.: Combining Lemma 17 with the above two lemmas yields
\[\mathbb{E}[\|\hat{T}_{\varepsilon}-T_{\varepsilon}\|_{L^{2}(\hat{\mu})}^{2}] \lesssim R^{2}\Big{\{}\frac{1}{\varepsilon}\cdot\sqrt{\frac{\mathcal{N}(\mu, \frac{\varepsilon}{L})\wedge\mathcal{N}(\nu,\frac{\varepsilon}{L})}{n}}+\frac {\mathcal{N}(\mu,\frac{\varepsilon}{L})\wedge\mathcal{N}(\nu,\frac{ \varepsilon}{L})}{n}\Big{\}}.\]
Now observe that the statement of Theorem 7 is trivial when \(\mathcal{N}(\mu,\frac{\varepsilon}{L})\wedge\mathcal{N}(\nu,\frac{\varepsilon }{L})\geqslant n\), since we always have the bound \(2R^{2}\). Hence it suffices to consider the case \(\mathcal{N}(\mu,\frac{\varepsilon}{L})\wedge\mathcal{N}(\nu,\frac{\varepsilon }{L})\leqslant n\), in which case we may conclude from the above inequality.
Proof of Lemma 19.: To prove this result, let us first assume that \(0\in\operatorname{supp}(\nu)\) so that for each \(y\in\operatorname{supp}(\nu)\), we have \(\|y\|\leqslant R\). Then for each \(x\in\mathcal{X}\), we apply triangle inequality and Pinsker's inequality to see that
\[\Big{\|}\hat{T}_{\varepsilon}(x)-\frac{1}{n}\sum_{j=1}^{n}y_{j} \tilde{p}_{\varepsilon}(x,y_{j})\Big{\|} \leqslant\frac{R}{n}\sum_{j=1}^{n}|\tilde{p}_{\varepsilon}(x,y_{j} )-\hat{p}_{\varepsilon}(x,y_{j})|\] \[\lesssim R\sqrt{\operatorname{KL}(\frac{1}{n}\hat{p}_{\varepsilon }(x,\cdot)\,\|\,\frac{1}{n}\tilde{p}_{\varepsilon}(x,\cdot))}.\]
Therefore,
\[\Big{\|}\hat{T}_{\varepsilon}-\frac{1}{n}\sum_{j=1}^{n}y_{j} \tilde{p}_{\varepsilon}(x,y_{j}) \Big{\|}_{L^{2}(\hat{\mu})}^{2}\lesssim R^{2}\frac{1}{n}\sum_{i=1}^ {n}\operatorname{KL}(\frac{1}{n}\hat{p}_{\varepsilon}(x_{i},\cdot)\,\|\,\frac {1}{n}\tilde{p}_{\varepsilon}(x_{i},\cdot))\] \[=\frac{R^{2}}{\varepsilon}\sum_{i,j=1}^{n}\frac{1}{n^{2}}\hat{p} _{\varepsilon}(x_{i},y_{j})(\hat{f}_{\varepsilon}(x_{i})-\tilde{f}_{ \varepsilon}(x_{i})+\hat{g}_{\varepsilon}(y_{j})-g_{\varepsilon}(y_{j}))\] \[=\frac{R^{2}}{\varepsilon}\big{(}\hat{\mu}(\hat{f}_{\varepsilon }-\tilde{f}_{\varepsilon})+\hat{\nu}(\hat{g}_{\varepsilon}-g_{\varepsilon}) \big{)},\]
where we used the marginal constraints of \(\hat{p}_{\varepsilon}\) for the last step. Using the notation, as in the previous section, \(\hat{\Phi}_{\varepsilon}:=\Phi_{\varepsilon}^{\hat{\mu}\hat{\nu}}\), we recognize the result as a difference of \(\hat{\Phi}_{\varepsilon}\), and apply Proposition 13 on marginal rounding improving dual objective value, to conclude that
\[\Big{\|}\hat{T}_{\varepsilon}-\frac{1}{n}\sum_{j=1}^{n}y_{j} \tilde{p}_{\varepsilon}(x,y_{j})\|_{L^{2}(\hat{\mu})}^{2} \lesssim\frac{R^{2}}{\varepsilon}\big{(}\hat{\mu}(\hat{f}_{ \varepsilon}-\tilde{f}_{\varepsilon})+\hat{\nu}(\hat{g}_{\varepsilon}-g_{ \varepsilon})\big{)}\] \[=\frac{R^{2}}{\varepsilon}\big{(}\hat{\Phi}_{\varepsilon}(\hat{f }_{\varepsilon},\hat{g}_{\varepsilon})-\hat{\Phi}_{\varepsilon}(\tilde{f}_{ \varepsilon},g_{\varepsilon})\big{)}\] \[\leqslant\frac{R^{2}}{\varepsilon}\big{(}\hat{\Phi}_{\varepsilon }(\hat{f}_{\varepsilon},\hat{g}_{\varepsilon})-\hat{\Phi}_{\varepsilon}(f_{ \varepsilon},g_{\varepsilon})\big{)}.\]
Taking expectations, \(\mathbb{E}[\hat{\Phi}_{\varepsilon}(f_{\varepsilon},g_{\varepsilon})]=\Phi_{ \varepsilon}(f_{\varepsilon},g_{\varepsilon})\), and we conclude the result. For the general case where \(0\) may not be in \(\operatorname{supp}(\nu)\), we can perform the above argument with suitably offset \(y\).
Proof of Lemma 20.: As above, let us first assume that \(0\in\operatorname{supp}(\nu)\) so that for each \(y\in\operatorname{supp}(\nu)\), we have \(\|y\|\leqslant R\). We apply Young's inequality twice, first comparing \(T_{\varepsilon}(x)\) to the empirical version \(\frac{1}{n}\sum_{j=1}^{n}y_{j}p_{\varepsilon}(x,y_{j})\), and then the empirical version to the version involving \(\tilde{p}_{\varepsilon}\):
\[\mathbb{E}[\|\hat{T}_{\varepsilon}-T_{\varepsilon}\|_{L^{2}(\hat{ \mu})}^{2}] \lesssim\mathbb{E}\Big{[}\Big{\|}\hat{T}_{\varepsilon}-\frac{1}{n} \sum_{j=1}^{n}y_{j}\tilde{p}_{\varepsilon}(x,y_{j})\Big{\|}_{L^{2}(\hat{\mu})} ^{2}\Big{]}\] \[+\mathbb{E}\Big{[}\Big{\|}\frac{1}{n}\sum_{j=1}^{n}y_{j}(\tilde{p }_{\varepsilon}(x,y_{j})-p_{\varepsilon}(x,y_{j}))\Big{\|}_{L^{2}(\hat{\mu})} ^{2}\Big{]}\] \[+\mathbb{E}\Big{[}\Big{\|}\frac{1}{n}\sum_{j=1}^{n}y_{j}p_{ \varepsilon}(x,y_{j})-T_{\varepsilon}(x)\Big{\|}_{L^{2}(\hat{\mu})}^{2}\Big{]}\]
The third term is controlled as
\[\mathbb{E}\Big{[}\Big{\|}\frac{1}{n} \sum_{j=1}^{n}y_{j}p_{\varepsilon}(x,y_{j})-T_{\varepsilon}(x) \Big{\|}_{L^{2}(\hat{\mu})}^{2}\Big{]}\] \[=\mathbb{E}\Big{[}\frac{1}{n^{2}}\sum_{j,k=1}^{n}\langle y_{j}p_{ \varepsilon}(x,y_{j})-T_{\varepsilon}(x),y_{k}p_{\varepsilon}(x,y_{k})-T_{ \varepsilon}(x)\rangle_{L^{2}(\hat{\mu})}\Big{]}\] \[=\frac{1}{n}\|yp_{\varepsilon}(x,y)-T_{\varepsilon}(x)\|_{L^{2}( \mu\otimes\nu)}^{2}\lesssim\frac{R^{2}}{n}\|p_{\varepsilon}\|_{L^{2}(\mu \otimes\nu)}^{2},\]
where the second equality follows because, for \(j\neq k\), \(y_{j}\) and \(y_{k}\) are iid draws from \(\nu\), so that these terms cancel. For the second term, we begin by observing that for each \(y_{j}\),
\[\tilde{p}_{\varepsilon}(x,y_{j})=\frac{p_{\varepsilon}(x,y_{j})}{\frac{1}{n} \sum_{k=1}^{n}p_{\varepsilon}(x,y_{k})}.\]
We thus apply triangle inequality and this equation to see that
\[\Big{\|}\frac{1}{n}\sum_{j=1}^{n}y_{j}(\tilde{p}_{\varepsilon}(x, y_{j})-p_{\varepsilon}(x,y_{j}))\Big{\|} \leqslant\frac{R}{n}\sum_{j=1}^{n}|\tilde{p}_{\varepsilon}(x,y_{j} )-p_{\varepsilon}(x,y_{j})|\] \[=\frac{R}{n}\sum_{j=1}^{n}p_{\varepsilon}(x,y_{j})\Big{|}\frac{1} {\frac{1}{n}\sum_{k=1}^{n}p_{\varepsilon}(x,y_{k})}-1\Big{|}\] \[=R\Big{|}\frac{1}{n}\sum_{j=1}^{n}p_{\varepsilon}(x,y_{j})-1\Big{|}.\]
Taking the \(\|\cdot\|_{L^{2}(\hat{\mu})}^{2}\) norm, we recognize this as the squared norm of the first component of \(\nabla\hat{\Phi}_{\varepsilon}(f_{\varepsilon},g_{\varepsilon})\), and use the bound in Equation (5.2) to conclude
\[\mathbb{E}\Big{[}\Big{\|}\frac{1}{n}\sum_{j=1}^{n}y_{j}(\tilde{p }_{\varepsilon}(x,y_{j})-p_{\varepsilon}(x,y_{j}))\Big{\|}_{L^{2}(\hat{\mu})} ^{2}\Big{]} \leqslant R^{2}\mathbb{E}[\|\nabla\hat{\Phi}_{\varepsilon}(f_{ \varepsilon},g_{\varepsilon})\|_{L^{2}(\hat{\mu})\times L^{2}(\hat{\nu})}^{2}]\] \[\lesssim\frac{R^{2}}{n}\|p_{\varepsilon}\|_{L^{2}(\mu\otimes\nu )}^{2}.\]
Applying Lemma 16 yields the result. For the general case where \(0\) may not be in \(\operatorname{supp}(\nu)\), we can perform the above argument with suitably offset \(y\).
### MID scaling for densities in empirical norm
We prove Theorem 8 on empirical norm convergence of the entropic OT density using the techniques introduced in the previous section. We prove the following two lemmas.
**Lemma 21**.: _Let \(\tilde{f}_{\varepsilon},\tilde{p}_{\varepsilon}\) be as in the previous section. Then_
\[\mathbb{E}[\|\hat{p}_{\varepsilon}-\tilde{p}_{\varepsilon}\|_{L^{1}(\hat{\mu} \otimes\hat{\nu})}]\lesssim\frac{1}{\sqrt{\varepsilon}}\mathbb{E}[S_{ \varepsilon}(\hat{\mu},\hat{\nu})-S_{\varepsilon}(\mu,\nu)]^{1/2}\]
**Lemma 22**.: _Let \(\tilde{f}_{\varepsilon},\tilde{p}_{\varepsilon}\) be as in the previous section. Then_
\[\mathbb{E}[\|p_{\varepsilon}-\hat{p}_{\varepsilon}\|_{L^{1}(\hat{\mu}\otimes \tilde{\nu})}]\lesssim\mathbb{E}[\|\hat{p}_{\varepsilon}-\tilde{p}_{\varepsilon }\|_{L^{1}(\hat{\mu}\otimes\tilde{\nu})}]+\sqrt{\frac{\mathcal{N}(\mu,\frac{ \varepsilon}{L})\wedge\mathcal{N}(\nu,\frac{\varepsilon}{L})}{n}}.\]
We first dispatch Theorem 8, and then prove these lemmas.
Proof of Theorem 8.: Combining the above two lemmas with Lemma 17 yields
\[\mathbb{E}[\|p_{\varepsilon}-\hat{p}_{\varepsilon}\|_{L^{1}(\hat{\mu}\otimes \tilde{\nu})}]\lesssim\frac{1}{\sqrt{\varepsilon}}\Big{(}\frac{\mathcal{N}( \mu,\frac{\varepsilon}{L})\wedge\mathcal{N}(\nu,\frac{\varepsilon}{L})}{n} \Big{)}^{1/4}+\sqrt{\frac{\mathcal{N}(\mu,\frac{\varepsilon}{L})\wedge \mathcal{N}(\nu,\frac{\varepsilon}{L})}{n}}.\]
Now, observe that the statement of Theorem 8 is trivial when \(\mathcal{N}(\mu,\frac{\varepsilon}{L})\wedge\mathcal{N}(\nu,\frac{\varepsilon }{L})\geqslant n\), since we always have the bound \(\lesssim 1\) by triangle inequality. Hence it suffices to consider the case \(\mathcal{N}(\mu,\frac{\varepsilon}{L})\wedge\mathcal{N}(\nu,\frac{\varepsilon }{L})\leqslant n\), in which case we may conclude from the above inequality.
Proof of Lemma 21.: Pinsker's and Jensen's inequalities imply
\[\|\tilde{p}_{\varepsilon}-\hat{p}_{\varepsilon}\|_{L^{1}(\hat{ \mu}\otimes\tilde{\nu})} \lesssim\frac{1}{n}\sum_{i=1}^{n}\sqrt{\operatorname{KL}(\frac{1}{n} \hat{p}_{\varepsilon}(x_{i},\cdot)\parallel\frac{1}{n}\tilde{p}_{\varepsilon} (x_{i},\cdot))}\] \[\leqslant\Big{(}\frac{1}{n}\sum_{i=1}^{n}\operatorname{KL}(\frac{ 1}{n}\hat{p}_{\varepsilon}(x_{i},\cdot)\parallel\frac{1}{n}\tilde{p}_{ \varepsilon}(x_{i},\cdot))\Big{)}^{1/2}.\]
Taking the expectation and applying Jensen's once more yields
\[\mathbb{E}[\|\tilde{p}_{\varepsilon}-\hat{p}_{\varepsilon}\|_{L^{1}(\hat{\mu} \otimes\tilde{\nu})}]\lesssim\mathbb{E}\Big{[}\frac{1}{n}\sum_{i=1}^{n} \operatorname{KL}(\frac{1}{n}\hat{p}_{\varepsilon}(x_{i},\cdot)\parallel\frac{ 1}{n}\tilde{p}_{\varepsilon}(x_{i},\cdot))\Big{]}^{1/2}.\]
The statement then follows as in Lemma 19.
Proof of Lemma 22.: Apply triangle inequality:
\[\mathbb{E}[\|p_{\varepsilon}-\hat{p}_{\varepsilon}\|_{L^{1}(\hat{\mu}\otimes \tilde{\nu})}]\leqslant\mathbb{E}[\|\hat{p}_{\varepsilon}-\tilde{p}_{ \varepsilon}\|_{L^{1}(\hat{\mu}\otimes\tilde{\nu})}]+\mathbb{E}[\|\tilde{p}_{ \varepsilon}-p_{\varepsilon}\|_{L^{1}(\hat{\mu}\otimes\tilde{\nu})}].\]
For the second term we use the same reasoning as in the proof of Lemma 20 to observe that
\[\|\tilde{p}_{\varepsilon}-p_{\varepsilon}\|_{L^{1}(\hat{\mu}\otimes\tilde{ \nu})}=\frac{1}{n^{2}}\sum_{i,j=1}^{n}|\tilde{p}_{\varepsilon}(x_{i},y_{j})-p _{\varepsilon}(x_{i},y_{j})|=\frac{1}{n}\sum_{i=1}^{n}\Big{|}\frac{1}{n}\sum_{j =1}^{n}p_{\varepsilon}(x_{i},y_{j})-1\Big{|}.\]
Applying Cauchy-Schwarz we find that
\[\|\tilde{p}_{\varepsilon}-p_{\varepsilon}\|_{L^{1}(\hat{\mu}\otimes\tilde{ \nu})}\leqslant\Big{(}\frac{1}{n}\sum_{i=1}^{n}\Big{(}\frac{1}{n}\sum_{j=1}^{n} p_{\varepsilon}(x_{i},y_{j})-1\Big{)}^{2}\Big{)}^{1/2}\leqslant\|\nabla\hat{ \Phi}_{\varepsilon}(f_{\varepsilon},g_{\varepsilon})\|_{L^{2}(\hat{\mu}) \times L^{2}(\hat{\nu})}.\]
Taking expectations and applying Cauchy-Schwarz once more, we can apply Equation (5.2) to yield
\[\mathbb{E}[\|\tilde{p}_{\varepsilon}-p_{\varepsilon}\|_{L^{1}(\hat{\mu} \otimes\tilde{\nu})}]\leqslant\mathbb{E}[\|\nabla\hat{\Phi}_{\varepsilon}(f_{ \varepsilon},g_{\varepsilon})\|_{L^{2}(\hat{\mu})\times L^{2}(\hat{\nu})}^{2} ]^{1/2}\lesssim\frac{\|p_{\varepsilon}\|_{L^{2}(\mu\otimes\nu)}}{\sqrt{n}}.\]
Finally, Lemma 16 yields the result.
## 6 Proofs of MID scaling with fast rates on manifolds
In this section, we give the proofs of our stronger results when \(\nu\) is supported on an embedded Riemannian manifold. To introduce our approach, recall that fast rates of convergence for the entropic OT dual potentials, map, density, and bias are known, but at the price of exponential factors in \(1/\varepsilon\)[13]. As we mention in the previous section, those estimates are established through strong concavity of the empirical dual objective, but because the strong concavity parameter is exponentially small, such an approach is inadequate for establishing MID scaling. The techniques in the previous section represent a very weak form of this approach, using only concavity rather than strong concavity to replace exponential factors with MID scaling. In this section, we power our arguments with a _quadratic growth_ condition [12], which is a stronger condition than concavity alone, yet still weaker than true strong concavity.
In this regard, our approach is inspired by recent work [16], which has found that a sufficient condition for a quadratic growth condition for the entropic OT dual is a Poincare inequality. Indeed, [16] shows that given probability measures \(P,Q\) on \(\mathbb{R}^{d}\), if \(P\) satisfies a Poincare inequality with constant \(C_{P}\), then for any dual potentials \(f,g\),
\[\|f-f_{\varepsilon}^{PQ}\|_{L^{2}(P)}^{2}\lesssim\frac{L^{2}}{ \varepsilon C_{P}}(\Phi_{\varepsilon}^{PQ}(f_{\varepsilon}^{PQ},g_{ \varepsilon}^{PQ})-\Phi_{\varepsilon}^{PQ}(f,g)), \tag{6.1}\]
where \(f_{\varepsilon}^{PQ},g_{\varepsilon}^{PQ}\) are the optimal dual potentials from \(P\) to \(Q\) and we are suppressing numerical constants. However, this estimate cannot directly establish quadratic growth for the _empirical_ dual \(\hat{\Phi}_{\varepsilon}\) since it relies on a Poincare inequality for the source measure \(P\). We emphasize that our approach is fundamentally about the empirical dual rather than the population dual, since working with the empirical dual is what allows us to bypass empirical processes by reducing to computing variances of population quantities (consider, for example, our proof of Lemma 17).
As our first major step in this section, we develop quadratic growth inequalities for the empirical dual. In particular, in section 6.1 we use different techniques than [16] to show that under an empirical analog of a Poincare inequality - a spectral gap of the random geometric graph (RGG) of \(\nu\) - an analog of (6.1) holds for the empirical dual \(\hat{\Phi}_{\varepsilon}\). Using recent advances in the theory of RGGs on embedded manifolds [10] which we describe in Appendix A.2, a spectral gap for the RGG of \(\nu\) then follows from a Poincare inequality for \(\nu\) itself, which is entailed by Assumption 2 and Assumption 3. To summarize, the primary uses of our embedded manifold assumptions are that they imply a spectral gap for a weighted Laplacian associated to \(\nu\) (Proposition 37), which then implies a spectral gap for the RGG of \(\nu\) using spectral convergence theory [10], which finally yields quadratic growth for the empirical dual; this argument is encapsulated in Lemma 23.
After we prove Lemma 23 in section 6.1, in section 6.2 we next bound the \(f\) dual potentials in terms of the \(g\) dual potentials, both in population and empirical norms, while only picking up MID scaling factors. These results are combined in section 6.3 to yield convergence of the \(g\) dual potentials in empirical norm, which then implies Theorem 12 and all of Theorem 9 on dual potential convergence except the convergence of the \(g\) dual potentials in population norm. In section 6.4, we complete the proof of Theorem 9 by
applying the previous results to establish the convergence of the \(g\) dual potentials in population norm. Finally, in section 6.5 we prove Corollary 10 and Corollary 11.
Throughout, we work under the assumption that \(n\) is large enough that
\[\big{(}\frac{\varepsilon}{L}\big{)}^{2d_{\nu}}\gtrsim\frac{1}{n}\Big{(}\log \Big{(}\frac{Ln}{\varepsilon}\Big{)}+\frac{1}{\varepsilon}\Big{)}. \tag{6.2}\]
We may assume \(n\) is this large without loss of generality, since otherwise it is not hard to check that our bounds are worse than the trivial ones.
### Quadratic growth for the empirical dual
In this section, we prove the following quadratic growth inequality for the empirical entropic OT dual function.
**Lemma 23** (Quadratic growth for the empirical dual).: _Let \(\Psi_{\varepsilon}^{\hat{\mu}\hat{\nu}}\colon L^{\infty}(\hat{\nu})\to\mathbb{R}\) denote the empirical dual objective with \(f\) rounded, namely_
\[\Psi_{\varepsilon}^{\hat{\mu}\hat{\nu}}(g):=\hat{\mu}\big{(}-\varepsilon\log \big{(}\frac{1}{n}\sum_{j=1}^{n}e^{-\varepsilon^{-1}(c(x,y_{j})-g(y_{j}))} \big{)}\big{)}+\hat{\nu}(g).\]
_Then, for \(\varepsilon\) sufficiently small, we have_
\[\varepsilon\Big{(}\frac{\varepsilon}{L}\Big{)}^{d_{\nu}+2}\cdot\|g-\hat{g}_{ \varepsilon}\|_{L^{2}(\hat{\nu})}^{2}\lesssim\|g-\hat{g}_{\varepsilon}\|_{L^{ \infty}(\hat{\nu})}^{2}(\Psi_{\varepsilon}^{\hat{\mu}\hat{\nu}}(\hat{g}_{ \varepsilon})-\Psi_{\varepsilon}^{\hat{\mu}\hat{\nu}}(g)),\qquad\forall g\in L ^{\infty}(\hat{\nu}),\ \hat{\nu}(g)=0.\]
This lemma is proven by reducing the claim to a spectral gap for the random geometric graph (RGG) of \(\nu\). Using recent fundamental work on the spectral convergence of RGGs of embedded manifolds [10], we conclude the result. We include this background material in Appendix A.2.
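Before turning to the proof, it may help to see \(\Psi_{\varepsilon}^{\hat{\mu}\hat{\nu}}\) concretely. The following minimal sketch (ours, and not used anywhere in the argument) evaluates the rounded empirical dual objective at a candidate potential \(g\); a squared-Euclidean cost and the function name are illustrative choices, not requirements of the text.

```python
import numpy as np

def dual_objective_rounded(g, x, y, eps):
    """Psi(g) = mean_i[ -eps * log( (1/n) * sum_j exp(-(c(x_i, y_j) - g_j) / eps) ) ] + mean_j[g_j],
    with c(x, y) = 0.5 * ||x - y||^2 (an illustrative choice only)."""
    cost = 0.5 * np.sum((x[:, None, :] - y[None, :, :]) ** 2, axis=-1)  # c(x_i, y_j)
    a = -(cost - g[None, :]) / eps
    m = a.max(axis=1, keepdims=True)                       # stabilize the log-sum-exp
    f_rounded = -eps * (np.log(np.exp(a - m).mean(axis=1)) + m[:, 0])
    return f_rounded.mean() + g.mean()

# Example usage on a toy sample; any bounded g with empirical mean zero is admissible.
rng = np.random.default_rng(0)
x, y = rng.normal(size=(50, 3)), rng.normal(size=(50, 3))
g = rng.normal(size=50); g -= g.mean()
print(dual_objective_rounded(g, x, y, eps=0.5))
```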
Proof of Lemma 23.: Fix \(g\in L^{\infty}(\hat{\nu})\) with \(\hat{\nu}(g)=0\), and put \(\alpha:=g-\hat{g}_{\varepsilon}\). Let
\[\psi(t):=-\Psi_{\varepsilon}^{\hat{\mu}\hat{\nu}}(\hat{g}_{\varepsilon}+t \alpha),\]
and also define
\[q_{t}(x_{i},y_{j}):=\frac{e^{-\varepsilon^{-1}(c(x_{i},y_{j})-(\hat{g}_{\varepsilon}(y_{j})+t\alpha(y_{j})))}}{\frac{1}{n}\sum_{k=1}^{n}e^{-\varepsilon^{-1}(c(x_{i},y_{k})-(\hat{g}_{\varepsilon}(y_{k})+t\alpha(y_{k})))}}.\]
Then it is straightforward to verify that \(\psi\) is a convex function with minimum attained at \(t=0\) such that
\[\psi^{\prime\prime}(t)=\frac{1}{\varepsilon}\hat{\mu}\big{(}\operatorname{ Var}_{\frac{1}{n}q_{t}(x,\cdot)}(\alpha)\big{)}.\]
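For the reader's convenience, the intermediate computation (nothing beyond differentiating the finite sums in the definition of \(q_{t}\)) is

\[\psi^{\prime}(t)=\hat{\mu}\Big{(}\frac{1}{n}\sum_{j=1}^{n}\alpha(y_{j})q_{t}(x,y_{j})\Big{)}-\hat{\nu}(\alpha),\qquad\psi^{\prime\prime}(t)=\frac{1}{\varepsilon}\hat{\mu}\Big{(}\frac{1}{n}\sum_{j=1}^{n}\alpha(y_{j})^{2}q_{t}(x,y_{j})-\Big{(}\frac{1}{n}\sum_{j=1}^{n}\alpha(y_{j})q_{t}(x,y_{j})\Big{)}^{2}\Big{)},\]

and the second expression is precisely the variance appearing above, since \(\frac{1}{n}q_{t}(x,\cdot)\) is a probability vector on \(\{y_{1},\ldots,y_{n}\}\).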
Note that,
\[q_{t}(x_{i},y_{j})\geqslant e^{-2t\varepsilon^{-1}\|\alpha\|_{L^{\infty}( \hat{\nu})}}q_{0}(x_{i},y_{j})=e^{-2t\varepsilon^{-1}\|\alpha\|_{L^{\infty}( \hat{\nu})}}\hat{p}_{\varepsilon}(x_{i},y_{j}).\]
Therefore,
\[\psi^{\prime\prime}(t)=\frac{1}{2\varepsilon}\hat{\mu}\big{(}\mathbb{E}_{y,y^{\prime}\sim\frac{1}{n}q_{t}(x,\cdot)}[(\alpha(y)-\alpha(y^{\prime}))^{2}]\big{)}\geqslant\frac{e^{-4t\varepsilon^{-1}\|\alpha\|_{L^{\infty}(\hat{\nu})}}}{2\varepsilon}\hat{\mu}\big{(}\mathbb{E}_{y,y^{\prime}\sim\frac{1}{n}\hat{p}_{\varepsilon}(x,\cdot)}[(\alpha(y)-\alpha(y^{\prime}))^{2}]\big{)}.\]
It follows that, for \(t>0\),
\[t^{2}\frac{e^{-4t\varepsilon^{-1}\|\alpha\|_{L^{\infty}(\hat{\nu})}}}{4\varepsilon}\hat{\mu}\big{(}\mathbb{E}_{y,y^{\prime}\sim\frac{1}{n}\hat{p}_{\varepsilon}(x,\cdot)}[(\alpha(y)-\alpha(y^{\prime}))^{2}]\big{)}\leqslant\psi(t)-\psi(0)\leqslant\psi(1)-\psi(0),\]
where the first inequality is strong convexity for \(\psi\) on the interval \((-\infty,t]\), and the second inequality follows because \(\psi\) is a convex function with minimizer at \(t=0\). If \(\|\alpha\|_{L^{\infty}(\hat{\nu})}=0\), then the inequality is trivially true. Otherwise, we may set \(t=\varepsilon/4\|\alpha\|_{L^{\infty}(\hat{\nu})}\) to yield
\[\frac{\varepsilon}{\|\alpha\|_{L^{\infty}(\hat{\nu})}^{2}}\hat{\mu}\big{(}\mathbb{E}_{y,y^{\prime}\sim\frac{1}{n}\hat{p}_{\varepsilon}(x,\cdot)}[(\alpha(y)-\alpha(y^{\prime}))^{2}]\big{)}\lesssim\psi(1)-\psi(0).\]
It thus suffices to show that
\[\hat{\mu}\big{(}\mathbb{E}_{y,y^{\prime}\sim\frac{1}{n}\hat{p}_{\varepsilon}(x,\cdot)}[(\alpha(y)-\alpha(y^{\prime}))^{2}]\big{)}\gtrsim\Big{(}\frac{\varepsilon}{L}\Big{)}^{d_{\nu}+2}\hat{\nu}(\alpha^{2}). \tag{6.3}\]
Observe that
\[\hat{\mu}\big{(}\mathbb{E}_{y,y^{\prime}\sim\frac{1}{n}\hat{p}_{\varepsilon}(x,\cdot)}[(\alpha(y)-\alpha(y^{\prime}))^{2}]\big{)}=\frac{1}{n^{2}}\sum_{j,k=1}^{n}\Big{(}\frac{1}{n}\sum_{i=1}^{n}\hat{p}_{\varepsilon}(x_{i},y_{j})\hat{p}_{\varepsilon}(x_{i},y_{k})\Big{)}(\alpha(y_{j})-\alpha(y_{k}))^{2}.\]
By Proposition 15,
\[\hat{p}_{\varepsilon}(x_{i},y_{j})\geqslant e^{-2\frac{L}{\varepsilon}\|y_{j }-y_{k}\|}\hat{p}_{\varepsilon}(x_{i},y_{k}),\qquad\forall j,k\in[n].\]
Hence
\[\frac{1}{n^{2}}\sum_{j,k=1}^{n}\Big{(}\frac{1}{n}\sum_{i=1}^{n}\hat{p}_{\varepsilon}(x_{i},y_{j})\hat{p}_{\varepsilon}(x_{i},y_{k})\Big{)}(\alpha(y_{j})-\alpha(y_{k}))^{2}\geqslant\frac{1}{n^{2}}\sum_{j,k=1}^{n}e^{-2\frac{L}{\varepsilon}\|y_{j}-y_{k}\|}\Big{(}\frac{1}{n}\sum_{i=1}^{n}\hat{p}_{\varepsilon}(x_{i},y_{k})^{2}\Big{)}(\alpha(y_{j})-\alpha(y_{k}))^{2}\]
\[\geqslant\frac{1}{n^{2}}\sum_{j,k=1}^{n}e^{-2\frac{L}{\varepsilon}\|y_{j}-y_{k}\|}\Big{(}\frac{1}{n}\sum_{i=1}^{n}\hat{p}_{\varepsilon}(x_{i},y_{k})\Big{)}^{2}(\alpha(y_{j})-\alpha(y_{k}))^{2}\]
\[\geqslant\frac{e^{-2}}{n^{2}}\sum_{j,k\colon\|y_{j}-y_{k}\|<\varepsilon/L}(\alpha(y_{j})-\alpha(y_{k}))^{2},\]
where the second inequality is Jensen's and the third uses the \(\hat{\nu}\)-marginal constraint \(\frac{1}{n}\sum_{i=1}^{n}\hat{p}_{\varepsilon}(x_{i},y_{k})=1\) together with the restriction to pairs satisfying \(\|y_{j}-y_{k}\|<\varepsilon/L\). Up to a constant depending only on \(d_{\nu}\), the last quantity equals \((\varepsilon/L)^{d_{\nu}+2}D(\alpha)\), where \(D\) is the Dirichlet form of the random geometric graph of \(\nu\) with threshold \(\delta=\varepsilon/L\) defined in Appendix A.2. Since (6.2) implies \((\varepsilon/L)^{d_{\nu}}\gtrsim(\log n)/n\), Theorem 36 applies and gives, on an event of probability at least \(1-C/n\), that \(D(\alpha)\gtrsim\hat{\nu}((\alpha-\hat{\nu}(\alpha))^{2})=\hat{\nu}(\alpha^{2})\), the equality holding because both \(g\) and \(\hat{g}_{\varepsilon}\) integrate to zero under \(\hat{\nu}\) (the latter by our normalization of the empirical potentials). This is exactly (6.3), completing the proof.
### Reduction from \(f\) dual potentials to \(g\) dual potentials
In this section, we will prove the following Lemma, which shows that although the \(f\) dual potentials are defined on the support of \(\mu\) - a potentially higher dimensional set than the support of \(\nu\) - their convergence can be controlled by the convergence of the \(g\) dual potentials while only incurring MID scaling factors.
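Computationally, the reduction reflects the fact that an \(f\) potential is recovered from a \(g\) potential by a soft \(c\)-transform over the sample \(y_{1},\ldots,y_{n}\), exactly as in the definition of \(\tilde{f}_{\varepsilon}\) below. The following short sketch (ours; a squared-Euclidean cost is assumed only for concreteness) performs this extension in a numerically stable way: evaluated with \(\hat{g}_{\varepsilon}\) it produces the canonical extension of \(\hat{f}_{\varepsilon}\), and evaluated with \(g_{\varepsilon}\) it produces the intermediate quantity \(\tilde{f}_{\varepsilon}\) used in the proof.

```python
import numpy as np

def f_from_g(x_query, y, g, eps):
    """Soft c-transform: f(x) = -eps * log( (1/n) * sum_j exp(-(c(x, y_j) - g_j) / eps) ),
    evaluated at arbitrary query points x; c(x, y) = 0.5 * ||x - y||^2 is an
    illustrative choice only."""
    cost = 0.5 * np.sum((x_query[:, None, :] - y[None, :, :]) ** 2, axis=-1)
    a = -(cost - g[None, :]) / eps
    m = a.max(axis=1, keepdims=True)  # log-sum-exp stabilization
    return -eps * (np.log(np.exp(a - m).mean(axis=1)) + m[:, 0])
```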
**Lemma 24** (Reduction from \(f\) dual potentials to \(g\) dual potentials).: _We have_
\[\mathbb{E}[\|\hat{f}_{\varepsilon}-f_{\varepsilon}\|_{L^{2}(\hat{\mu})}^{2}] \lesssim\Big{(}\frac{L}{\varepsilon}\Big{)}^{3d_{\nu}}\big{(}\mathbb{E}[\|\hat {g}_{\varepsilon}-g_{\varepsilon}\|_{L^{2}(\hat{\nu})}^{2}]+\frac{1+ \varepsilon^{2}}{n}\big{)},\]
_and similarly_
\[\mathbb{E}[\|\hat{f}_{\varepsilon}-f_{\varepsilon}\|_{L^{2}(\mu)}^{2}] \lesssim\Big{(}\frac{L}{\varepsilon}\Big{)}^{3d_{\nu}}\big{(}\mathbb{E}[\|\hat {g}_{\varepsilon}-g_{\varepsilon}\|_{L^{2}(\hat{\nu})}^{2}]+\frac{1+ \varepsilon^{2}}{n}\big{)}.\]
To prove Lemma 24, as well as for the proofs in the following sections, the following helper lemmas are convenient (proven in Appendix B.2).
**Lemma 25** (Mass of empirical balls).: _With probability at least \(1-\frac{1}{n}e^{-10/\varepsilon}\),_
\[\inf_{z\in N}\hat{\nu}(B(z,\frac{\varepsilon}{L}))\gtrsim\Big{(}\frac{ \varepsilon}{L}\Big{)}^{d_{\nu}}.\]
**Lemma 26** (Uniform control on entropic densities).: _We have the uniform bound,_
\[\|p_{\varepsilon}\|_{L^{\infty}(\mu\otimes\nu)}\lesssim\Big{(}\frac{L}{ \varepsilon}\Big{)}^{d_{\nu}}.\]
_And, with probability at least \(1-\frac{1}{n}e^{-10/\varepsilon}\),_
\[\|\hat{p}_{\varepsilon}\|_{L^{\infty}(\mu\otimes\nu)}\lesssim\Big{(}\frac{L}{ \varepsilon}\Big{)}^{d_{\nu}}.\]
Proof of Lemma 24.: Let
\[\tilde{f}_{\varepsilon}(x):=-\varepsilon\log\Big{(}\frac{1}{n}\sum_{j=1}^{n}e ^{-\frac{1}{\varepsilon}(c(x,y_{j})-g_{\varepsilon}(y_{j}))}\Big{)}.\]
Note that, for all \(x\in\operatorname{supp}\mu\),
\[\left(\hat{f}_{\varepsilon}(x)-f_{\varepsilon}(x)\right)^{2}\lesssim\left( \hat{f}_{\varepsilon}(x)-\tilde{f}_{\varepsilon}(x)\right)^{2}+\left(\tilde{ f}_{\varepsilon}(x)-f_{\varepsilon}(x)\right)^{2},\]
and we'll begin by studying each of these terms separately.
For the first term, put \(\alpha:=g_{\varepsilon}-\hat{g}_{\varepsilon}\), and let \(\varphi\colon[0,1]\to\mathbb{R}\) be defined as
\[\varphi(t):=-\varepsilon\log\Big{(}\frac{1}{n}\sum_{j=1}^{n}e^{-\varepsilon^{-1}(c(x,y_{j})-\hat{g}_{\varepsilon}(y_{j})-t\alpha(y_{j}))}\Big{)},\]
and notice \(\varphi(0)=\hat{f}_{\varepsilon}(x)\) while \(\varphi(1)=\tilde{f}_{\varepsilon}(x)\). For \(y\in\mathrm{supp}(\nu)\), put
\[q_{t}(x,y):=\frac{e^{-\varepsilon^{-1}(c(x,y)-\hat{g}_{\varepsilon}(y)-t\alpha( y))}}{\frac{1}{n}\sum_{k=1}^{n}e^{-\varepsilon^{-1}(c(x,y_{k})-\hat{g}_{ \varepsilon}(y_{k})-t\alpha(y_{k}))}}\]
Then
\[|\varphi^{\prime}(t)|=\Big{|}\frac{1}{n}\sum_{j=1}^{n}\alpha(y_{j})q_{t}(x,y_{ j})\Big{|}\leqslant\|\alpha\|_{L^{2}(\hat{\nu})}\Big{(}\frac{1}{n}\sum_{j=1}^{n}q_ {t}(x,y_{j})^{2}\Big{)}^{1/2}.\]
Using Proposition 15, we find that for any \(y,y^{\prime}\in\mathrm{supp}(\nu)\) and \(t\in[0,1]\),
\[|\log q_{t}(x,y)-\log q_{t}(x,y^{\prime})| =\frac{1}{\varepsilon}|c(x,y)-\hat{g}_{\varepsilon}(y)-t\alpha(y) -(c(x,y^{\prime})-\hat{g}_{\varepsilon}(y^{\prime})-t\alpha(y^{\prime}))|\] \[\leqslant\frac{2L}{\varepsilon}\|y-y^{\prime}\|.\]
Therefore, for any \(j\in[n]\),
\[1=\frac{1}{n}\sum_{k=1}^{n}q_{t}(x,y_{k})\gtrsim q_{t}(x,y_{j})\hat{\nu}(B(y_{ j},\frac{\varepsilon}{L}))\geqslant q_{t}(x,y_{j})\inf_{z\in N}\hat{\nu}(B(z, \frac{\varepsilon}{L})).\]
So that
\[|\hat{f}_{\varepsilon}(x)-\tilde{f}_{\varepsilon}(x)|\leqslant \int_{0}^{1}|\varphi^{\prime}(t)|\mathrm{d}t \leqslant\|\alpha\|_{L^{2}(\hat{\nu})}\int_{0}^{1}\Big{(}\frac{1}{n }\sum_{j=1}^{n}q_{t}(x,y_{j})^{2}\Big{)}^{1/2}\mathrm{d}t\] \[\lesssim\|\alpha\|_{L^{2}(\hat{\nu})}\sup_{z\in N}\hat{\nu}(B(z, \frac{\varepsilon}{L}))^{-1/2}\int_{0}^{1}\Big{(}\frac{1}{n}\sum_{j=1}^{n}q_{ t}(x,y_{j})\Big{)}^{1/2}\mathrm{d}t\] \[=\|\alpha\|_{L^{2}(\hat{\nu})}\sup_{z\in N}\hat{\nu}(B(z,\frac{ \varepsilon}{L}))^{-1/2}. \tag{6.4}\]
Now consider the \((\tilde{f}_{\varepsilon}(x)-f_{\varepsilon}(x))^{2}\) term. Note that
\[|\tilde{f}_{\varepsilon}(x)-f_{\varepsilon}(x)|=\Big{|}\varepsilon\log\Big{(} \frac{1}{n}\sum_{j=1}^{n}p_{\varepsilon}(x,y_{j})\Big{)}\Big{|}.\]
Since
\[1=\int p_{\varepsilon}(x,y)\mathrm{d}\nu(y),\]
there must exist some \(y^{\prime}\in\mathrm{supp}(\nu)\) such that \(p_{\varepsilon}(x,y^{\prime})\geqslant 1\). By Proposition 15, this implies
\[\frac{1}{n}\sum_{j=1}^{n}p_{\varepsilon}(x,y_{j})\geqslant\frac{1}{n}\sum_{j= 1}^{n}\frac{p_{\varepsilon}(x,y_{j})}{p_{\varepsilon}(x,y^{\prime})}\geqslant \frac{1}{n}\sum_{j=1}^{n}e^{-\frac{2L}{\varepsilon}\|y_{j}-y^{\prime}\|} \gtrsim\hat{\nu}(B(y^{\prime},\frac{\varepsilon}{L})).\]
Using Lipschitz-ness of \(\log\),
\[|\tilde{f}_{\varepsilon}(x)-f_{\varepsilon}(x)|\lesssim\varepsilon\hat{\nu}(B( y^{\prime},\frac{\varepsilon}{L}))^{-1}\Big{|}\frac{1}{n}\sum_{j=1}^{n}p_{ \varepsilon}(x,y_{j})-1\Big{|}. \tag{6.5}\]
Now, let \(\mathcal{E}\) denote the event of Lemma 25. Observe that by the pointwise control in Proposition 14 and since \(\mathbb{P}[\mathcal{E}^{c}]\leqslant\frac{1}{n}\),
\[\mathbb{E}[\|\hat{f}_{\varepsilon}-f_{\varepsilon}\|_{L^{2}(\hat{\mu})}^{2}] \lesssim\mathbb{E}[\mathbb{1}[\mathcal{E}]\|\hat{f}_{\varepsilon}-f_{ \varepsilon}\|_{L^{2}(\hat{\mu})}^{2}]+\frac{1}{n}.\]
Combining Equations (6.4) and (6.5) yields
\[\mathbb{E}[\|\hat{f}_{\varepsilon}-f_{\varepsilon}\|_{L^{2}(\hat{ \mu})}^{2}] \lesssim\mathbb{E}[\mathbb{1}[\mathcal{E}]\|\hat{f}_{\varepsilon}-f_ {\varepsilon}\|_{L^{2}(\hat{\mu})}^{2}]+\frac{1}{n}\] \[\lesssim\mathbb{E}[\mathbb{1}[\mathcal{E}](\|\hat{f}_{\varepsilon }-\tilde{f}_{\varepsilon}\|_{L^{2}(\hat{\mu})}^{2}+\|\tilde{f}_{\varepsilon} -f_{\varepsilon}\|_{L^{2}(\hat{\mu})}^{2})]+\frac{1}{n}\] \[\lesssim\Big{(}\frac{L}{\varepsilon}\Big{)}^{2d_{\nu}}\mathbb{E }\Big{[}\mathbb{1}[\mathcal{E}]\Big{\{}\|\hat{g}_{\varepsilon}-g_{\varepsilon }\|_{L^{2}(\hat{\nu})}^{2}+\frac{\varepsilon^{2}}{n}\sum_{i=1}^{n}\Big{(} \frac{1}{n}\sum_{j=1}^{n}p_{\varepsilon}(x_{i},y_{j})-1\Big{)}^{2}\Big{\}} \Big{]}+\frac{1}{n}\] \[\lesssim\Big{(}\frac{L}{\varepsilon}\Big{)}^{2d_{\nu}}\Big{\{} \mathbb{E}[\|\hat{g}_{\varepsilon}-g_{\varepsilon}\|_{L^{2}(\hat{\nu})}^{2}+ \frac{\varepsilon^{2}}{n}\|\nabla\hat{\Phi}_{\varepsilon}(f_{\varepsilon},g_{ \varepsilon})\|_{L^{2}(\hat{\mu})\times L^{2}(\hat{\nu})}^{2}\Big{\}}\Big{]} +\frac{1}{n}\] \[\lesssim\Big{(}\frac{L}{\varepsilon}\Big{)}^{2d_{\nu}}\Big{\{} \mathbb{E}[\|\hat{g}_{\varepsilon}-g_{\varepsilon}\|_{L^{2}(\hat{\nu})}^{2}]+ \frac{\varepsilon^{2}}{n}\|p_{\varepsilon}\|_{L^{2}(\mu\otimes\nu)}^{2}\Big{\}} +\frac{1}{n},\]
where the final inequality follows from \(\mathbb{1}[\mathcal{E}]\leqslant 1\) and Equation (5.2). By Lemma 16 and Proposition 43 on the covering numbers of \(N\) (proved in Appendix A.3), we conclude the first inequality. The second follows in the same manner.
### MID scaling for the bias and \(g\) dual potentials in empirical norm
In this section, we use our quadratic growth inequality from section 6.1 and our bound on the \(f\) dual potentials in terms of the \(g\) dual potentials from section 6.2 to prove the following result, on convergence of the \(g\) dual potentials in empirical norm.
**Lemma 27** (Convergence of \(g\) dual potentials in empirical norm).: _We have_
\[\mathbb{E}[\|\hat{g}_{\varepsilon}-g_{\varepsilon}\|_{L^{2}(\hat{\nu})}^{2}] \lesssim\big{(}\varepsilon^{2}+\frac{1}{\varepsilon^{2}}\big{)}\cdot\Big{(} \frac{L}{\varepsilon}\Big{)}^{6d_{\nu}+4}\cdot\frac{1}{n}.\]
By the previous section, this implies convergence of the \(f\) dual potentials in both empirical and population norms, and therefore yields all parts of Theorem 9 other than the population norm convergence of the \(g\) dual potentials, which is proved in the next section. As a byproduct of our proof, we will also obtain Theorem 12, on MID scaling with fast rates for the bias.
Notice that a \(1/\sqrt{n}\) rate follows by combining our result on quadratic growth for the empirical dual, Lemma 23, with our prior result on the bias, Lemma 17. To obtain a faster rate, we proceed by a self-bounding argument.
We first bound the bias \(\mathbb{E}[\hat{\Phi}_{\varepsilon}(\hat{f}_{\varepsilon},\hat{g}_{ \varepsilon})-\hat{\Phi}_{\varepsilon}(f_{\varepsilon},g_{\varepsilon})]\) in terms of \(\|\hat{f}_{\varepsilon}-f_{\varepsilon}\|_{L^{2}(\hat{\mu})}^{2}\) and \(\|\hat{g}_{\varepsilon}-g_{\varepsilon}\|_{L^{2}(\hat{\nu})}^{2}\) plus a \(1/n\) term arising from the squared norm of \(\nabla\hat{\Phi}_{\varepsilon}(f_{\varepsilon},g_{\varepsilon})\).
**Lemma 28**.: _For all \(a>0\),_
\[\mathbb{E}[\hat{\Phi}_{\varepsilon}(\hat{f}_{\varepsilon},\hat{g}_{\varepsilon})-\hat{\Phi}_{\varepsilon}(f_{\varepsilon},g_{\varepsilon})]\lesssim\frac{1}{an}\Big{(}\frac{L}{\varepsilon}\Big{)}^{d_{\nu}}+a\mathbb{E}\big{[}\|\hat{f}_{\varepsilon}-f_{\varepsilon}\|_{L^{2}(\hat{\mu})}^{2}+\|\hat{g}_{\varepsilon}-g_{\varepsilon}\|_{L^{2}(\hat{\nu})}^{2}\big{]}.\]
Using our result on the quadratic growth of \(\hat{\Phi}_{\varepsilon}\), Lemma 23, the previous step then implies a bound for \(\|\hat{g}_{\varepsilon}-g_{\varepsilon}\|_{L^{2}(\hat{\nu})}^{2}\) in terms of itself, \(1/n\), and \(\|\hat{f}_{\varepsilon}-f_{\varepsilon}\|_{L^{2}(\hat{\mu})}^{2}\).
**Lemma 29**.: _For all \(a>0\),_

\[\mathbb{E}[\|\hat{g}_{\varepsilon}-g_{\varepsilon}\|_{L^{2}(\hat{\nu})}^{2}]\lesssim\frac{1}{\varepsilon}\Big{(}\frac{L}{\varepsilon}\Big{)}^{d_{\nu}+2}\Big{(}\frac{1}{an}\Big{(}\frac{L}{\varepsilon}\Big{)}^{d_{\nu}}+a\mathbb{E}\big{[}\|\hat{f}_{\varepsilon}-f_{\varepsilon}\|_{L^{2}(\hat{\mu})}^{2}+\|\hat{g}_{\varepsilon}-g_{\varepsilon}\|_{L^{2}(\hat{\nu})}^{2}\big{]}\Big{)}+\frac{1}{n}.\]
Combining this inequality with Lemma 24 from section 6.2 and taking the free parameter \(a\) sufficiently small, we can re-arrange and arrive at Lemma 27. Plugging our bounds on the dual potentials into Lemma 28 then yields Theorem 12. Let's give these proofs first, before we turn to the proofs of Lemma 28 and Lemma 29.
Proof of Lemma 27.: By Lemma 29 and Lemma 24, for all \(a>0\),
\[\mathbb{E}[\|\hat{g}_{\varepsilon}-g_{\varepsilon}\|_{L^{2}(\hat{\nu})}^{2}] \lesssim\frac{1}{\varepsilon}\Big{(}\frac{L}{\varepsilon}\Big{)}^{d_{\nu}+2} \Big{\{}\frac{1}{an}\Big{(}\frac{L}{\varepsilon}\Big{)}^{d_{\nu}}+a\Big{(} \frac{L}{\varepsilon}\Big{)}^{3d_{\nu}}\big{(}\mathbb{E}[\|\hat{g}_{ \varepsilon}-g_{\varepsilon}\|_{L^{2}(\hat{\nu})}^{2}]+\frac{1+\varepsilon^{2 }}{n}\big{)}\Big{\}}+\frac{1}{n}.\]
If we take \(a=C\varepsilon\big{(}\frac{\varepsilon}{L}\big{)}^{4d_{\nu}+2}\) for a sufficiently small constant \(C\), we can re-arrange to yield
\[\mathbb{E}[\|\hat{g}_{\varepsilon}-g_{\varepsilon}\|_{L^{2}(\hat{\nu})}^{2}] \lesssim\big{(}1+\varepsilon^{2}+\frac{1}{\varepsilon^{2}}\big{)}\cdot\Big{(} \frac{L}{\varepsilon}\Big{)}^{6d_{\nu}+4}\cdot\frac{1}{n}\lesssim\big{(} \varepsilon^{2}+\frac{1}{\varepsilon^{2}}\big{)}\cdot\Big{(}\frac{L}{ \varepsilon}\Big{)}^{6d_{\nu}+4}\cdot\frac{1}{n}.\]
Proof of Theorem 12.: First notice that \(\mathbb{E}[S_{\varepsilon}(\hat{\mu},\hat{\nu})]-S_{\varepsilon}(\mu,\nu)= \mathbb{E}[\hat{\Phi}_{\varepsilon}(\hat{f}_{\varepsilon},\hat{g}_{ \varepsilon})-\hat{\Phi}_{\varepsilon}(f_{\varepsilon},g_{\varepsilon})]\geqslant 0\), so that
\[|\mathbb{E}[S_{\varepsilon}(\hat{\mu},\hat{\nu})]-S_{\varepsilon}(\mu,\nu)|= \mathbb{E}[\hat{\Phi}_{\varepsilon}(\hat{f}_{\varepsilon},\hat{g}_{ \varepsilon})-\hat{\Phi}_{\varepsilon}(f_{\varepsilon},g_{\varepsilon})].\]
Invoking Lemma 28 with \(a=1\), and applying Lemma 24 plus Lemma 27, we obtain
\[\mathbb{E}[S_{\varepsilon}(\hat{\mu},\hat{\nu})-S_{\varepsilon}(\mu,\nu)]\lesssim\big{(}1+\varepsilon^{2}+\frac{1}{\varepsilon^{2}}\big{)}\cdot\Big{(}\frac{L}{\varepsilon}\Big{)}^{9d_{\nu}+4}\cdot\frac{1}{n}\lesssim\big{(}\varepsilon^{2}+\frac{1}{\varepsilon^{2}}\big{)}\cdot\Big{(}\frac{L}{\varepsilon}\Big{)}^{9d_{\nu}+4}\cdot\frac{1}{n},\]
whence the result.
The proofs of Lemma 28 and Lemma 29 follow.
Proof of Lemma 28.: By Proposition 13, for all \(a>0\),
\[\mathbb{E}[S_{\varepsilon}(\hat{\mu}, \hat{\nu})-S_{\varepsilon}(\mu,\nu)]=\mathbb{E}[\hat{\Phi}_{ \varepsilon}(\hat{f}_{\varepsilon},\hat{g}_{\varepsilon})-\Phi_{\varepsilon}( f_{\varepsilon},g_{\varepsilon})]\] \[=\mathbb{E}[\hat{\Phi}_{\varepsilon}(\hat{f}_{\varepsilon},\hat{g }_{\varepsilon})-\hat{\Phi}_{\varepsilon}(f_{\varepsilon},g_{\varepsilon})]\] \[\leqslant\mathbb{E}[\langle\nabla\hat{\Phi}_{\varepsilon}(f_{ \varepsilon},g_{\varepsilon}),(\hat{f}_{\varepsilon}-f_{\varepsilon},\hat{g}_{ \varepsilon}-g_{\varepsilon})\rangle_{L^{2}(\hat{\mu})\times L^{2}(\hat{\nu})}]\] \[\leqslant\frac{1}{a}\mathbb{E}[\|\nabla\hat{\Phi}_{\varepsilon}( f_{\varepsilon},g_{\varepsilon})\|_{L^{2}(\hat{\mu})\times L^{2}(\hat{\nu})}^{2}]+a \mathbb{E}\big{[}\|\hat{f}_{\varepsilon}-f_{\varepsilon}\|_{L^{2}(\hat{\mu})}^ {2}+\|\hat{g}_{\varepsilon}-g_{\varepsilon}\|_{L^{2}(\hat{\nu})}^{2}\big{]}.\]
By Equation (5.2) and Lemma 16, we have
\[\mathbb{E}[S_{\varepsilon}(\hat{\mu},\hat{\nu})-S_{\varepsilon}(\mu,\nu)] \lesssim\frac{1}{an}\mathcal{N}(N,\frac{\varepsilon}{L})+a\mathbb{E}\big{[} \|\hat{f}_{\varepsilon}-f_{\varepsilon}\|_{L^{2}(\hat{\mu})}^{2}+\|\hat{g}_{ \varepsilon}-g_{\varepsilon}\|_{L^{2}(\hat{\nu})}^{2}\big{]}.\]
Applying Proposition 43 on the covering numbers of \(N\) (proved in Appendix A.3), we may conclude.
Proof of Lemma 29.: We begin by re-centering \(g_{\varepsilon}\) so that we can apply Lemma 23:
\[\mathbb{E}[\|\hat{g}_{\varepsilon}-g_{\varepsilon}\|_{L^{2}(\hat{ \nu})}^{2}] \leqslant\mathbb{E}[\|\hat{g}_{\varepsilon}-(g_{\varepsilon}-\hat{ \nu}(g_{\varepsilon}))\|_{L^{2}(\hat{\nu})}^{2}]+\mathbb{E}[(\hat{\nu}(g_{ \varepsilon}))^{2}]\] \[\lesssim\mathbb{E}[\|\hat{g}_{\varepsilon}-(g_{\varepsilon}-\hat{ \nu}(g_{\varepsilon}))\|_{L^{2}(\hat{\nu})}^{2}]+\frac{1}{n}, \tag{6.6}\]
where the last step follows by our convention that \(\nu(g_{\varepsilon})=0\) and the pointwise control on \(g_{\varepsilon}\) from Proposition 14. Let \(\mathcal{E}\) denote the event described in Lemma 23. Applying Proposition 14 once more we see that
\[\mathbb{E}[\|\hat{g}_{\varepsilon}-(g_{\varepsilon}-\hat{\nu}(g_ {\varepsilon}))\|_{L^{2}(\hat{\nu})}^{2}] =\mathbb{E}[\mathbb{1}[\mathcal{E}]\|\hat{g}_{\varepsilon}-(g_{ \varepsilon}-\hat{\nu}(g_{\varepsilon}))\|_{L^{2}(\hat{\nu})}^{2}]+\mathbb{E} [\mathbb{1}[\mathcal{E}^{c}]]\] \[\lesssim\mathbb{E}[\mathbb{1}[\mathcal{E}]\|\hat{g}_{\varepsilon }-(g_{\varepsilon}-\hat{\nu}(g_{\varepsilon}))\|_{L^{2}(\hat{\nu})}^{2}]+ \frac{1}{n}\] \[\lesssim\mathbb{E}\Big{[}\mathbb{1}[\mathcal{E}]\|\hat{g}_{ \varepsilon}-(g_{\varepsilon}-\hat{\nu}(g_{\varepsilon}))\|_{L^{\infty}(\hat{ \nu})}^{2}\] \[\times\frac{1}{\varepsilon}\Big{(}\frac{L}{\varepsilon}\Big{)}^{ d_{\nu}+2}\big{(}\Psi_{\varepsilon}^{\hat{\mu}\hat{\nu}}(\hat{g}_{\varepsilon})- \Psi_{\varepsilon}^{\hat{\mu}\hat{\nu}}(g_{\varepsilon})\big{)}\Big{]}+\frac{ 1}{n}.\]
Using \(\mathbb{1}[\mathcal{E}]\leqslant 1\) and Proposition 14 once more, we find
\[\mathbb{E}[\|\hat{g}_{\varepsilon}-(g_{\varepsilon}-\hat{\nu}(g_{\varepsilon} ))\|_{L^{2}(\hat{\nu})}^{2}]\lesssim\frac{1}{\varepsilon}\Big{(}\frac{L}{ \varepsilon}\Big{)}^{d_{\nu}+2}\mathbb{E}\big{[}\Psi_{\varepsilon}^{\hat{\mu} \hat{\nu}}(\hat{g}_{\varepsilon})-\Psi_{\varepsilon}^{\hat{\mu}\hat{\nu}}(g_{ \varepsilon})\big{]}.\]
By Proposition 13 on marginal rounding
\[\Psi_{\varepsilon}^{\hat{\mu}\hat{\nu}}(g_{\varepsilon})\geqslant\hat{\Phi}_ {\varepsilon}(f_{\varepsilon},g_{\varepsilon}).\]
Hence
\[\mathbb{E}[\|\hat{g}_{\varepsilon}-(g_{\varepsilon}-\hat{\nu}(g_{\varepsilon} ))\|_{L^{2}(\hat{\nu})}^{2}]\lesssim\frac{1}{\varepsilon}\Big{(}\frac{L}{ \varepsilon}\Big{)}^{d_{\nu}+2}\mathbb{E}\big{[}\hat{\Phi}(\hat{f}_{ \varepsilon},\hat{g}_{\varepsilon})-\hat{\Phi}_{\varepsilon}(f_{\varepsilon},g_{\varepsilon})\big{]}.\]
Since \(\mathbb{E}[\hat{\Phi}_{\varepsilon}(f_{\varepsilon},g_{\varepsilon})]=\Phi_{ \varepsilon}(f_{\varepsilon},g_{\varepsilon})\), the result follows from Lemma 28.
### Fast rates for \(g\)-dual potentials in population norms
In this section, we establish the remaining part of Theorem 9 by proving the following Lemma.
**Lemma 30** (Fast population norm convergence of \(g\)-dual potentials).: _We have_
\[\mathbb{E}[\|\hat{g}_{\varepsilon}-g_{\varepsilon}\|_{L^{2}(\nu)}^{2}] \lesssim\big{(}\varepsilon^{2}+\frac{1}{\varepsilon^{2}}\big{)}\cdot\Big{(} \frac{L}{\varepsilon}\Big{)}^{13d_{\nu}+8}\cdot\frac{1}{n}.\]
Lemma 30 is proved with another self-bounding argument that we outline here, ignoring \(\varepsilon/L\) dependence for clarity. Note that we will frequently use the canonical extension of \(\hat{g}_{\varepsilon}\) to consider \((\hat{f}_{\varepsilon},\hat{g}_{\varepsilon})\) as a dual variable pair for the semi-empirical problem between \(\hat{\mu}\) and \(\nu\). In particular, notation such as \(\nu(\hat{g}_{\varepsilon})\) is not referring to integration over any part of the sample \(y_{1},\ldots,y_{n}\sim\nu^{\otimes n}\), but rather over an _independent_ draw from \(\nu\).
1. Lemma 31: Use a Poincare inequality on \(\nu\) (which follows from our assumptions and is described in Proposition 40 in Appendix A.3) to control \(\|\hat{g}_{\varepsilon}-g_{\varepsilon}-\nu(\hat{g}_{\varepsilon})\|_{L^{2}( \nu)}^{2}\) in terms of entropic OT maps defined on \(N\)
2. Lemma 32: Use the same analysis as in the proof of the empirical norm convergence of the entropic OT maps (section 5.3) to control the previous term in terms of \(1/n\), the semi-empirical dual objective \(\Phi_{\varepsilon}^{\hat{\mu}\nu}\), and \(\|\hat{g}_{\varepsilon}-g_{\varepsilon}\|_{L^{2}(\nu)}^{2}\), with a free parameter from Young's inequality.
3. Lemma 33: Control the semi-empirical objective term with concavity to yield a bound in terms of \(1/n\) and \(\|\hat{g}_{\varepsilon}-g_{\varepsilon}\|_{L^{2}(\nu)}^{2}\), with a free parameter from Young's inequality.
4. Lemma 34: Bound the shift \(\mathbb{E}[\nu(\hat{g}_{\varepsilon})^{2}]\) by \(1/n\) using the semi-discrete objective once more
5. Use the free parameters from Young's inequality to conclude a \(1/n\) bound on \(\|\hat{g}_{\varepsilon}-g_{\varepsilon}\|_{L^{2}(\nu)}^{2}\).
The formal statements of the lemmas outlined above are given below.
**Lemma 31**.: _We have_
\[\mathbb{E}[\|\hat{g}_{\varepsilon}-g_{\varepsilon}-\nu(\hat{g}_{\varepsilon} )\|_{L^{2}(\nu)}^{2}]\lesssim\mathbb{E}\Big{[}\Big{\|}\mathbb{E}_{\pi_{ \varepsilon}}[\nabla_{y}c(x,y)\,|\,y]-\frac{1}{n}\sum_{i=1}^{n}\nabla_{y}c(x_{ i},y)\hat{p}_{\varepsilon}(x_{i},y)\Big{\|}_{L^{2}(\nu)}^{2}\Big{]}.\]
**Lemma 32**.: _For all \(a>0\), we have_
\[\mathbb{E}\Big{[}\Big{\|}\mathbb{E}_{\pi_{\varepsilon}}[\nabla_{y}c(x,y)\,|\,y]-\frac{1}{n}\sum_{i=1}^{n}\nabla_{y}c(x_{i},y)\hat{p}_{\varepsilon}(x_{i},y)\Big{\|}_{L^{2}(\nu)}^{2}\Big{]}\lesssim\frac{L^{2}}{\varepsilon}\mathbb{E}[\Phi_{\varepsilon}^{\hat{\mu}\nu}(\hat{f}_{\varepsilon},\hat{g}_{\varepsilon})-\Phi_{\varepsilon}^{\hat{\mu}\nu}(f_{\varepsilon},g_{\varepsilon})]+\frac{a}{\varepsilon}\Big{(}\frac{L}{\varepsilon}\Big{)}^{2d_{\nu}+2}\mathbb{E}[\|\hat{g}_{\varepsilon}-g_{\varepsilon}\|_{L^{2}(\nu)}^{2}]\]
\[\qquad\qquad+\Big{(}\frac{a}{\varepsilon}\Big{(}\frac{L}{\varepsilon}\Big{)}^{2d_{\nu}+2}+\frac{L^{2}}{a\varepsilon}\Big{)}\big{(}\varepsilon^{2}+\frac{1}{\varepsilon^{2}}\big{)}\Big{(}\frac{L}{\varepsilon}\Big{)}^{9d_{\nu}+4}\cdot\frac{1}{n}+\Big{(}\frac{aL^{2}}{\varepsilon}+L^{2}\Big{(}\frac{L}{\varepsilon}\Big{)}^{d_{\nu}}\Big{)}\cdot\frac{1}{n}.\]

**Lemma 33**.: _For all \(a>0\), we have_

\[\mathbb{E}[\Phi_{\varepsilon}^{\hat{\mu}\nu}(\hat{f}_{\varepsilon},\hat{g}_{\varepsilon})-\Phi_{\varepsilon}^{\hat{\mu}\nu}(f_{\varepsilon},g_{\varepsilon})]\lesssim\frac{1}{an}\Big{(}\frac{L}{\varepsilon}\Big{)}^{d_{\nu}}+a\mathbb{E}[\|\hat{g}_{\varepsilon}-g_{\varepsilon}\|_{L^{2}(\nu)}^{2}].\]

**Lemma 34**.: _We have_

\[\mathbb{E}[\nu(\hat{g}_{\varepsilon})^{2}]\lesssim\big{(}\varepsilon^{2}+\frac{1}{\varepsilon^{2}}\big{)}\Big{(}\frac{L}{\varepsilon}\Big{)}^{11d_{\nu}+4}\cdot\frac{1}{n}.\]

Proof of Lemma 30.: Starting from

\[\mathbb{E}[\|\hat{g}_{\varepsilon}-g_{\varepsilon}\|_{L^{2}(\nu)}^{2}]\lesssim\mathbb{E}[\|\hat{g}_{\varepsilon}-g_{\varepsilon}-\nu(\hat{g}_{\varepsilon})\|_{L^{2}(\nu)}^{2}]+\mathbb{E}[\nu(\hat{g}_{\varepsilon})^{2}],\]

we apply Lemma 31, then Lemma 32 with parameter \(a^{\prime}\), then Lemma 33 with parameter \(a\), and finally Lemma 34, obtaining a bound on \(\mathbb{E}[\|\hat{g}_{\varepsilon}-g_{\varepsilon}\|_{L^{2}(\nu)}^{2}]\) in terms of itself and \(1/n\).
Taking \(a=C\varepsilon/L^{2}\) and \(a^{\prime}=C\varepsilon(\varepsilon/L)^{2d_{\nu}+2}\) for \(C\) a sufficiently small constant and rearranging yields
\[\mathbb{E}[\|\hat{g}_{\varepsilon}-g_{\varepsilon}\|_{L^{2}(\nu)}^{2}]\lesssim \Big{\{}L^{2}\Big{(}\frac{L}{\varepsilon}\Big{)}^{d_{\nu}+2}+\big{(} \varepsilon^{2}+\frac{1}{\varepsilon^{2}}\big{)}\Big{(}\frac{L}{\varepsilon }\Big{)}^{13d_{\nu}+8}+\big{(}\varepsilon^{2}+\frac{1}{\varepsilon^{2}}\big{)} \Big{(}\frac{L}{\varepsilon}\Big{)}^{11d_{\nu}+4}\Big{\}}\cdot\frac{1}{n}.\]
Using \(\varepsilon/L\lesssim 1\) yields the result.
Proof of Lemma 31.: Note that the term on the LHS is precisely \(\operatorname{Var}_{\nu}(\hat{g}_{\varepsilon}-g_{\varepsilon})\) since we have specified \(g_{\varepsilon}\) to be such that \(\nu(g_{\varepsilon})=0\). We will thus apply the Poincare inequality on \(\nu\) from Proposition 40 to bound this term; this is legitimate because \(\hat{g}_{\varepsilon}\) and \(g_{\varepsilon}\) are each Lipschitz with respect to the ambient Euclidean norm and so are Lipschitz with respect to \((N,h)\), since embedded manifold distances are always at least as large as extrinsic distances. We obtain
\[\|\hat{g}_{\varepsilon}-g_{\varepsilon}-\nu(\hat{g}_{\varepsilon})\|_{L^{2}( \nu)}^{2}\lesssim\int\|\nabla_{N}(\hat{g}_{\varepsilon}-g_{\varepsilon})\|_{ h}^{2}\mathrm{d}\nu(y).\]
Since \(N\) is an embedded manifold, the manifold norm of the manifold gradient is always upper bounded by the Euclidean norm of the Euclidean gradient (for the reader's convenience, this fact is formally stated and proved as Proposition 41 in Appendix A.3) so that
\[\|\hat{g}_{\varepsilon}-g_{\varepsilon}-\nu(\hat{g}_{\varepsilon})\|_{L^{2}( \nu)}^{2}\lesssim\|\nabla\hat{g}_{\varepsilon}-\nabla g_{\varepsilon}\|_{L^{2 }(\nu)}^{2}.\]
To calculate \(\nabla g_{\varepsilon}\), consider the marginal constraint
\[e^{-\frac{1}{\varepsilon}g_{\varepsilon}(y)}=\int e^{-\frac{1}{\varepsilon}( c(x,y)-f_{\varepsilon}(x))}\mathrm{d}\mu(x).\]
Differentiating and re-arranging, we find that
\[\nabla g_{\varepsilon}(y)=\mathbb{E}_{\pi_{\varepsilon}}[\nabla_{y}c(x,y)\,| \,y].\]
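In detail: differentiating both sides of the marginal constraint in \(y\) gives

\[-\frac{1}{\varepsilon}\nabla g_{\varepsilon}(y)\,e^{-\frac{1}{\varepsilon}g_{\varepsilon}(y)}=-\frac{1}{\varepsilon}\int\nabla_{y}c(x,y)\,e^{-\frac{1}{\varepsilon}(c(x,y)-f_{\varepsilon}(x))}\mathrm{d}\mu(x),\]

and dividing through by \(-\frac{1}{\varepsilon}e^{-\frac{1}{\varepsilon}g_{\varepsilon}(y)}\) and recalling \(p_{\varepsilon}(x,y)=e^{-\frac{1}{\varepsilon}(c(x,y)-f_{\varepsilon}(x)-g_{\varepsilon}(y))}\) yields \(\nabla g_{\varepsilon}(y)=\int\nabla_{y}c(x,y)\,p_{\varepsilon}(x,y)\,\mathrm{d}\mu(x)\), which is the conditional expectation above.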
Calculating in a similar fashion with the extended dual potential \(\hat{g}_{\varepsilon}\) we find
\[\nabla\hat{g}_{\varepsilon}(y)=\mathbb{E}_{\hat{\pi}_{\varepsilon}}[\nabla_{ y}c(x,y)\,|\,y]=\frac{1}{n}\sum_{i=1}^{n}\nabla_{y}c(x_{i},y)\hat{p}_{ \varepsilon}(x_{i},y).\]
The result follows.
Proof of Lemma 32.: For all \(y\) in \(\operatorname{supp}(\nu)\), put
\[\tilde{g}_{\varepsilon}(y):=-\varepsilon\log\Big{(}\frac{1}{n}\sum_{i=1}^{n}e ^{-\varepsilon^{-1}(c(x_{i},y)-f_{\varepsilon}(x_{i}))}\Big{)},\]
and let
\[\tilde{p}_{\varepsilon}(x,y):=e^{-\varepsilon^{-1}(c(x,y)-f_{\varepsilon}(x)-\tilde{g}_{\varepsilon}(y))}.\]
Then by Young's inequality,
\[\mathbb{E}\Big{[}\Big{\|}\mathbb{E}_{\pi_{\varepsilon}}[\nabla_{y}c(x, y)\,|\,y]-\frac{1}{n}\sum_{i=1}^{n}\nabla_{y}c(x_{i},y)\hat{p}_{\varepsilon}(x_{i},y) \Big{\|}_{L^{2}(\nu)}^{2}\Big{]}\lesssim\] \[\quad\quad\mathbb{E}\Big{[}\Big{\|}\mathbb{E}_{\pi_{\varepsilon}}[ \nabla_{y}c(x,y)\,|\,y]-\frac{1}{n}\sum_{i=1}^{n}\nabla_{y}c(x_{i},y)p_{ \varepsilon}(x_{i},y)\Big{\|}_{L^{2}(\nu)}^{2}\Big{]}\] \[\quad\quad+\mathbb{E}\Big{[}\Big{\|}\frac{1}{n}\sum_{i=1}^{n} \nabla_{y}c(x_{i},y)p_{\varepsilon}(x_{i},y)-\frac{1}{n}\sum_{i=1}^{n}\nabla_ {y}c(x_{i},y)\tilde{p}_{\varepsilon}(x_{i},y)\Big{\|}_{L^{2}(\nu)}^{2}\Big{]}\] \[\quad\quad+\mathbb{E}\Big{[}\Big{\|}\frac{1}{n}\sum_{i=1}^{n} \nabla_{y}c(x_{i},y)\tilde{p}_{\varepsilon}(x_{i},y)-\frac{1}{n}\sum_{i=1}^{n} \nabla_{y}c(x_{i},y)\hat{p}_{\varepsilon}(x_{i},y)\Big{\|}_{L^{2}(\nu)}^{2} \Big{]}.\]
Using the same argument as in the proof of Lemma 20 just with the role of \(\mu\) and \(\nu\) swapped, the first two terms can each be bounded as \(\lesssim L^{2}\|p_{\varepsilon}\|_{L^{2}(\mu\otimes\nu)}^{2}/n\). By Lemma 16 and Proposition 43, we can bound \(\|p_{\varepsilon}\|_{L^{2}(\mu\otimes\nu)}^{2}\lesssim(L/\varepsilon)^{d_{\nu}}\) and so arrive at
\[\mathbb{E}\Big{[}\Big{\|}\mathbb{E}_{\pi_{\varepsilon}}[\nabla_{ y}c(x,y)\,|\,y]-\frac{1}{n}\sum_{i=1}^{n}\nabla_{y}c(x_{i},y)\hat{p}_{ \varepsilon}(x_{i},y)\Big{\|}_{L^{2}(\nu)}^{2}\Big{]}\lesssim\] \[\quad\quad\mathbb{E}\Big{[}\Big{\|}\frac{1}{n}\sum_{i=1}^{n} \nabla_{y}c(x_{i},y)\tilde{p}_{\varepsilon}(x_{i},y)-\frac{1}{n}\sum_{i=1}^{n }\nabla_{y}c(x_{i},y)\hat{p}_{\varepsilon}(x_{i},y)\Big{\|}_{L^{2}(\nu)}^{2} \Big{]}+L^{2}\Big{(}\frac{L}{\varepsilon}\Big{)}^{d_{\nu}}\cdot\frac{1}{n}.\]
We proceed as in the proof of Lemma 19, by first applying triangle inequality and then Pinsker's. This yields
\[\mathbb{E}\Big{[}\Big{\|}\frac{1}{n}\sum_{i=1}^{n}\nabla_{y}c(x_{i},y)\tilde{p}_{\varepsilon}(x_{i},y)-\frac{1}{n}\sum_{i=1}^{n}\nabla_{y}c(x_{i},y)\hat{p}_{\varepsilon}(x_{i},y)\Big{\|}_{L^{2}(\nu)}^{2}\Big{]}\]
\[\quad\quad\leqslant L^{2}\mathbb{E}\Big{[}\Big{\|}\frac{1}{n}\sum_{i=1}^{n}|\hat{p}_{\varepsilon}(x_{i},y)-\tilde{p}_{\varepsilon}(x_{i},y)|\Big{\|}_{L^{2}(\nu)}^{2}\Big{]}\]
\[\quad\quad\lesssim L^{2}\mathbb{E}\Big{[}\int\mathrm{KL}(\frac{1}{n}\hat{p}_{\varepsilon}(\cdot,y)\,\|\,\frac{1}{n}\tilde{p}_{\varepsilon}(\cdot,y))\mathrm{d}\nu(y)\Big{]}\]
\[\quad\quad=\frac{L^{2}}{\varepsilon}\mathbb{E}\Big{[}\int\Big{(}\frac{1}{n}\sum_{i=1}^{n}\hat{p}_{\varepsilon}(x_{i},y)(\hat{f}_{\varepsilon}(x_{i})-f_{\varepsilon}(x_{i})+\hat{g}_{\varepsilon}(y)-\tilde{g}_{\varepsilon}(y))\Big{)}\mathrm{d}\nu(y)\Big{]}\]
\[\quad\quad=\frac{L^{2}}{\varepsilon}\mathbb{E}\Big{[}\Phi_{\varepsilon}^{\hat{\mu}\nu}(\hat{f}_{\varepsilon},\hat{g}_{\varepsilon})-\Phi_{\varepsilon}^{\hat{\mu}\nu}(f_{\varepsilon},\tilde{g}_{\varepsilon})+\Big{\langle}\int\hat{p}_{\varepsilon}(\cdot,y)\mathrm{d}\nu(y)-1,\hat{f}_{\varepsilon}-f_{\varepsilon}\Big{\rangle}_{L^{2}(\hat{\mu})}\Big{]}\]
\[\quad\quad\leqslant\frac{L^{2}}{\varepsilon}\mathbb{E}\Big{[}\Phi_{\varepsilon}^{\hat{\mu}\nu}(\hat{f}_{\varepsilon},\hat{g}_{\varepsilon})-\Phi_{\varepsilon}^{\hat{\mu}\nu}(f_{\varepsilon},g_{\varepsilon})+\Big{\langle}\int\hat{p}_{\varepsilon}(\cdot,y)\mathrm{d}\nu(y)-1,\hat{f}_{\varepsilon}-f_{\varepsilon}\Big{\rangle}_{L^{2}(\hat{\mu})}\Big{]},\]
where the second equality follows because \((\hat{\mu}\otimes\nu)(\hat{p}_{\varepsilon})=(\hat{\mu}\otimes\nu)(\tilde{p}_{ \varepsilon})=1\), and the final inequality is via Proposition 13. To conclude the argument, we apply Young's inequality
to the inner product term to obtain, for all \(a>0\),
\[\mathbb{E}\Big{[}\Big{\langle}\int\hat{p}_{\varepsilon}(\cdot,y)\mathrm{d}\nu(y)-1,\hat{f}_{\varepsilon}-f_{\varepsilon}\Big{\rangle}_{L^{2}(\hat{\mu})}\Big{]}\leqslant a\mathbb{E}\Big{[}\Big{\|}\int(\hat{p}_{\varepsilon}(\cdot,y)-1)\mathrm{d}\nu(y)\Big{\|}_{L^{2}(\hat{\mu})}^{2}\Big{]}+\frac{1}{a}\mathbb{E}[\|\hat{f}_{\varepsilon}-f_{\varepsilon}\|_{L^{2}(\hat{\mu})}^{2}]\]
\[\quad\quad=a\mathbb{E}\Big{[}\Big{\|}\int(\hat{p}_{\varepsilon}(\cdot,y)-p_{\varepsilon}(\cdot,y))\mathrm{d}\nu(y)\Big{\|}_{L^{2}(\hat{\mu})}^{2}\Big{]}+\frac{1}{a}\mathbb{E}[\|\hat{f}_{\varepsilon}-f_{\varepsilon}\|_{L^{2}(\hat{\mu})}^{2}]\]
\[\quad\quad\leqslant a\mathbb{E}[\|\hat{p}_{\varepsilon}-p_{\varepsilon}\|_{L^{2}(\hat{\mu}\otimes\nu)}^{2}]+\frac{1}{a}\mathbb{E}[\|\hat{f}_{\varepsilon}-f_{\varepsilon}\|_{L^{2}(\hat{\mu})}^{2}].\]
By Lemma 35, which is stated and proved in the following section,
\[\mathbb{E}[\|\hat{p}_{\varepsilon}-p_{\varepsilon}\|_{L^{2}(\hat{\mu} \otimes\nu)}^{2}]\lesssim\frac{1}{\varepsilon^{2}}\cdot\Big{(}\frac{L}{ \varepsilon}\Big{)}^{2d_{\nu}}\mathbb{E}[\|\hat{f}_{\varepsilon}-f_{ \varepsilon}\|_{L^{2}(\hat{\mu})}^{2}+\|\hat{g}_{\varepsilon}-g_{\varepsilon} \|_{L^{2}(\nu)}^{2}]+\frac{1}{n}.\]
Applying Lemma 24 and Lemma 27 we have
\[\mathbb{E}[\|\hat{f}_{\varepsilon}-f_{\varepsilon}\|_{L^{2}(\hat{\mu})}^{2}] \lesssim\big{(}\varepsilon^{2}+\frac{1}{\varepsilon^{2}}\big{)}\cdot\Big{(} \frac{L}{\varepsilon}\Big{)}^{9d_{\nu}+4}\cdot\frac{1}{n}.\]
The result follows by collecting terms.
Proof of Lemma 33.: Proposition 13 on concavity of the dual implies that
\[\mathbb{E}[\Phi_{\varepsilon}^{\hat{\mu}\nu}(\hat{f}_{\varepsilon},\hat{g}_{ \varepsilon})-\Phi_{\varepsilon}^{\hat{\mu}\nu}(f_{\varepsilon},g_{ \varepsilon})]\leqslant\mathbb{E}[\langle\nabla\Phi_{\varepsilon}^{\hat{\mu} \nu}(f_{\varepsilon},g_{\varepsilon}),(\hat{f}_{\varepsilon}-f_{\varepsilon},\hat{g}_{\varepsilon}-g_{\varepsilon})\rangle_{L^{2}(\hat{\mu})\times L^{2}( \nu)}].\]
Observe that because \((f_{\varepsilon},g_{\varepsilon})\) attain the \(\nu\)-marginal, the first component of \(\nabla\Phi_{\varepsilon}^{\hat{\mu}\nu}\) vanishes, implying
\[\langle\nabla\Phi_{\varepsilon}^{\hat{\mu}\nu}(f_{\varepsilon},g_{\varepsilon}),(\hat{f}_{\varepsilon}-f_{\varepsilon},\hat{g}_{\varepsilon}-g_{\varepsilon})\rangle_{L^{2}(\hat{\mu})\times L^{2}(\nu)}=\big{\langle}1-\frac{1}{n}\sum_{i=1}^{n}p_{\varepsilon}(x_{i},y),\hat{g}_{\varepsilon}-g_{\varepsilon}\big{\rangle}_{L^{2}(\nu)}.\]
Young's inequality yields, for all \(a>0\),
\[\mathbb{E}[\Phi_{\varepsilon}^{\hat{\mu}\nu}(\hat{f}_{\varepsilon},\hat{g}_{ \varepsilon})-\Phi_{\varepsilon}^{\hat{\mu}\nu}(f_{\varepsilon},g_{ \varepsilon})]\lesssim\frac{1}{a}\mathbb{E}\big{[}\big{\|}1-\frac{1}{n}\sum_ {i=1}^{n}p_{\varepsilon}(x_{i},y)\big{\|}_{L^{2}(\nu)}^{2}\big{]}+a\mathbb{E}[ \|\hat{g}_{\varepsilon}-g_{\varepsilon}\|_{L^{2}(\nu)}^{2}].\]
The first term can be calculated as in (5.2) to yield
\[\mathbb{E}\big{[}\big{\|}1-\frac{1}{n}\sum_{i=1}^{n}p_{\varepsilon}(x_{i},y) \big{\|}_{L^{2}(\nu)}^{2}\big{]}\lesssim\frac{\|p_{\varepsilon}\|_{L^{2}(\mu \otimes\nu)}^{2}}{n}.\]
Finally, applying Lemma 16 and Proposition 43, we obtain the result.
Proof of Lemma 34.: Note that, since \((\hat{\mu}\otimes\nu)(\hat{p}_{\varepsilon})=(\hat{\mu}\otimes\nu)(p_{ \varepsilon})=1\) and we make the normalization assumption \(\nu(g_{\varepsilon})=0\), we have
\[\nu(\hat{g}_{\varepsilon})=\Phi_{\varepsilon}^{\hat{\mu}\nu}(\hat{f}_{ \varepsilon},\hat{g}_{\varepsilon})-\Phi_{\varepsilon}^{\hat{\mu}\nu}(f_{ \varepsilon},g_{\varepsilon})+\hat{\mu}(f_{\varepsilon}-\hat{f}_{\varepsilon }).\]
By Jensen's inequality, we have that
\[|\nu(\hat{g}_{\varepsilon})|\leqslant|\Phi_{\varepsilon}^{\hat{\mu}\nu}(\hat{f}_{ \varepsilon},\hat{g}_{\varepsilon})-\Phi_{\varepsilon}^{\hat{\mu}\nu}(f_{ \varepsilon},g_{\varepsilon})|+\|\hat{f}_{\varepsilon}-f_{\varepsilon}\|_{L^{ 2}(\hat{\mu})}.\]
Observe that the two-sided concavity statements from Proposition 13 imply that
\[|\Phi_{\varepsilon}^{\hat{\mu}\nu}(\hat{f}_{\varepsilon},\hat{g}_ {\varepsilon})-\Phi_{\varepsilon}^{\hat{\mu}\nu}(f_{\varepsilon},g_{ \varepsilon})| \leqslant|\langle\nabla\Phi_{\varepsilon}^{\hat{\mu}\nu}(f_{ \varepsilon},g_{\varepsilon}),(\hat{f}_{\varepsilon}-f_{\varepsilon},\hat{g}_{ \varepsilon}-g_{\varepsilon})\rangle_{L^{2}(\hat{\mu})\times L^{2}(\nu)}|\] \[+|\langle\nabla\Phi_{\varepsilon}^{\hat{\mu}\nu}(\hat{f}_{ \varepsilon},\hat{g}_{\varepsilon}),(\hat{f}_{\varepsilon}-f_{\varepsilon}, \hat{g}_{\varepsilon}-g_{\varepsilon})\rangle_{L^{2}(\hat{\mu})\times L^{2}( \nu)}|.\]
For the first term, Cauchy-Schwarz and Proposition 14 on uniform bounds for the dual potentials imply that
\[|\langle\nabla\Phi_{\varepsilon}^{\hat{\mu}\nu}(f_{\varepsilon},g_{ \varepsilon}),(\hat{f}_{\varepsilon}-f_{\varepsilon},\hat{g}_{\varepsilon}-g _{\varepsilon})\rangle_{L^{2}(\hat{\mu})\times L^{2}(\nu)}| \lesssim\|\nabla\Phi_{\varepsilon}^{\hat{\mu}\nu}(f_{\varepsilon},g_{ \varepsilon})\|_{L^{2}(\hat{\mu})\times L^{2}(\nu)}.\]
For the second term, since \((\hat{f}_{\varepsilon},\hat{g}_{\varepsilon})\) attain the \(\hat{\mu}\) marginal, the second component of \(\nabla\Phi_{\varepsilon}^{\hat{\mu}\nu}(\hat{f}_{\varepsilon},\hat{g}_{ \varepsilon})\) vanishes, so that
\[|\langle\nabla\Phi_{\varepsilon}^{\hat{\mu}\nu}(\hat{f}_{ \varepsilon},\hat{g}_{\varepsilon}),(\hat{f}_{\varepsilon}-f_{\varepsilon}, \hat{g}_{\varepsilon}-g_{\varepsilon})\rangle_{L^{2}(\hat{\mu})\times L^{2}( \nu)}| =\big{|}\langle 1-\int\hat{p}_{\varepsilon}(x,y)\mathrm{d}\nu(y),\hat{f}_{ \varepsilon}-f_{\varepsilon}\rangle_{L^{2}(\hat{\mu})}\big{|}\] \[\leqslant\|\int\hat{p}_{\varepsilon}(x,y)\mathrm{d}\nu(y)-1\|_{L ^{2}(\hat{\mu})}\|\hat{f}_{\varepsilon}-f_{\varepsilon}\|_{L^{2}(\hat{\mu})}\] \[\lesssim\|\hat{p}_{\varepsilon}\|_{L^{\infty}(\hat{\mu}\otimes \nu)}\|\hat{f}_{\varepsilon}-f_{\varepsilon}\|_{L^{2}(\hat{\mu})}.\]
Now, let \(\mathcal{E}\) denote the event described in Lemma 26. Then by the pointwise control given in Proposition 14, we have
\[\mathbb{E}[\|\hat{p}_{\varepsilon}\|_{L^{\infty}(\hat{\mu} \otimes\nu)}^{2}\|\hat{f}_{\varepsilon}-f_{\varepsilon}\|_{L^{2}(\hat{\mu})}^{ 2}] =\mathbb{E}[\mathbb{1}[\mathcal{E}]\|\hat{p}_{\varepsilon}\|_{L^{ \infty}(\hat{\mu}\otimes\nu)}^{2}\|\hat{f}_{\varepsilon}-f_{\varepsilon}\|_{L^ {2}(\hat{\mu})}^{2}]\] \[+\mathbb{E}[\mathbb{1}[\mathcal{E}^{c}]\|\hat{p}_{\varepsilon}\|_ {L^{\infty}(\hat{\mu}\otimes\nu)}^{2}\|\hat{f}_{\varepsilon}-f_{\varepsilon}\|_ {L^{2}(\hat{\mu})}^{2}]\] \[\lesssim\mathbb{E}[\mathbb{1}[\mathcal{E}]\|\hat{p}_{\varepsilon} \|_{L^{\infty}(\hat{\mu}\otimes\nu)}^{2}\|\hat{f}_{\varepsilon}-f_{\varepsilon} \|_{L^{2}(\hat{\mu})}^{2}]+\frac{1}{n}\] \[\lesssim\Big{(}\frac{L}{\varepsilon}\Big{)}^{2d_{\nu}}\mathbb{E }[\mathbb{1}[\mathcal{E}]\|\hat{f}_{\varepsilon}-f_{\varepsilon}\|_{L^{2}( \hat{\mu})}^{2}]+\frac{1}{n}\] \[\leqslant\Big{(}\frac{L}{\varepsilon}\Big{)}^{2d_{\nu}}\mathbb{E }[\|\hat{f}_{\varepsilon}-f_{\varepsilon}\|_{L^{2}(\hat{\mu})}^{2}]+\frac{1}{n}\]
We thus conclude that
\[\mathbb{E}[\nu(\hat{g}_{\varepsilon})^{2}] \lesssim\mathbb{E}\big{[}(\langle\nabla\Phi_{\varepsilon}^{\hat{ \mu}\nu}(f_{\varepsilon},g_{\varepsilon}),(\hat{f}_{\varepsilon}-f_{ \varepsilon},\hat{g}_{\varepsilon}-g_{\varepsilon})\rangle_{L^{2}(\hat{\mu}) \times L^{2}(\nu)})^{2}\] \[+(\langle\nabla\Phi_{\varepsilon}^{\hat{\mu}\nu}(\hat{f}_{ \varepsilon},\hat{g}_{\varepsilon}),(\hat{f}_{\varepsilon}-f_{\varepsilon}, \hat{g}_{\varepsilon}-g_{\varepsilon})\rangle_{L^{2}(\hat{\mu})\times L^{2}( \nu)})^{2}+\|\hat{f}_{\varepsilon}-f_{\varepsilon}\|_{L^{2}(\hat{\mu})}^{2} \big{]}\] \[\lesssim\mathbb{E}\big{[}\|\nabla\Phi_{\varepsilon}^{\hat{\mu}\nu}( f_{\varepsilon},g_{\varepsilon})\|_{L^{2}(\hat{\mu})\times L^{2}(\nu)}^{2}+\Big{(} \frac{L}{\varepsilon}\Big{)}^{2d_{\nu}}\|\hat{f}_{\varepsilon}-f_{\varepsilon} \|_{L^{2}(\hat{\mu})}^{2}\big{]}+\frac{1}{n}.\]
The squared gradient term can be calculated as in (5.2) and then bounded with Lemma 16 and Proposition 43, and the latter bounded with Lemma 24 and Lemma 27, to yield the result.
### Fast rates with MID scaling for maps and densities
In this section, we give the proof of Corollary 10 and Corollary 11. To this end, the following Lemma is convenient.
**Lemma 35**.: _We have_
\[\mathbb{E}[\|\hat{p}_{\varepsilon}-p_{\varepsilon}\|_{L^{2}(\mu\otimes\nu)}^{2} ]\lesssim\frac{1}{\varepsilon^{2}}\cdot\Big{(}\frac{L}{\varepsilon}\Big{)}^{2 d_{\nu}}\mathbb{E}[\|\hat{f}_{\varepsilon}-f_{\varepsilon}\|_{L^{2}(\mu)}^{2}+\| \hat{g}_{\varepsilon}-g_{\varepsilon}\|_{L^{2}(\nu)}^{2}]+\frac{1}{n}.\]
_And we also have the analogous bounds_
\[\mathbb{E}[\|\hat{p}_{\varepsilon}-p_{\varepsilon}\|_{L^{2}(\mu\otimes\hat{ \nu})}^{2}]\lesssim\frac{1}{\varepsilon^{2}}\cdot\Big{(}\frac{L}{\varepsilon} \Big{)}^{2d_{\nu}}\mathbb{E}[\|\hat{f}_{\varepsilon}-f_{\varepsilon}\|_{L^{2}( \mu)}^{2}+\|\hat{g}_{\varepsilon}-g_{\varepsilon}\|_{L^{2}(\hat{\nu})}^{2}]+ \frac{1}{n},\]
_as well as,_
\[\mathbb{E}[\|\hat{p}_{\varepsilon}-p_{\varepsilon}\|_{L^{2}(\hat{\mu}\otimes \nu)}^{2}]\lesssim\frac{1}{\varepsilon^{2}}\cdot\Big{(}\frac{L}{\varepsilon} \Big{)}^{2d_{\nu}}\mathbb{E}[\|\hat{f}_{\varepsilon}-f_{\varepsilon}\|_{L^{2}( \hat{\mu})}^{2}+\|\hat{g}_{\varepsilon}-g_{\varepsilon}\|_{L^{2}(\nu)}^{2}]+ \frac{1}{n}.\]
With this lemma in hand, the proofs of Corollary 10 and Corollary 11 are a simple application of the results from the previous section.
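For orientation, \(\hat{T}_{\varepsilon}(x)=\frac{1}{n}\sum_{j=1}^{n}y_{j}\hat{p}_{\varepsilon}(x,y_{j})\) is the barycentric projection of the empirical entropic plan, as used in the proof below. A minimal sketch (ours) of the plug-in computation, assuming the empirical \(g\) potential has already been computed (e.g. by Sinkhorn iterations) and a squared-Euclidean cost taken only for concreteness, is:

```python
import numpy as np

def entropic_map(x_query, y, g_hat, eps):
    """T_hat(x) = (1/n) * sum_j y_j * p_hat(x, y_j) with
    p_hat(x, y_j) = exp(-(c(x, y_j) - f_hat(x) - g_hat_j) / eps); the f potential is
    recovered implicitly by normalizing each row, and c(x, y) = 0.5 * ||x - y||^2 is
    an illustrative choice only."""
    cost = 0.5 * np.sum((x_query[:, None, :] - y[None, :, :]) ** 2, axis=-1)
    a = -(cost - g_hat[None, :]) / eps
    a -= a.max(axis=1, keepdims=True)
    w = np.exp(a)
    w /= w.sum(axis=1, keepdims=True)   # row i holds (1/n) * p_hat(x_i, y_j)
    return w @ y                        # barycentric projection of the plan
```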
Proof of Corollary 10.: To prove this result, let us first assume that \(0\in\operatorname{supp}(\nu)\) so that for each \(y\in\operatorname{supp}(\nu)\), we have \(\|y\|\leqslant R\). By Young's inequality,
\[\mathbb{E}[\|\hat{T}_{\varepsilon}-T_{\varepsilon}\|_{L^{2}(\mu)}^{2}]\lesssim\mathbb{E}\Big{[}\Big{\|}\hat{T}_{\varepsilon}-\frac{1}{n}\sum_{j=1}^{n}y_{j}p_{\varepsilon}(x,y_{j})\Big{\|}_{L^{2}(\mu)}^{2}\Big{]}+\mathbb{E}\Big{[}\Big{\|}\frac{1}{n}\sum_{j=1}^{n}y_{j}p_{\varepsilon}(x,y_{j})-T_{\varepsilon}\Big{\|}_{L^{2}(\mu)}^{2}\Big{]}.\]
The second term is merely a variance, and can be bounded as
\[\mathbb{E}\Big{[}\Big{\|}\frac{1}{n}\sum_{j=1}^{n}y_{j}p_{ \varepsilon}(x,y_{j})-T_{\varepsilon}\Big{\|}_{L^{2}(\mu)}^{2}\Big{]} =\frac{1}{n}\|yp_{\varepsilon}(x,y)-T_{\varepsilon}(x)\|_{L^{2}( \mu\otimes\nu)}^{2}\] \[\lesssim\frac{R^{2}}{n}\|p_{\varepsilon}\|_{L^{2}(\mu\otimes\nu) }^{2}\lesssim\frac{R^{2}}{n}\Big{(}\frac{L}{\varepsilon}\Big{)}^{d_{\nu}},\]
where the final inequality follows by applying Lemma 16 with Proposition 43. We now focus on the remaining expectation, and apply triangle inequality and Jensen's to yield, for any \(x\in\operatorname{supp}\mu\),
\[\Big{\|}\hat{T}_{\varepsilon}-\frac{1}{n}\sum_{j=1}^{n}y_{j}p_{ \varepsilon}(x,y_{j})\Big{\|}^{2} \lesssim\frac{1}{n}\sum_{j=1}^{n}\|y_{j}\|^{2}(\hat{p}_{ \varepsilon}(x,y_{j})-p_{\varepsilon}(x,y_{j}))^{2}\] \[\lesssim\frac{R^{2}}{n}\sum_{j=1}^{n}(\hat{p}_{\varepsilon}(x,y_{ j})-p_{\varepsilon}(x,y_{j}))^{2}.\]
Taking expectations and applying Lemma 35 yields
\[\mathbb{E}\Big{[}\Big{\|}\hat{T}_{\varepsilon}-\frac{1}{n}\sum_{j=1}^{n}y_{j}p_{\varepsilon}(x,y_{j})\Big{\|}_{L^{2}(\mu)}^{2}\Big{]}\lesssim R^{2}\mathbb{E}[\|\hat{p}_{\varepsilon}-p_{\varepsilon}\|_{L^{2}(\mu\otimes\hat{\nu})}^{2}]\]
\[\lesssim\frac{R^{2}}{\varepsilon^{2}}\Big{(}\frac{L}{\varepsilon}\Big{)}^{2d_{\nu}}\mathbb{E}[\|\hat{f}_{\varepsilon}-f_{\varepsilon}\|_{L^{2}(\mu)}^{2}+\|\hat{g}_{\varepsilon}-g_{\varepsilon}\|_{L^{2}(\hat{\nu})}^{2}]+\frac{R^{2}}{n}.\]
The result follows from Lemma 24 and Lemma 27. For the general case where \(0\) may not be in \(\operatorname{supp}(\nu)\), we can perform the above argument with suitably offset \(y\).
Proof of Corollary 11.: By Lemma 35,
\[\mathbb{E}[\|\hat{p}_{\varepsilon}-p_{\varepsilon}\|_{L^{2}(\mu\otimes\nu)}^{2} ]\lesssim\frac{1}{\varepsilon^{2}}\Big{(}\frac{L}{\varepsilon}\Big{)}^{2d_{ \nu}}\mathbb{E}[\|\hat{f}_{\varepsilon}-f_{\varepsilon}\|_{L^{2}(\mu)}^{2}+\| \hat{g}_{\varepsilon}-g_{\varepsilon}\|_{L^{2}(\nu)}^{2}]+\frac{1}{n}.\]
The result follows by applying Lemma 24 and Lemma 27 to control the \(\|\hat{f}_{\varepsilon}-f_{\varepsilon}\|_{L^{2}(\mu)}^{2}\) term, and Lemma 30 to control the \(\|\hat{g}_{\varepsilon}-g_{\varepsilon}\|_{L^{2}(\nu)}^{2}\) term.
Acknowledgements.The author gratefully acknowledges partial support from NSF awards IIS-1838071 and CCF-2106377, and is indebted to Enric Boix-Adsera, Sinho Chewi, Simone Di Marino, Augusto Gerolin, Dheeraj Nagaraj, Jonathan Niles-Weed, Aram-Alexandre Pooladian, Philippe Rigollet, George Stepaniants, and Stephanie Wu for enlightening conversations. Special thanks go to Sinho Chewi, Aram-Alexandre Pooladian, and Stephanie Wu for helpful comments on earlier drafts.
## Appendix A Background on embedded manifolds and RGGs
### Preliminaries on embedded manifolds
For a comprehensive introduction to Riemannian manifolds, we refer the reader to the books [16, 17]. In this section, we will establish notation while reviewing the definitions of some necessary geometric quantities.
To this end, suppose \((N_{0},h_{0})\) is a compact, connected Riemannian manifold. Recall that, in coordinates, the canonical Riemannian volume has density
\[d\operatorname{vol}_{N_{0}}(y)=\sqrt{\det h_{0}(y)}.\]
Given \(p\in N_{0}\) and \(u,v\in T_{p}N_{0}\), we shall write \(\langle u,v\rangle_{h_{0}}:=h_{0}(u,v)\), and similarly \(\|u\|_{h_{0}}^{2}:=\langle u,u\rangle_{h_{0}}\). Recall that for a point \(p\in N_{0}\), the exponential map \(\exp_{p}:T_{p}N_{0}\to N_{0}\) is defined as \(\exp_{p}(v):=\gamma_{v}(1)\) where \(\gamma_{v}\) is the constant-speed geodesic such that \(\gamma_{v}(0)=p\) and \(\dot{\gamma}_{v}(0)=v\). For a point \(p\in N_{0}\), the injectivity radius \(\operatorname{inj}(p)\) is defined as
\[\operatorname{inj}(p):=\sup\big{\{}R\geqslant 0\ \big{|}\ \exp_{p}\colon B(0,R) \subset T_{p}N_{0}\to N_{0}\text{ is a diffeomorphism}\big{\}}.\]
The injectivity radius of \(N_{0}\) is then defined as
\[\operatorname{inj}(N_{0}):=\inf_{p\in N_{0}}\operatorname{inj}(p).\]
It is an elementary fact that when \(N_{0}\) is compact, \(\operatorname{inj}(N_{0})>0\), see e.g. [17, Lemma 6.16].
The Riemannian curvature tensor \(\operatorname{Rm}\) maps from \(4\)-tuples of smooth vector fields to smooth functions \(\operatorname{Rm}\colon\mathscr{X}(N_{0})^{4}\to C^{\infty}(N_{0})\) and is defined as, for smooth vector fields \(W,X,Y,Z\in\mathscr{X}(N_{0})\),
\[\operatorname{Rm}(X,Y,Z,W):=\langle\nabla_{X}\nabla_{Y}Z-\nabla_{Y}\nabla_{X} Z-\nabla_{[X,Y]}Z,W\rangle_{h_{0}}.\]
Given \(p\in N_{0}\) and \(u,v\in T_{p}N_{0}\) linearly independent, the sectional curvature of \(u,v\) is defined as
\[\sec(u,v):=\frac{\operatorname{Rm}(u,v,v,u)}{\|u\|_{h_{0}}^{2}\|v\|_{h_{0}}^{2}-\langle u,v\rangle_{h_{0}}^{2}}.\]
The sectional curvatures describe the geometry of the embedded surface generated by \(u,v\), and when \(u,v\) range over \(T_{p}N_{0}\), determine the full Riemannian curvature tensor \(\operatorname{Rm}\). It is, again, an elementary fact that when \(N_{0}\) is compact, the sectional curvatures are uniformly bounded in absolute value, see [1, Section 9.3].
Given a smooth manifold \(N_{0}\), we say that \(N_{0}\) is embedded in \(\mathbb{R}^{D}\) if there is a smooth injection \(\iota\colon N_{0}\hookrightarrow\mathbb{R}^{D}\) which is a homeomorphism onto its image and such that for all \(p\in N_{0}\), the differential \(d\iota_{p}\colon T_{p}N_{0}\to T_{\iota(p)}\mathbb{R}^{D}\) has trivial kernel. In such a case, we identify \(N_{0}\) with its image under \(\iota\), and so write \(N_{0}\subseteq\mathbb{R}^{D}\). When \(N_{0}\) is endowed with a Riemannian structure \((N_{0},h_{0})\), we say that \((N_{0},h_{0})\) is an embedded Riemannian manifold in \(\mathbb{R}^{D}\) if \(N_{0}\) is embedded in \(\mathbb{R}^{D}\) and \(h_{0}(u,v)=\langle d\iota_{p}(u),d\iota_{p}(v)\rangle\) for all \(u,v\in T_{p}N_{0}\) and the inner product is the standard Euclidean one.
The main quantitative property of subsets in \(\mathbb{R}^{D}\) we need is called the reach. To define this quantity, suppose \(S\subseteq\mathbb{R}^{D}\), and for all \(x\in\mathbb{R}^{D}\) let \(d(x,S):=\inf_{y\in S}\|x-y\|\). Then \(\operatorname{reach}(S)\) is the supremum over all \(\varepsilon\geqslant 0\) such that for all \(x\in\mathbb{R}^{D}\) for which \(d(x,S)\leqslant\varepsilon\), there is a unique \(y\in S\) so that \(\|x-y\|=d(x,S)\). Once more, in our setting of compact smooth embedded manifolds \(N_{0}\), \(\operatorname{reach}(N_{0})>0\)[20, Prop. 14].
### Spectral gap for the RGG of \(\nu\)
The interface between our results and those relating to embedded Riemannian manifolds in Euclidean spaces is primarily contained in Theorem 36 below, which gives a spectral gap for the random geometric graph (RGG) of \(\nu\), and is used to establish our main technical result in this embedded manifold setting, Lemma 23, which gives a weak form of strong concavity for the empirical dual function.
The RGG of \(\nu\) is defined, for a fixed threshold \(\delta>0\), as the graph on \([n]\) with weights for \(j,k\in[n]\) defined as
\[w_{jk}:=\begin{cases}\frac{C}{n\delta^{d_{\nu}+2}}&\text{ if }\|y_{j}-y_{k}\|<\delta,\\ 0&\text{ else},\end{cases}\]
where \(C=C(d_{\nu})\) is a constant depending only on the intrinsic dimension \(d_{\nu}\). We write \(j\sim k\) when \(\|y_{j}-y_{k}\|<\delta\), and the resulting weighted graph on \([n]\) is written as \(\Gamma\). For \(\alpha\in L^{\infty}(\hat{\nu})\), define the associated Dirichlet form by
\[D(\alpha):=\frac{1}{n}\sum_{j\sim k}w_{jk}(\alpha(y_{k})-\alpha(y_{j}))^{2},\]
where \(C=C(d_{\nu})\) is a constant only depending on the intrinsic dimension \(d_{\nu}\). The un-normalized graph Laplacian on \(\Gamma\) with respect to \(L^{2}(\hat{\nu})\) is then
\[\Delta_{\Gamma}(\alpha)(y_{j}):=\sum_{k\colon k\sim j}w_{jk}(\alpha(y_{k})- \alpha(y_{j})).\]
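To make the construction concrete, the following sketch (ours, and independent of the analysis) builds the weighted RGG for a sample lying on a circle embedded in \(\mathbb{R}^{3}\) and computes the spectral gap of the standard positive semidefinite unnormalized Laplacian matrix, which up to sign is the operator \(\Delta_{\Gamma}\) above; the constant \(C\) and the example geometry are illustrative choices.

```python
import numpy as np

def rgg_spectral_gap(y, delta, d_nu, C=1.0):
    """Weighted random geometric graph on the sample y (shape (n, D)) with threshold
    delta and weights C / (n * delta**(d_nu + 2)) on pairs with ||y_j - y_k|| < delta.
    Returns the second-smallest eigenvalue of the unnormalized graph Laplacian and a
    function evaluating the Dirichlet form D(alpha)."""
    n = y.shape[0]
    dists = np.linalg.norm(y[:, None, :] - y[None, :, :], axis=-1)
    W = np.where(dists < delta, C / (n * delta ** (d_nu + 2)), 0.0)
    np.fill_diagonal(W, 0.0)
    L = np.diag(W.sum(axis=1)) - W          # (L alpha)_j = sum_k w_jk (alpha_j - alpha_k)
    lam2 = np.linalg.eigvalsh(L)[1]

    def dirichlet_form(alpha):
        # D(alpha) = (1/n) * sum over ordered pairs of w_jk * (alpha_j - alpha_k)^2
        return np.sum(W * (alpha[:, None] - alpha[None, :]) ** 2) / n

    return lam2, dirichlet_form

# Example: a sample on a circle embedded in R^3 (intrinsic dimension d_nu = 1).
rng = np.random.default_rng(0)
theta = rng.uniform(0.0, 2.0 * np.pi, size=500)
y = np.stack([np.cos(theta), np.sin(theta), np.zeros_like(theta)], axis=1)
lam2, D = rgg_spectral_gap(y, delta=0.2, d_nu=1)
alpha = np.sin(theta)
print(lam2, D(alpha), np.var(alpha))   # D(alpha) should dominate a multiple of Var(alpha)
```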
We seek a spectral gap for \(\Delta_{\Gamma}\), meaning that its second eigenvalue is bounded away from \(0\). To sketch how this is possible, note that under our assumptions on \(N\) and \(\nu\), such a result holds for the continuous analog of \(\Delta_{\Gamma}\). By employing recent work on the convergence of the spectrum of RGGs to their continuous analogs, we can transfer a continuous spectral gap to a (random) discrete one [10]. The result we ultimately obtain is as follows.
**Theorem 36** (Spectral gap for the RGG of \(\nu\)).: _Suppose \(\delta\lesssim 1\) and \(n\) is large enough that \(\delta^{d_{\nu}}\gtrsim(\log n)/n\). Then with probability at least \(1-C/n\), \(\lambda_{2}(\Delta_{\Gamma})\gtrsim 1\). In this case, we have the inequality_
\[D(\alpha)\gtrsim\hat{\nu}((\alpha-\hat{\nu}(\alpha))^{2})\qquad\forall\alpha \in L^{\infty}(\hat{\nu}).\]
To derive this theorem from the results of [10], we first introduce the continuous analog of the un-normalized graph Laplacian and subsequently discuss its spectrum.
A certain weighted Laplacian. For a thorough introduction to weighted Riemannian Laplacians we refer the reader to the book [11]. Suppose \(\nu\) has density \(p\colon N\to\mathbb{R}_{\geqslant 0}\) with respect to \(\operatorname{vol}_{N}\). Consider the weighted Laplacian operator
\[\Delta_{\nu}(\cdot):=-\frac{1}{p}\operatorname{div}(p^{2}\nabla\cdot),\]
where \(\operatorname{div}\) is the Riemannian divergence on \(N\). In fact, this weighted Laplacian is the correct continuous analog of the un-normalized graph Laplacian \(\Delta_{\Gamma}\)[1, 1], and we will thus consider its spectrum. Because \(p\) is bounded away from \(0\) and \(N\) is assumed to be compact, \(\Delta_{\nu}\) has a discrete, non-negative spectrum, written \(0=\lambda_{1}\leqslant\lambda_{2}\leqslant\cdots\) (with multiplicity), and the minimax principle holds [10, Section 1.4]. Since \(N\) is connected and \(p\) is bounded above and away from \(0\), it follows that \(\lambda_{2}>0\). Summarizing, we have the following Proposition.
**Proposition 37**.: _The operator \(\Delta_{\nu}\) has a discrete, non-negative spectrum \(\{\lambda_{k}\}_{k\geqslant 1}\), and \(\lambda_{1}=0\), while \(\lambda_{2}>0\)._
We will simply treat \(\lambda_{2}\) as a constant depending on \(N\) and \(\nu\), but we remark that precise quantitative control on \(\lambda_{2}\) is intimately related to the Ricci curvature of the manifold \(N\)[14, Chapter 5].
Spectral convergence of the RGG of \(\nu\). The results of [10] quantify the convergence of the spectrum of \(\Delta_{\Gamma}\) to that of \(\Delta_{\nu}\). For our purposes, it is enough to consider their result only in terms of lower bound control of the second eigenvalue of \(\Delta_{\Gamma}\), namely \(\lambda_{2}(\Delta_{\Gamma})\), in terms of the second eigenvalue of \(\Delta_{\nu}\), namely \(\lambda_{2}\), but we emphasize that their results are far stronger than we represent here. To make the geometric constants involved in this bound explicit, let \(i_{0}:=\operatorname{inj}(N)\) denote the injectivity radius of \(N\), \(K\) be a uniform upper bound on the absolute value of the sectional curvatures of \(N\), and \(r:=\operatorname{reach}(N)\) the reach (these quantities are defined in the previous section). Since \(N\) is compact, \(i_{0},r>0\) and \(K<\infty\).
**Theorem 38** (Theorem 4 (adapted) [10]).: _Suppose \(\delta>0\) is such that_
\[\delta<\min\big{\{}1,\frac{i_{0}}{10},\frac{1}{\sqrt{d_{\nu}K}},\frac{r}{ \sqrt{27d_{\nu}}}\big{\}}.\]
_Suppose \(W_{\infty}(\hat{\nu},\nu)(d_{\nu}+5)<\delta\). Then, if_
\[\sqrt{\lambda_{2}}\delta<C\]
_we have_
\[\lambda_{2}(\Delta_{\Gamma})\geqslant\lambda_{2}(1-C(\delta+\frac{1}{\delta}W_{ \infty}(\hat{\nu},\nu)+\sqrt{\lambda_{2}}\delta+K\delta^{2})),\]
_where the constants \(C\) only depend on \(d_{\nu}\) and uniform bounds on \(p\) and its Lipschitz constant._
This result combined with Proposition 37 shows that, so long as \(\delta\) is sufficiently small relative to constants depending on the geometry of \(N\), the intrinsic dimension \(d_{\nu}\), and uniform bounds on \(p\) and its Lipschitz constant, and \(W_{\infty}(\hat{\nu},\nu)\lesssim\delta\), we have \(\lambda_{2}(\Delta_{\Gamma})\gtrsim 1\).
To make this bound usable, we also need to get control on \(W_{\infty}(\hat{\nu},\nu)\).
**Theorem 39** (Theorem 2 (adapted) [10]).: _With probability at least \(1-C/n\),_
\[W_{\infty}(\hat{\nu},\nu)\lesssim\Big{(}\frac{\log n}{n}\Big{)}^{1/d_{\nu}},\]
_where the constants only depend on \(d_{\nu}\), upper and lower bounds on \(p\) and its Lipschitz constant, and the injectivity radius of \(N\), uniform bound on the absolute value of the sectional curvature of \(N\), and the reach of \(N\)._
Combining Proposition 37, Theorem 38, and Theorem 39, we obtain Theorem 36.
Scale of our results. The statement of Theorem 38 allows us to describe the required size of \(\delta\) in Theorem 36, and hence \(\varepsilon/L\) in our results from section 3.3, in terms of the injectivity radius \(i_{0}\), uniform bound \(K\) on the absolute value of the sectional curvature, reach \(r\) (these geometric quantities are described in detail in section A.1) and second eigenvalue \(\lambda_{2}\), as
\[\frac{\varepsilon}{L}<\min\Big{\{}1,\frac{i_{0}}{10},\frac{1}{\sqrt{d_{\nu}K} },\frac{r}{\sqrt{27d_{\nu}}},\frac{C}{\sqrt{\lambda_{2}}}\Big{\}},\] (A.1)
where \(C\) is a constant depending on \(d_{\nu}\), uniform upper and lower bounds on \(p\) and its Lipschitz constant. In fact, we will use two more small-scale facts about the embedded manifold \(N\) beyond Theorem 38, namely Proposition 42 and Proposition 43 below, but this bound is enough to permit their use.
### Additional facts about embedded manifolds
We also use Assumptions 2 and 3 to derive a Poincare inequality for \(\nu\) in Proposition 40, as well as to give convenient bounds on the covering numbers \(\mathcal{N}(\nu,\cdot)\) in Proposition 43.
Similar reasoning as in the previous section implies that the weighted Laplacian \(\Delta(\cdot)=-\frac{1}{p}\operatorname{div}(p\nabla\cdot)\) sports a spectral gap, and entails the following result.
**Proposition 40**.: _For all locally Lipschitz functions \(\zeta\colon N\to\mathbb{R}\) we have_
\[\operatorname{Var}_{\nu}(\zeta)\lesssim\int\|\nabla_{N}\zeta(y)\|_{h}^{2} \mathrm{d}\nu(y).\]
The following Proposition is an elementary consequence of the fact that \(N\) is an embedded manifold of \(\mathbb{R}^{d}\).
**Proposition 41**.: _Let \(\zeta\colon N\to\mathbb{R}\) and \(p\in N\). Suppose \(\zeta\) has a \(\nabla_{N}\) gradient at \(p\), and admits a local extension \(\bar{\zeta}\colon U\to\mathbb{R}\) on some neighborhood \(U\subset\mathbb{R}^{d}\) of \(p\) such that \(\bar{\zeta}\) has a Euclidean gradient at \(p\). Then_
\[\|\nabla_{N}\zeta(p)\|_{h}^{2}\leqslant\|\nabla\bar{\zeta}(p)\|^{2}.\]
Proof of Proposition 41.: Let \(\iota\colon N\hookrightarrow\mathbb{R}^{d}\) be the embedding of \(N\) into \(\mathbb{R}^{d}\). Then for all \(v\in T_{p}N\),
\[d\zeta_{p}(v)=\langle\nabla_{N}\zeta(p),v\rangle_{h}=\langle d\iota_{p}(\nabla_{N}\zeta(p)),d\iota_{p}(v)\rangle,\]
and on the other hand,
\[d\zeta_{p}(v)=d\bar{\zeta}_{\iota(p)}\circ d\iota_{p}(v)=\langle\nabla\bar{\zeta}(\iota(p)),d\iota_{p}(v)\rangle.\]
Since this is true for all \(v\in T_{p}N\), it follows that \(d\iota_{p}(\nabla_{N}\zeta(p))\) is the orthogonal projection of \(\nabla\bar{\zeta}(p)\) onto \(\operatorname{im}(d\iota_{p})\subset T_{\iota(p)}\mathbb{R}^{d}\), and so
\[\|\nabla_{N}\zeta(p)\|_{h}^{2}=\|d\iota_{p}(\nabla_{N}\zeta(p))\|^{2}\leqslant\|\nabla\bar{\zeta}(p)\|^{2}.\]
**Proposition 42**.: _Let \(i_{0}:=\operatorname{inj}(N)\) be the injectivity radius of \(N\), \(K\) be a uniform upper bound on the absolute value of the sectional curvatures of \(N\), and \(r:=\operatorname{reach}(N)\). Suppose \(\tau>0\) is such that_
\[\tau\leqslant\min\Big{\{}\frac{1}{\sqrt{K}},i_{0},\frac{r}{2}\Big{\}}.\]
_Then for all \(y\in N\),_
\[\tau^{d_{\nu}}\lesssim\nu(B(y,\tau))\lesssim\tau^{d_{\nu}}.\]
Proof of Proposition 42.: Fix some \(y\in N\). By [1, Proposition 2], for any \(y^{\prime}\in N\) such that \(\|y-y^{\prime}\|\leqslant r/2\),
\[\|y-y^{\prime}\|\leqslant\operatorname{dist}_{N}(y,y^{\prime})\leqslant\|y-y ^{\prime}\|+\frac{8}{r^{2}}\|y-y^{\prime}\|^{3}\]
Hence
\[B_{\operatorname{dist}_{N}}(y,\tau)\subset B_{\|\cdot\|}(y,\tau)\cap N\subset B _{\operatorname{dist}_{N}}\big{(}y,\tau+\frac{8}{r^{2}}\tau^{3}\big{)}.\]
Whence
\[\nu(B(y,\tau))=\nu(B_{\|\cdot\|}(y,\tau))\geqslant\nu(B_{\operatorname{ dist}_{N}}(y,\tau))\gtrsim\operatorname{vol}_{N}(B_{\operatorname{dist}_{N}}(y, \tau)),\]
where the final inequality follows because the density of \(\nu\) with respect to \(\operatorname{vol}_{N}\) is bounded away from \(0\) under Assumptions 2 and 3. By [1, Equation 1.35], when \(\tau\lesssim 1\),
\[\operatorname{vol}_{N}(B_{\operatorname{dist}_{N}}(y,\tau))\gtrsim\tau^{d_{ \nu}},\]
yielding the lower bound. Similarly,
\[\nu(B(y,\tau))=\nu(B_{\|\cdot\|}(y,\tau))\leqslant\nu\big{(}B_{\mathrm{dist}_{N}} \big{(}y,\tau+\frac{8}{r^{2}}\tau^{3}\big{)}\big{)}\lesssim\mathrm{vol}_{N} \left(B_{\mathrm{dist}_{N}}\big{(}y,\tau+\frac{8}{r^{2}}\tau^{3}\big{)}\right),\]
and we can conclude the upper bound with another application of [13, Equation 1.35] where we have the opposite inequality
\[\mathrm{vol}_{N}\left(B_{\mathrm{dist}_{N}}\big{(}y,\tau+\frac{8}{r^{2}}\tau^{3 }\big{)}\right)\lesssim\big{(}\tau+\frac{8}{r^{2}}\tau^{3}\big{)}^{d_{\nu}} \lesssim\tau^{d_{\nu}}.\]
The following control on the covering numbers of \(N\) will also be convenient.
**Proposition 43**.: _Suppose \(\tau\) is as in Proposition 42. Then_
\[\tau^{-d_{\nu}}\lesssim\mathcal{N}(N,\|\cdot\|,\tau)\lesssim\tau^{-d_{\nu}}.\]
Proof of Proposition 43.: For the upper bound, take a maximal \(\tau/2\) packing of \(N\) in the Euclidean norm, \(z_{1},\dots,z_{K}\). Then
\[1\geqslant\sum_{k=1}^{K}\nu(B(z_{k},\tau/2)).\]
For \(\tau\lesssim 1\), by Proposition 42,
\[1\gtrsim K\tau^{d_{\nu}}.\]
Observe that since the packing is maximal, \(K\geqslant\mathcal{N}(N,\|\cdot\|,\tau)\), completing the proof of the upper bound.
For the lower bound, we argue similarly. Take any \(\tau\) cover of \(N\) in the Euclidean norm, \(z_{1},\dots,z_{K}\). Then by Proposition 42
\[1\leqslant\sum_{k=1}^{K}\nu(B(z_{k},\tau))\lesssim K\tau^{d_{\nu}}.\]
Taking a minimal \(\tau\)-covering yields the result.
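To make the scaling in Propositions 42 and 43 concrete, the following small numerical sketch (ours, not part of the paper) estimates Euclidean covering numbers of a sampled 2-sphere with a greedy net; the sample size, radii, and the use of a greedy net on finitely many sample points are all illustrative assumptions.

```python
# Minimal sketch: check N(N, ||.||, tau) ~ tau^{-d_nu} on a sampled 2-sphere in R^3.
import numpy as np

def greedy_covering_number(points, tau):
    """Size of a greedy tau-net of `points` in the Euclidean norm."""
    remaining = points.copy()
    centers = 0
    while len(remaining) > 0:
        c = remaining[0]
        dist = np.linalg.norm(remaining - c, axis=1)
        remaining = remaining[dist > tau]  # drop everything covered by B(c, tau)
        centers += 1
    return centers

rng = np.random.default_rng(0)
x = rng.normal(size=(20000, 3))
sphere = x / np.linalg.norm(x, axis=1, keepdims=True)  # d_nu = 2, ambient d = 3

for tau in [0.4, 0.2, 0.1]:
    K = greedy_covering_number(sphere, tau)
    print(f"tau={tau:4.2f}  net size={K:5d}  K * tau^2 = {K * tau**2:.2f}")
# K * tau^2 stays of constant order, matching the tau^{-d_nu} scaling above.
```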
## Appendix B Deferred proofs
### On the tightness of Theorem 2
In this section, we give our results concerning the tightness of Theorem 2 in terms of covering number dependence. Our main observation in this vein follows.
**Proposition 44**.: _Suppose there exists \(r\geqslant 0\) such that for all \(\varepsilon>0\), probability measures \(\mu,\nu\) and cost functions \(c\) satisfying Assumption 1, there is a numerical constant such that_
\[\mathbb{E}[|S_{\varepsilon}(\hat{\mu},\hat{\nu})-S_{\varepsilon}(\mu,\nu)|] \lesssim(1+\varepsilon)\sqrt{\frac{\mathcal{N}\big{(}\mu,\big{(}\frac{ \varepsilon}{L}\big{)}^{r}\big{)}\wedge\mathcal{N}\big{(}\nu,\big{(}\frac{ \varepsilon}{L}\big{)}^{r}\big{)}}{n}}.\]
_Then \(r\geqslant 1\)._
To prove this result, we establish the following fact.
**Proposition 45** (Entropic estimation of \(W_{1}\) distances).: _Suppose \(c(x,y)=\|x-y\|\), and there exists \(r\geqslant 0\) such that for all \(\varepsilon>0\), and all \(\mu,\nu\) supported in \(B(0,1/2)\), there is a numerical constant for which_
\[\mathbb{E}[|S_{\varepsilon}(\hat{\mu},\hat{\nu})-S_{\varepsilon}(\mu,\nu)|] \lesssim(1+\varepsilon)\sqrt{\frac{\mathcal{N}\big{(}\mu,\big{(}\frac{ \varepsilon}{L}\big{)}^{r}\big{)}\wedge\mathcal{N}\big{(}\nu,\big{(}\frac{ \varepsilon}{L}\big{)}^{r}\big{)}}{n}}.\]
_Then for dimension-dependent constants \(C_{d},C_{d}^{\prime}\),_
\[\mathbb{E}[|S_{\varepsilon}(\hat{\mu},\hat{\nu})-W_{1}(\mu,\nu)|]\lesssim C_{d }\big{\{}\frac{1}{rd+4}\log\big{(}C_{d}^{\prime}n)+1\big{\}}\cdot n^{-1/(rd+2)}\]
_In particular, Theorem 2 implies the rate \(n^{-1/(d+2)}\) for \(W_{1}\) distance estimation, up to logarithmic factors and dimension-dependent constants._
Proposition 44 follows from this result by recalling that for \(d\geqslant 3\), the minimax rates for \(W_{1}\) distance estimation over all distributions supported in \(B(0,1/2)\) are \(n^{-1/d}\) up to logarithmic factors [25, Theorem 11]; if \(r<1\), then for all sufficiently large \(d\) the rate \(n^{-1/(rd+2)}\) would improve on this lower bound, a contradiction.
Proof of Proposition 45.: Using standard upper bounds on covering numbers in Euclidean spaces [23, Proposition 4.2.12], the hypothesis implies
\[\mathbb{E}[|S_{\varepsilon}(\hat{\mu},\hat{\nu})-S_{\varepsilon}(\mu,\nu)|] \lesssim(1+\varepsilon)\cdot\Big{(}1+\frac{1}{\varepsilon^{r}}\Big{)}^{d/2} \cdot\frac{1}{\sqrt{n}}.\] (B.1)
Note that for \(\varepsilon=0\) we recover the Wasserstein-1 distance, and the rate of approximation is known to be bounded as
\[|S_{\varepsilon}(\mu,\nu)-W_{1}(\mu,\nu)|\leqslant C_{d}\varepsilon\log\Big{(} \frac{C_{d}^{\prime}}{\varepsilon}\Big{)},\] (B.2)
where \(C_{d},C_{d}^{\prime}\) are dimension-dependent constants [1, Theorem 1]. Using \(S_{\varepsilon}(\hat{\mu},\hat{\nu})\) to estimate \(W_{1}(\mu,\nu)\), suppose we take \(\varepsilon=n^{-1/(rd+2)}\). We then find, for dimension-dependent constants \(C_{d},C_{d}^{\prime}\) (potentially different from before), that
\[\mathbb{E}[|S_{\varepsilon}(\hat{\mu},\hat{\nu})-W_{1}(\mu,\nu)|]\lesssim C_{ d}\big{\{}\frac{1}{rd+4}\log\big{(}C_{d}^{\prime}n\big{)}+1\big{\}}\cdot n^{-1/(rd+2)}.\]
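For intuition, here is a minimal self-contained sketch (ours, with illustrative sample sizes, a hand-rolled Sinkhorn loop, and the plain primal cost \(\langle P,C\rangle\) standing in for \(S_{\varepsilon}\)) of the estimator at the schedule \(\varepsilon=n^{-1/(rd+2)}\) with \(r=1\) used above.

```python
import numpy as np

def entropic_ot_cost(x, y, eps, n_iter=1000):
    """Sinkhorn iterations for c(x, y) = ||x - y|| with uniform marginals;
    returns the primal transport cost <P, C> of the entropic plan."""
    n, m = len(x), len(y)
    C = np.linalg.norm(x[:, None, :] - y[None, :, :], axis=-1)
    K = np.exp(-C / eps)
    a, b = np.full(n, 1.0 / n), np.full(m, 1.0 / m)
    u, v = np.ones(n), np.ones(m)
    for _ in range(n_iter):
        u = a / (K @ v)
        v = b / (K.T @ u)
    P = u[:, None] * K * v[None, :]
    return float(np.sum(P * C))

rng = np.random.default_rng(0)
d, n = 3, 500
x = rng.uniform(-0.5, 0.5, size=(n, d))        # sample from mu
y = 0.5 * rng.uniform(-0.5, 0.5, size=(n, d))  # sample from a different nu
eps = n ** (-1.0 / (d + 2))                    # the schedule above, with r = 1
print("entropic estimate of W_1(mu, nu):", entropic_ot_cost(x, y, eps))
```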
### Deferred proofs from section 6
Proof of Lemma 25.: Let \(z_{1},\ldots,z_{K}\) be a proper \(\varepsilon/2L\)-net of \(N\) such that \(K=\mathcal{N}^{\mathrm{pr}}(N,\frac{\varepsilon}{2L})\). For each \(z\in N\), there exists \(k\in[K]\) such that \(z\in B(z_{k},\frac{\varepsilon}{2L})\), so that \(B(z_{k},\frac{\varepsilon}{2L})\subset B(z,\frac{\varepsilon}{L})\). Thus
\[\inf_{z\in N}\hat{\nu}(B(z,\frac{\varepsilon}{L}))\geqslant\min_{k\in[K]}\hat {\nu}(B(z_{k},\frac{\varepsilon}{2L})).\]
Therefore, for all \(t>0\),
\[\mathbb{P}[\inf_{z\in N}\hat{\nu}(B(z,\frac{\varepsilon}{L}))\leqslant \inf_{z\in N}\nu(B(z,\frac{\varepsilon}{2L}))-t]\] \[\qquad\leqslant\mathbb{P}[\min_{k\in[K]}\hat{\nu}(B(z_{k},\frac{ \varepsilon}{2L}))\leqslant\inf_{z\in N}\nu(B(z,\frac{\varepsilon}{2L}))-t]\] \[\qquad\leqslant\sum_{k=1}^{K}\mathbb{P}[\hat{\nu}(B(z_{k},\frac{ \varepsilon}{2L}))\leqslant\nu(B(z_{k},\frac{\varepsilon}{2L}))-t].\]
By Hoeffding's inequality, for each \(k\in[K]\),
\[\mathbb{P}[\hat{\nu}(B(z_{k},\frac{\varepsilon}{2L}))\leqslant\nu(B(z_{k}, \frac{\varepsilon}{2L}))-t]\leqslant e^{-2nt^{2}}.\]
Therefore, with probability at least \(1-\frac{1}{n}e^{-\frac{10}{\varepsilon}}\),
\[\inf_{z\in N}\hat{\nu}(B(z,\frac{\varepsilon}{L}))\gtrsim\inf_{z\in N}\nu(B(z,\frac{\varepsilon}{2L}))-\sqrt{\frac{1/\varepsilon+\log(nK)}{n}}.\]
Note that \(K=\mathcal{N}^{\mathrm{pr}}(N,\frac{\varepsilon}{2L})\leqslant\mathcal{N}(N, \frac{\varepsilon}{4L})\), and so we may conclude with Proposition 42 and Proposition 43 combined with our assumption on \(n\) in Equation (6.2).
Proof of Lemma 26.: For the first statement, observe that for all \(x\in\operatorname{supp}\mu\)
\[1=\int p_{\varepsilon}(x,y^{\prime})\mathrm{d}\nu(y^{\prime}).\]
By Proposition 15 for any \(y\in\operatorname{supp}(\nu)\),
\[1\gtrsim p_{\varepsilon}(x,y)\nu(B(y,\frac{\varepsilon}{L}))\gtrsim p_{ \varepsilon}(x,y)\big{(}\frac{\varepsilon}{L}\big{)}^{d_{\nu}},\]
where the second inequality follows by Proposition 42. Re-arranging yields the first statement since \(x\in\operatorname{supp}\mu\) and \(y\in\operatorname{supp}\nu\) were arbitrary.
For the second statement, it suffices to show the statement in the event described by Lemma 25. Use Proposition 15 to find that, for any \(x\in\operatorname{supp}\mu,y\in\operatorname{supp}\nu\),
\[\hat{p}_{\varepsilon}(x,y)\leqslant\min_{j\in[n]}\big{\{}e^{\frac{2L}{ \varepsilon}\|y-y_{j}\|}\hat{p}_{\varepsilon}(x,y_{j})\big{\}}.\]
Reasoning as above,
\[\hat{p}_{\varepsilon}(x,y_{j})\lesssim\hat{\nu}(B(y_{j},\frac{\varepsilon}{L} ))^{-1}\leqslant\sup_{y^{\prime}\in N}\hat{\nu}(B(y^{\prime},\frac{\varepsilon }{L}))^{-1}.\]
Since we are working in the event described by Lemma 25,
\[\sup_{y^{\prime}\in N}\hat{\nu}(B(y^{\prime},\frac{\varepsilon}{L}))^{-1} \lesssim\Big{(}\frac{L}{\varepsilon}\Big{)}^{d_{\nu}}.\]
Moreover, on this event \(\hat{\nu}(B(y,\frac{\varepsilon}{L}))>0\), so there exists a \(y_{j}\) such that \(\|y-y_{j}\|\leqslant\frac{\varepsilon}{L}\), in which case
\[\hat{p}_{\varepsilon}(x,y)\lesssim\left(\frac{L}{\varepsilon}\right)^{d_{\nu}}.\]
Proof of Lemma 35.: We give the proof of the first claim; the remaining two follow in the same fashion. Let \(\mathcal{E}\) denote the event in Lemma 26. Observe that, by the pointwise bounds in Proposition 14,
\[\mathbb{E}[\|\hat{p}_{\varepsilon}-p_{\varepsilon}\|_{L^{2}(\mu \otimes\nu)}^{2}] =\mathbb{E}[\mathbb{1}[\mathcal{E}]\|\hat{p}_{\varepsilon}-p_{ \varepsilon}\|_{L^{2}(\mu\otimes\nu)}^{2}]+\mathbb{E}[\mathbb{1}[\mathcal{E}^ {c}]\|\hat{p}_{\varepsilon}-p_{\varepsilon}\|_{L^{2}(\mu\otimes\nu)}^{2}]\] \[\lesssim\mathbb{E}[\mathbb{1}[\mathcal{E}]\|\hat{p}_{\varepsilon }-p_{\varepsilon}\|_{L^{2}(\mu\otimes\nu)}^{2}]+e^{\frac{10}{\varepsilon}} \mathbb{E}[\mathbb{1}[\mathcal{E}^{c}]]\] \[\leqslant\mathbb{E}[\mathbb{1}[\mathcal{E}]\|\hat{p}_{\varepsilon }-p_{\varepsilon}\|_{L^{2}(\mu\otimes\nu)}^{2}]+\frac{1}{n}.\]
Note that \(|e^{a}-e^{b}|\leqslant e^{a\lor b}|a-b|=(e^{a}\lor e^{b})|a-b|\) for all \(a,b\in\mathbb{R}\), and so
\[\mathbb{E}[\mathbb{1}[\mathcal{E}]\|\hat{p}_{\varepsilon}-p_{ \varepsilon}\|_{L^{2}(\mu\otimes\nu)}^{2}] \leqslant\Big{(}\frac{L}{\varepsilon}\Big{)}^{2d_{\nu}}\mathbb{ E}[\mathbb{1}[\mathcal{E}]\|\log\hat{p}_{\varepsilon}-\log p_{\varepsilon}\|_{L^{2}( \mu\otimes\nu)}^{2}]\] \[\leqslant\Big{(}\frac{L}{\varepsilon}\Big{)}^{2d_{\nu}}\mathbb{ E}[\|\log\hat{p}_{\varepsilon}-\log p_{\varepsilon}\|_{L^{2}(\mu\otimes\nu)}^{2}]\] \[=\frac{1}{\varepsilon^{2}}\cdot\Big{(}\frac{L}{\varepsilon} \Big{)}^{2d_{\nu}}\mathbb{E}[\|\hat{f}_{\varepsilon}-f_{\varepsilon}\|_{L^{2} (\mu)}^{2}+\|\hat{g}_{\varepsilon}-g_{\varepsilon}\|_{L^{2}(\nu)}^{2}].\]
The first claim follows.
|
2301.10425
|
k-Power Graphs of Finite Groups
|
For a finite group $G$ and for a fixed positive integer $k$, $k\geq 2$, the
$k$-power graph of $G$ is an undirected simple graph with vertex set $G$ in
which two distinct vertices $x$ and $y$ are adjacent if and only if $x^k=y$ or
$y^k=x$. In this paper, we investigate some graph parameters such as number of
edges, clique number, connectedness, etc. of $k$-power graphs of finite groups.
We also find some properties of $k$-power graphs of finite cyclic groups, and
finally we present an application.
|
Swathi V V, M S Sunitha
|
2023-01-25T06:33:05Z
|
http://arxiv.org/abs/2301.10425v1
|
# \(k\)-Power Graphs of Finite Groups
###### Abstract
For a finite group \(G\) and for a fixed positive integer \(k\), \(k\geq 2\), the \(k\)-power graph of \(G\) is an undirected simple graph with vertex set \(G\) in which two distinct vertices \(x\) and \(y\) are adjacent if and only if \(x^{k}=y\) or \(y^{k}=x\). In this paper, we investigate some graph parameters such as number of edges, clique number, connectedness, etc. of \(k\)-power graphs of finite groups. We also find some properties of \(k\)-power graphs of finite cyclic groups, and finally we present an application.
**Keywords:**\(k\)-power graphs, Power graphs, Graphs of groups.
## 1 Introduction
The directed power graph \(\overrightarrow{P}(S)\) of a semigroup \(S\) was defined by Kelarev and Quinn [1], as a digraph with vertex set \(S\) in which there is an arc from a vertex \(x\) to another vertex \(y\) if and only if \(y=x^{n}\) for some positive integer \(n\). Following this, the (undirected) power graph \(P(S)\) of a semigroup \(S\) was defined by Chakrabarty et al. [2] as an undirected simple graph with vertex set \(S\) in which two distinct vertices \(x\) and \(y\) are adjacent if and only if \(x^{n}=y\) or \(y^{n}=x\) for some positive integer \(n\). The authors proved that the power graph of a finite group \(G\) is complete if and only if \(G\) is cyclic of order \(1\) or \(p^{m}\), where \(p\) is a prime and \(m\) is a positive integer.
A number of research publications describe various graph parameters of power graphs, completely or in part; [3, 4, 5, 6, 7, 8, 9, 10, 11] are some of the works which discuss numerous properties of power graphs of finite groups.
For a fixed positive integer \(k\) with \(k\geq 2\) and for a semigroup \(S\), Sriparna Chattopadhyay and Pratima Panigrahi [12] defined the \(k\)-power graph \(P(S,k)\) as the graph with vertex set \(S\) in which two distinct vertices \(x\) and \(y\) are adjacent if and only if \(x^{k}=y\) or \(y^{k}=x\). They studied cycle structures, connectedness and symmetry of \(k\)-power graphs of cyclic groups. Clearly, for every positive integer \(k\geq 2\), \(P(S,k)\) is a spanning subgraph of \(P(S)\).
In [13] the authors defined the square graph of a finite group, which is the particular \(k\)-power graph with \(k=2\), and investigated upper bounds for its clique number and chromatic number.
In this paper, notation and terminology of [14] for graphs, [15] for groups and [16] for number theory are followed.
We denote the order of a group \(G\) by \(o(G)\) and order of an element \(a\) in \(G\) by \(o(a)\). The identity element in \(G\) is denoted by \(e\). Also, a cyclic group of order \(n\) is denoted by \(\mathbb{Z}_{n}\).
The order of a graph \(H\) is the number of vertices in that graph. We denote \(deg(x)\) as the degree of the vertex \(x\), which is the number of edges incident to \(x\). The greatest distance between any two vertices of a connected graph \(H\) is called the diameter of \(H\) and is denoted by \(diam(H)\). The complete graph of order \(n\) is denoted by \(K_{n}\) and a path of order \(n\) by \(P_{n}\). We denote \(x\sim y\) if the vertices \(x\) and \(y\) are adjacent in the graph.
The greatest common divisor of two integers \(m\) and \(n\) is denoted by \(gcd(m,n)\). For a positive integer \(n>1\) and for an integer \(a\) such that \(gcd(a,n)=1\), the order of \(a\) modulo \(n\) is defined as the least positive integer \(k\) such that \(a^{k}\equiv 1(mod\,n)\), which is denoted by \(ord_{n}(a)\). If \(a\) has order \(\phi(n)\), where \(\phi(m)\) is the Euler totient function of \(m\), then \(a\) is called a primitive root modulo \(n\).
In this paper we establish some structural properties of the \(k\)-power graph of a general finite group \(G\) of order \(n\), for \(2\leq k\leq n\). We also prove some properties of \(P(\mathbb{Z}_{n},k)\) and give an application.
## 2 \(k\)-power graphs
Let \(G\) be a finite group such that \(o(G)=n\) and \(P(G,k)\) be the \(k\)-power graph of \(G\). For any two integers \(k_{1},k_{2}\geq 2\), if \(k_{1}\equiv k_{2}(mod\,n)\) then clearly \(P(G,k_{1})=P(G,k_{2})\). We begin this section by noting that for a finite group \(G\) of order \(n\), the number of edges in \(P(G,k)\) is at most \(n-1\), since only the \(k\)-th power of each element is considered. We calculate the number of edges in \(P(G,k)\) in the following theorem.
**Theorem 1**: _Let \(G\) be a finite group of order \(n\). Then the number of edges in the \(k\)-power graph of \(G\) is given by \(\mid E(P(G,k))\mid=n-\sum_{d\mid k_{1}}t_{d}-\sum_{d\mid k_{2},d\nmid k_{1}} \frac{t_{d}}{2}\), where \(k_{1}=gcd(k-1,n),k_{2}=gcd(k^{2}-1,n)\) and \(t_{d}\) is the number of elements of order \(d\) in \(G\)._
Proof.: Corresponding to each element \(a\in G\) there exists an edge in \(P(G,k)\) between \(a\) and \(a^{k}\) if \(a\neq a^{k}\). Now, if \(x\) is an element of \(G\) which has order \(d\) and \(d\mid k-1\), then \(x^{k}=x\). Hence, if \(d\mid gcd(k-1,n)\), there are no edges corresponding to the elements of order \(d\) in \(G\). Now, edges corresponding to two elements \(x\) and \(y\) coincide if \(x^{k}=y\) and \(y^{k}=x\). In this case, \(o(x)\mid k^{2}-1\) and \(o(y)\mid k^{2}-1\). Also if \(x\in G\) has order \(d\), \(d\mid gcd(k^{2}-1,n)\) and \(d\nmid gcd(k-1,n)\), then \(x\neq x^{k}\) and the edges corresponding to \(x\) and \(x^{k}\) coincide. Note that the number of elements in \(G\) of order \(d\) is even if \(d>2\), and if \(d=2\) and \(d\mid k_{2}\), then \(d\mid k_{1}\).
Since there are \(\phi(d)\) elements of order \(d\) in a cyclic group, the following corollary is obvious.
**Corollary 2**: _Let \(\mathbb{Z}_{n}\) be the cyclic group of order \(n\). Then \(\mid E(P(\mathbb{Z}_{n},k))\mid=n-\sum_{d\mid k_{1}}\phi(d)-\sum_{d\mid k_{2},d \nmid k_{1}}\frac{\phi(d)}{2}\)._
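The formula in Corollary 2 is easy to check computationally. The following short sketch (ours, purely illustrative) compares it with a brute-force construction of \(P(\mathbb{Z}_{n},k)\) for small \(n\).

```python
from math import gcd

def phi(m):
    """Euler totient, by direct count (fine for small m)."""
    return sum(1 for a in range(1, m + 1) if gcd(a, m) == 1)

def edges_formula(n, k):
    k1, k2 = gcd(k - 1, n), gcd(k * k - 1, n)
    fixed = sum(phi(d) for d in range(1, n + 1) if k1 % d == 0)
    paired = sum(phi(d) for d in range(1, n + 1) if k2 % d == 0 and k1 % d != 0)
    return n - fixed - paired // 2

def edges_bruteforce(n, k):
    # additive notation: the k-th "power" of a in Z_n is k*a mod n
    return len({frozenset((a, (k * a) % n)) for a in range(n) if (k * a) % n != a})

for n in range(2, 40):
    for k in range(2, n + 1):
        assert edges_formula(n, k) == edges_bruteforce(n, k), (n, k)
print("Corollary 2 verified for all 2 <= k <= n < 40")
```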
_Example 1_ The \(k\)-power graphs of the symmetric group on 3 elements (\(S_{3}\)) for \(k=2,3,4\) and 5 are given in figures 1,2,3 and 4 respectively, where \(\sigma_{0}=e,\sigma_{1}\)=(1 2 3), \(\sigma_{2}=\)(1 3 2), \(\tau_{1}=\)(1 2), \(\tau_{2}=\)(2 3) and \(\tau_{3}\)=(1 3).
_Example 2_: The \(k\)-power graph of \(\mathbb{Z}_{4}\) when \(k=2\) is the following tree.

Figure 5: \(P(\mathbb{Z}_{4},2)\)
It is clear from Figures 1-5 that the \(k\)-power graph of a finite group need not be connected. The following proposition characterizes the cyclic groups whose \(k\)-power graph is connected.
**Proposition 3**: _[_12_]_ _The graph \(P(\mathbb{Z}_{n},k)\) is connected if and only if \(n\mid k^{m}\) for some \(m\in\mathbb{N}\). Also in this case \(P(\mathbb{Z}_{n},k)\) is a tree._
For any finite group \(G\), the characterization for \(P(G,k)\) to be connected is given in the following theorem.
**Theorem 4**: _Let \(G\) be a finite group. \(P(G,k)\) is connected if and only if for every \(x\in G\), \(o(x)\mid k^{n}\) for some \(n\in\mathbb{N}\). For \(x\in G\), let \(n_{x}=min\{n\in\mathbb{N}:o(x)\mid k^{n}\}\), then \(diam(P(G,k))\leq 2max\{n_{x}:x\in G\}\)._
Proof: Let \(x\in G\) and \(o(x)\mid k^{n}\) for some \(n\in\mathbb{N}\). Suppose \(n_{x}=min\{n\in\mathbb{N}:o(x)\mid k^{n}\}\). Then \(x\) is connected to \(e\) through a path of length \(n_{x}\) in \(P(G,k)\). Hence \(P(G,k)\) is connected if for all \(x\in G\), \(o(x)\mid k^{n}\) for some \(n\in\mathbb{N}\), and \(diam(P(G,k))\leq 2max\{n_{x}:x\in G\}\). The converse also follows.
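For cyclic groups this criterion reduces to the condition \(n\mid k^{m}\) of Proposition 3, which can be checked directly. The following sketch (ours, illustrative only) verifies it against a brute-force connectivity test on \(P(\mathbb{Z}_{n},k)\).

```python
from math import gcd

def connected(n, k):
    """Brute-force connectivity of P(Z_n, k) via union-find on the edges a -- k*a."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for a in range(n):
        ra, rb = find(a), find((k * a) % n)
        if ra != rb:
            parent[ra] = rb
    return len({find(a) for a in range(n)}) == 1

def n_divides_some_power_of_k(n, k):
    while n > 1:
        g = gcd(n, k)
        if g == 1:
            return False
        n //= g
    return True

for n in range(2, 60):
    for k in range(2, n + 1):
        assert connected(n, k) == n_divides_some_power_of_k(n, k), (n, k)
print("connectivity criterion verified for all 2 <= k <= n < 60")
```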
The following theorem proves that the clique number is bounded above.
**Theorem 5**: _The clique number of \(P(G,k)\), \(\omega(P(G,k))\leq 3\) for any finite group \(G\) and \(\omega(P(G,k))=3\) if and only if there exist elements in \(G\) of order \(m\) such that \(m>3,m\mid k^{3}-1\) and \(m\nmid k-1\)._
Proof: If possible, suppose there exist elements \(x,y,z,w\in G\) which induces \(K_{4}\). Then the following six conditions must be satisfied.
1. \(x^{k}=y\) or \(y^{k}=x\)
2. \(y^{k}=z\) or \(z^{k}=y\)
3. \(z^{k}=w\) or \(w^{k}=z\)
4. \(w^{k}=x\) or \(x^{k}=w\)
5. \(x^{k}=z\) or \(z^{k}=x\)
6. \(y^{k}=w\) or \(w^{k}=y\)
Suppose \(x^{k}=y\) in condition 1. Then conditions 4 and 5 imply that \(w^{k}=x\) and \(z^{k}=x\), which is not possible by condition 3; the case \(y^{k}=x\) in condition 1 is symmetric. Hence, \(P(G,k)\) has no subgraph isomorphic to \(K_{4}\), and the clique number is at most 3.
Now suppose \(\omega(P(G,k))=3\). Then there exists an element \(a\neq e\) such that \(a^{k}\neq a,e\), \(a^{k^{2}}\neq a,e\) and \(a^{k^{3}}=a\). Then \(o(a)\mid k^{3}-1\) and \(o(a)\nmid k-1\). Note that, if \(m\mid k^{3}-1\) and \(m\mid k^{2}-1\) then \(m\mid k-1\). The converse also follows.
The next theorem states that the chromatic number is also bounded above by 3.
**Theorem 6**: _Let \(G\) be a finite group, then the chromatic number \(\chi(P(G,k))\leq 3\)._
_Proof_ Choose a vertex \(x\) from a connected component of \(P(G,k)\) and assign it colour 1. Assign colour 2 to the vertex \(x^{k}\). Label by \(y\) each vertex adjacent to \(x\) other than \(x^{k}\); then clearly \(y^{k}=x\). If \((x^{k})^{k}=y\), assign colour 3 to \(y\); otherwise assign colour 2. Now label by \(z\) the vertices adjacent to \(y\); then \(z^{k}=y\), and colour \(z\) with colour 3. Next label by \(w\) the vertices adjacent to \(z\); then \(w^{k}=z\). Again, give colour 3 to \(w\) if \((x^{k})^{k}=w\), and otherwise give colour 2. Proceed with the same process until every vertex in that component gets a colour, and apply the same procedure to each connected component of \(P(G,k)\). We conclude that at most 3 colours are required to properly colour \(P(G,k)\).
The question of whether the \(k\)-power graph is perfect therefore arises naturally. The following remark answers this question.
_Remark 1_ The \(k\)-power graph of a finite group need not be perfect. For example, \(P(\mathbb{Z}_{31},2)\) is a union of an isolated vertex and six \(5-\)cycles, which has chromatic number 3 and clique number equal to 2.
The groups given by the presentation \(Q_{4n}=\langle a,b:a^{n}=b^{2},a^{2n}=1,b^{-1}ab=a^{-1}\rangle\) are the generalized quaternion groups.
**Theorem 7**: _Let \(G\) be a finite group, then \(P(G,k)\) is a star graph if and only if one of the following holds._
* \(o(x)\mid k\ \forall x\in G\)__
* \(G=\mathbb{Z}_{4}\) _and_ \(k=2\)__
* \(G=Q_{8}\) _and_ \(k=2\:or\:6\)__
_Proof_ Suppose \(P(G,k)\) is a star graph and suppose there exists \(x\in G\) such that \(o(x)\nmid k\); then \(x^{k}\neq e\). Write \(x^{k}=y\) for some \(y\in G\). If \(y=x\), then \(z^{k}=x\) for all \(z\in G,z\neq x\), which is not possible since \(e^{k}\neq x\). If \(y\neq x\), then \(z^{k}=y\) for all \(z\in G,z\neq y\), and \(y^{k}=e\).
If \(x^{2}=y\), then \(y^{2}=e\), which implies that \(G\) is a group with a unique involution in which all other non-identity elements have order \(4\). Hence \(G=\mathbb{Z}_{4}\) with \(k=2\) or \(G=Q_{8}\) with \(k=2\) or \(6\) (a \(p\)-group with a unique subgroup of order \(p\) is either cyclic or generalized quaternion). If \(x^{2}\neq y\), then \(x^{k}=y\) and \((x^{2})^{k}=y\), which implies \(x^{k}=e\) and hence \(o(x)\mid k\).
For the converse, if \(o(x)\mid k\) for all \(x\in G\), then \(x^{k}=e\) for all \(x\in G\), and hence \(P(G,k)\) is a star graph. The \(k\)-power graph of \(\mathbb{Z}_{4}\) with \(k=2\), and those of \(Q_{8}\) with \(k=2\) or \(6\), are also star graphs.
\(\Box\)
Clearly, if \(o(x)\mid k-1\) for all \(x\in G\), then \(x^{k}=x\) and every vertex in \(G\) is an isolated vertex in \(P(G,k)\), and vice versa. Hence the following theorem is immediate.
**Theorem 8**: \(P(G,k)\) _is an empty graph if and only if \(o(x)\mid k-1\) for every \(x\in G\)._
We have already seen that \(P(G,k)\) need not be connected, and if \(P(G,k)\) is connected, then it is a tree. The following theorem characterizes groups whose \(k-\)power graphs are forests.
**Theorem 9**: _Let \(G\) be a finite group. \(P(G,k)\) is a forest if and only if there exists no element in \(G\) of order \(m>1\) such that \(gcd(k,m)=1\) and \(ord_{m}(k)>2\)._
_Proof_ By Euler's theorem in number theory, if \(m\) and \(k\) are co-prime integers, then \(k^{\phi(m)}\equiv 1(mod\,m)\). Hence, if there exists an element \(x\) in \(G\) of order \(m\) such that \(gcd(m,k)=1\) and \(ord_{m}(k)=l>2\), then \(x^{k^{l}}=x\), and \(x,x^{k},x^{k^{2}},\ldots,x^{k^{l}}\) is a cycle in \(P(G,k)\) of length \(l\) and hence \(P(G,k)\) is not a forest.
Conversely, if \(P(G,k)\) is not a forest, then there is a cycle \(x,x^{k},x^{k^{2}},..x^{k^{l}}=x\) in \(P(G,k)\), which implies \(o(x)\mid k^{l}-1\). Hence \(k^{l}\equiv 1(mod\,o(x))\) which holds only if \(gcd(k,o(x))=1\). \(\Box\)
## 3 Cyclic groups
Let \(gcd(n,k)=d\); then the congruence relation \(kx\equiv a(mod\,n)\) has at most \(d\) solutions. Hence, if \(G\) is a cyclic group of order \(n\), then the maximum degree of its \(k\)-power graph is at most \(d+1\). The following theorem describes the degree of each element in a cyclic group.
**Theorem 10**: _Consider the cyclic group \(\mathbb{Z}_{n}=\{0,1,2,...n-1\}\). Let \(d=(n,k)\) and \(a\in\mathbb{Z}_{n}\). Then,_
_If \(d\nmid a\), \(deg(a)=\begin{cases}0&\text{if }o(a)\mid k-1\\ 1&\text{if }o(a)\nmid k-1\end{cases}\)_
_If \(d\mid a\), \(deg(a)=\begin{cases}d-1&\text{if }ka\equiv a(mod\,n)\text{ and }\,o(a)\mid k-1\\ d&\text{if }ka\not\equiv a(mod\,n)\text{ and }\,o(a)\mid k-1\\ d&\text{if }ka\equiv a(mod\,n)\text{ and }\,o(a)\nmid k-1\\ d+1&\text{if }ka\not\equiv a(mod\,n)\text{ and }\,o(a)\nmid k-1\end{cases}\)_
Proof: If \(d\nmid a\), the congruence relation \(kx\equiv a(mod\,n)\) has no solution. Therefore, the vertex \(a\) is adjacent to \(a^{k}\) only. In this case, if \(o(a)\mid k-1\), then \(a^{k}=a\) and \(deg(a)=0\) and if \(o(a)\nmid k-1\), then \(a^{k}\neq a\) and \(deg(a)=1\).
Next suppose \(d\mid a\). Then the congruence relation \(kx\equiv a(mod\,n)\) has exactly \(d\) solutions. Now we have two cases,
**case 1:**\(ka\equiv a(mod\,n)\)
In this case, \(a\) itself is a solution of the congruence relation. Hence \(deg(a)=d-1\) if \(o(a)\mid k-1\) and \(deg(a)=d\) if \(o(a)\nmid k-1\).
**case 2:**\(ka\not\equiv a(mod\,n)\)
Here, all the \(d\) solutions of the congruence relation are different from \(a\). Hence \(deg(a)=d\) if \(o(a)\mid k-1\) and \(deg(a)=d+1\) if \(o(a)\nmid k-1\).
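The case analysis of Theorem 10 can be tested exhaustively for small \(n\). The sketch below (ours, not from the paper) encodes the four cases literally and compares them with degrees read off a brute-force construction of \(P(\mathbb{Z}_{n},k)\).

```python
from math import gcd

def degree_formula(a, n, k):
    d = gcd(n, k)
    o = n // gcd(a, n)                 # additive order of a in Z_n
    fixed = (k * a) % n == a           # ka = a (mod n)
    small = (k - 1) % o == 0           # o(a) | k - 1
    if a % d != 0:
        return 0 if small else 1
    if fixed and small:
        return d - 1
    if fixed != small:                 # the two middle cases of Theorem 10
        return d
    return d + 1

for n in range(2, 40):
    for k in range(2, n + 1):
        edges = {frozenset((b, (k * b) % n)) for b in range(n) if (k * b) % n != b}
        for a in range(n):
            assert degree_formula(a, n, k) == sum(1 for e in edges if a in e), (n, k, a)
print("Theorem 10 verified for all 2 <= k <= n < 40")
```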
For a positive integer \(m\), let \(\pi(m)=\{p:p\mid m,p\,\mbox{is a prime}\}\). The following theorem gives another characterization for \(P(\mathbb{Z}_{n},k)\) to be connected.
**Theorem 11**: \(P(\mathbb{Z}_{n},k)\) _is connected if and only if \(\pi(n)\setminus\pi(k)=\emptyset\)._
Proof: Suppose \(\pi(n)\setminus\pi(k)\neq\emptyset\), and let \(p\in\pi(n)\) be such that \(p\notin\pi(k)\). Then \(gcd(p,k)=1\) and \(k^{p-1}\equiv 1(mod\,p)\). Let \(m=ord_{p}(k)\) and let \(x\) be an element in \(G\) such that \(o(x)=p\). Then \(x^{k^{m}}=x\) and \(x\) is not connected to \(e\) through any path in \(P(\mathbb{Z}_{n},k)\). Hence \(P(\mathbb{Z}_{n},k)\) is not connected.
Conversely, suppose \(\pi(n)\setminus\pi(k)=\emptyset\). Let \(n=p_{1}^{\alpha_{1}}p_{2}^{\alpha_{2}}...p_{s}^{\alpha_{s}}\) be the prime factorization of \(n\); since every \(p_{i}\) divides \(k\), taking \(m=max\{\alpha_{1},\alpha_{2},...,\alpha_{s}\}\) gives \(n\mid k^{m}\). Hence, by Proposition 3, \(P(\mathbb{Z}_{n},k)\) is connected.
The identity element \(e\) is an isolated vertex in \(P(\mathbb{Z}_{n},k)\) if \(gcd(n,k)=1\) since, if \(x^{k}=e\) for some \(x\in\mathbb{Z}_{n}\) then \(o(x)\mid k\) and hence \(o(x)\mid gcd(n,k)\).
**Proposition 12**: _Let \(gcd(n,k)=1\). If \(x\sim y\) in \(P(\mathbb{Z}_{n},k)\), then \(o(x)=o(y)\)._
Proof: Let \(x\sim y\), then \(kx\equiv y(mod\,n)\implies o(x)kx\equiv o(x)y(mod\,n)\implies 0\equiv o(x)y(mod\,n)\implies o(y) \mid o(x)\). Also \(o(y)kx\equiv o(y)y(mod\,n)\implies o(y)kx\equiv 0(mod\,n)\implies o(x)\mid o(y)k \implies o(x)\mid o(y)\), since \(gcd(o(x),k)=1\).
The following theorem states that the number of connected components is bounded below.
**Theorem 13**: _If \(gcd(n,k)=1\), then the number of connected components of \(P(\mathbb{Z}_{n},k)\), \(c(P(\mathbb{Z}_{n},k))\geq\tau(n)\), where \(\tau(n)\) is the number of divisors of \(n\), and \(c(P(\mathbb{Z}_{n},k))=\tau(n)\) if and only if \(k\) is a primitive root modulo \(d\), for every divisor \(d\) of \(n\)._
Proof: By Proposition 12, all the vertices in a component have the same order. Hence, the inequality follows since an element of order \(d\) exists in \(\mathbb{Z}_{n}\) if \(d\mid n\).
Suppose \(k\) is a primitive root modulo \(d\) for every \(d\mid n\). Then \(ord_{d}(k)=\phi(d)\). Hence, if \(x\) is an element in \(G\) of order \(d\), then \(\phi(d)\) is the smallest positive integer \(m\) such that \(x^{k^{m}}=x\). Hence \(x,x^{k},x^{k^{2}},\ldots,x^{k^{\phi(d)-1}}\) are the \(\phi(d)\) distinct elements of order \(d\) in \(G\) and they all lie in the same component of \(P(\mathbb{Z}_{n},k)\).
Conversely, suppose \(k\) is not a primitive root modulo \(d\) for some \(d\mid n\). Then, \(ord_{d}(k)=m<\phi(d)\) and hence \(k^{m}\equiv 1(mod\,d)\) and \(x^{k^{m}}=x\) for every element \(x\) of order \(d\). Hence, the component containing \(x\) has exactly \(m\) vertices. Therefore, the elements in \(\mathbb{Z}_{n}\) of order \(d\) contribute at least two components.
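Both parts of Theorem 13 are easy to confirm numerically. The following sketch (ours, with illustrative ranges) counts components with union-find and compares the count with \(\tau(n)\) and with the primitive-root condition.

```python
from math import gcd

def components(n, k):
    """Number of connected components of P(Z_n, k), by union-find."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for a in range(n):
        ra, rb = find(a), find((k * a) % n)
        if ra != rb:
            parent[ra] = rb
    return len({find(a) for a in range(n)})

def phi(m):
    return sum(1 for a in range(1, m + 1) if gcd(a, m) == 1)

def multiplicative_order(k, d):
    if d == 1:
        return 1
    m, x = 1, k % d
    while x != 1:
        x, m = (x * k) % d, m + 1
    return m

for n in range(2, 80):
    for k in range(2, n):
        if gcd(n, k) != 1:
            continue
        divisors = [d for d in range(1, n + 1) if n % d == 0]
        c = components(n, k)
        assert c >= len(divisors)                          # c(P(Z_n, k)) >= tau(n)
        primitive = all(multiplicative_order(k, d) == phi(d) for d in divisors)
        assert (c == len(divisors)) == primitive, (n, k)   # equality iff primitive root mod every d
print("Theorem 13 verified for all n < 80")
```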
**Proposition 14**: _[_12_]_ _If \(gcd(n,k)=1\) then any component of \(P(\mathbb{Z}_{n},k)\) is an isolated vertex, the complete graph \(K_{2}\) or a cycle of length at least 3._
The following theorem shows that the converse of Proposition 14 also holds.
**Theorem 15**: _Any component of \(P(\mathbb{Z}_{n},k)\) is an isolated vertex, the complete graph \(K_{2}\) or a cycle of length at least \(3\) if and only if \(gcd(n,k)=1\)._
Proof: Suppose \(gcd(n,k)=d>1\). Then, the congruence \(kx\equiv 1(mod\,n)\) has no solutions. Considering \(1\) and \(k\) as elements of \(\mathbb{Z}_{n}\), \(1\sim k\) in \(P(\mathbb{Z}_{n},k)\). Hence, \(deg(1)=1\). Also, \(kx\equiv k(mod\,n)\) has \(d\) solutions and so \(deg(k)\geq 2\). Therefore, the edge between \(1\) and \(k\) in \(P(\mathbb{Z}_{n},k)\) is not a part of a cycle nor \(K_{2}\) as a component. Hence, the necessary part follows. The sufficient part follows from Proposition 14.
**Theorem 16**: _Let \(n\) be even and \(\frac{n}{2}\) odd, then \(P(\mathbb{Z}_{n},\frac{n}{2})=2K_{1,\frac{n}{2}-1}\) and \(P(\mathbb{Z}_{n},\frac{n}{2}+1)=\frac{n}{2}P_{2}\)._
Proof: Let \(n\) be even and \(\frac{n}{2}\) odd. If \(k=\frac{n}{2}\), then using Theorem 10, \(deg(a)=\frac{n}{2}-1\) if \(a\in\{0,\frac{n}{2}\}\) and \(deg(a)=1\) otherwise. Also, \(\frac{n}{2}a\equiv 0(mod\,n)\) if \(a\) is even, and hence all the nonzero even elements are adjacent to the vertex \(0\), while \(\frac{n}{2}a\equiv\frac{n}{2}(mod\,n)\) if \(a\) is odd, and hence all the odd elements other than \(\frac{n}{2}\) are adjacent to the vertex \(\frac{n}{2}\).
If \(k=\frac{n}{2}+1\), for \(a\in\mathbb{Z}_{n}\), \((\frac{n}{2}+1)a\equiv a(mod\,n)\) if \(a\) is even, and \((\frac{n}{2}+1)a\equiv\frac{n}{2}+a(mod\,n)\) if \(a\) is odd. Hence, the edges in \(P(\mathbb{Z}_{n},k)\) are between the vertices \(a\) and \(\frac{n}{2}+a\) only.
## 4 Application
Consider the directed \(k\)-power graph of a group \(G\), \(\overrightarrow{P}(G,k)\), defined as a digraph with vertex set \(G\) in which there is an arc from a vertex \(a\) to another vertex \(b\) if and only if \(b=a^{k}\). In this section, we present a riddle and solve that riddle using directed \(k\)-power graphs of cyclic groups.
**Shifting Chair Problem**: _In a game, \(n\) chairs are evenly spaced in a circle and \(n\) people are assigned unique numbers from \(1\) to \(n\). People should sit in ascending order in the clockwise direction when the first whistle blows. For each of the whistles thereafter, the individual who is assigned with number \(i\) should acquire his next chair by skipping exactly \(i-1\) number of chairs in the clockwise direction. The problem is to determine the smallest number of whistles required to ensure that each chair is occupied by exactly one person._
Consider the above problem: representing the movement of each individual from the initial seat to the position after the \(k^{\text{th}}\) whistle by a directed edge, we get the directed \(k\)-power graph of \(\mathbb{Z}_{n}\). The problem is then to find the smallest \(k>1\) such that \(od(a)=id(a)=1\) in \(\overrightarrow{P}(\mathbb{Z}_{n},k)\) for all \(a\in\mathbb{Z}_{n}\), where \(od(a)\) is the outdegree of \(a\) and \(id(a)\) is the indegree of \(a\). The following theorem presents the solution.
**Theorem 17**: _In \(\overrightarrow{P}(\mathbb{Z}_{n},k)\), \(id(a)=od(a)=1\) for all \(a\in\mathbb{Z}_{n}\) if and only if \(gcd(n,k)=1\)._
_Proof_ If \(gcd(n,k)=1\), then for each \(a\in\mathbb{Z}_{n}\), the congruence \(kx\equiv a(mod\,n)\) has exactly one solution, so \(id(a)=1\). Also \(od(a)=1\) for all \(a\in\mathbb{Z}_{n}\) since \(a\) is adjacent to \(a^{k}\).
Conversely suppose \(gcd(n,k)=d>1\). Then \(kx\equiv 1(mod\,n)\) has no solutions, so \(id(1)=0\). \(\Box\)
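The riddle can also be solved by direct simulation. The sketch below (ours, illustrative only) seats person \(i\) at chair \(ki\pmod n\) after the \(k\)-th whistle and searches for the first whistle with a proper seating, confirming the answer predicted by Theorem 17.

```python
from math import gcd

def proper_seating_after(n, k):
    """After the k-th whistle, person i (numbered 1..n) sits at chair k*i mod n."""
    return len({(k * i) % n for i in range(1, n + 1)}) == n

def smallest_whistle(n):
    k = 2
    while not proper_seating_after(n, k):
        k += 1
    return k

for n in range(2, 200):
    k = smallest_whistle(n)
    # the answer is the least k > 1 that is coprime to n, as Theorem 17 predicts
    assert gcd(n, k) == 1 and all(gcd(n, j) > 1 for j in range(2, k)), (n, k)
print("shifting chair problem solved by the least k > 1 coprime to n, for all n < 200")
```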
## 5 Declarations
**Conflict of interest** On behalf of all authors, the corresponding author states that there is no conflict of interest.
## Acknowledgements
The first author gratefully acknowledges the financial support of Council of Scientific and Industrial Research, India (CSIR) (Grant No-09/874(0029)/2018-EMR-I). The authors would like to thank the DST, Government of India, for providing support to carry out the work under the scheme 'FIST' (No.SR/FST /MS-I/2019/40).
|
2302.09424
|
Zero and Few-Shot Localization of Task-Oriented Dialogue Agents with a
Distilled Representation
|
Task-oriented Dialogue (ToD) agents are mostly limited to a few widely-spoken
languages, mainly due to the high cost of acquiring training data for each
language. Existing low-cost approaches that rely on cross-lingual embeddings or
naive machine translation sacrifice a lot of accuracy for data efficiency, and
largely fail in creating a usable dialogue agent. We propose automatic methods
that use ToD training data in a source language to build a high-quality
functioning dialogue agent in another target language that has no training data
(i.e. zero-shot) or a small training set (i.e. few-shot). Unlike most prior
work in cross-lingual ToD that only focuses on Dialogue State Tracking (DST),
we build an end-to-end agent.
We show that our approach closes the accuracy gap between few-shot and
existing full-shot methods for ToD agents. We achieve this by (1) improving the
dialogue data representation, (2) improving entity-aware machine translation,
and (3) automatic filtering of noisy translations.
We evaluate our approach on the recent bilingual dialogue dataset BiToD. In
Chinese to English transfer, in the zero-shot setting, our method achieves
46.7% and 22.0% in Task Success Rate (TSR) and Dialogue Success Rate (DSR)
respectively. In the few-shot setting where 10% of the data in the target
language is used, we improve the state-of-the-art by 15.2% and 14.0%, coming
within 5% of full-shot training.
|
Mehrad Moradshahi, Sina J. Semnani, Monica S. Lam
|
2023-02-18T21:30:36Z
|
http://arxiv.org/abs/2302.09424v1
|
# Zero and Few-Shot Localization of Task-Oriented Dialogue Agents with a Distilled Representation
###### Abstract
Task-oriented Dialogue (ToD) agents are mostly limited to a few widely-spoken languages, mainly due to the high cost of acquiring training data for each language. Existing low-cost approaches that rely on cross-lingual embeddings or naive machine translation sacrifice a lot of accuracy for data efficiency, and largely fail in creating a usable dialogue agent. We propose automatic methods that use ToD training data in a source language to build a high-quality functioning dialogue agent in another target language that has no training data (i.e. zero-shot) or a small training set (i.e. few-shot). Unlike most prior work in cross-lingual ToD that only focuses on Dialogue State Tracking (DST), we build an end-to-end agent.
We show that our approach closes the accuracy gap between few-shot and existing full-shot methods for ToD agents. We achieve this by (1) improving the dialogue data representation, (2) improving entity-aware machine translation, and (3) automatic filtering of noisy translations.
We evaluate our approach on the recent bilingual dialogue dataset BiToD. In Chinese to English transfer, in the zero-shot setting, our method achieves 46.7% and 22.0% in Task Success Rate (TSR) and Dialogue Success Rate (DSR) respectively. In the few-shot setting where 10% of the data in the target language is used, we improve the state-of-the-art by 15.2% and 14.0%, coming within 5% of full-shot training.1
Footnote 1: Code can be accessed at [https://github.com/stanford-oval/dialogues](https://github.com/stanford-oval/dialogues)
## 1 Introduction
While dialogue agents in various forms have become commonplace in parts of the world, their lack of support for most human languages has prevented access to the benefits they provide for much of the world. Commercial virtual assistants for example, only support a handful of languages, as extending their functionality to each new language is extremely costly, partially due to the need for collecting new annotated training data in that language.
In recent years, several non-English task-oriented dialogue (ToD) datasets have been created; they are either collected from scratch such as RiSAWOZ Quan et al. (2020) and CrossWOZ Zhu et al. (2020), paraphrased from synthetic sentences by crowdworkers such as BiToD Lin et al. (2021), or manually translated from another language Li et al. (2021). All of these approaches are labor-intensive, expensive, and time-consuming; such investment is unlikely to be made for less widely spoken languages.
Cross-lingual transfer, i.e. using training data from other languages to build a dialogue agent for a specific language, seems especially appealing. An emerging line of work has employed machine translation of training data, and multilingual pre-trained neural networks to tackle this task Sherborne et al. (2020); Li et al. (2021); Moradshahi et al. (2023). However, work in ToD cross-lingual transfer has for the most part, focused on understanding the user input, namely Dialogue State Tracking (DST) and Natural Language Understanding (NLU). Other necessary parts of a dialogue agent like policy and response generation have mostly remained unexplored.
In this paper, we present a methodology for building a fully functional dialogue agent for a new language (e.g. English), by using training data in another language (e.g. Chinese) with little to no additional manual dataset creation effort. We found that despite prior efforts to improve modeling for existing ToD datasets, the dialogue representation used as input to these models, e.g. full dialogue history in natural language Hosseini-Asl et al. (2020), is sub-optimal, especially when the training data is either scarce or created automatically using noisy machine translation. We propose a new _Distilled_ representation to fix the shortcomings of current representations. We also found the previously proposed entity-aware translation technique of Moradshahi et al. (2023) to be inadequate. Our proposed technique effectively combines entity-aware neural machine translation with text similarity classifiers to automatically create training data for a new language. This paper explains all the ingredients we found useful, and motivates their use by performing extensive ablation studies.
The contributions of this paper are:
1. _A new state-of-the-art result for the BiToD dataset in both few-shot and full-shot settings on English_ according to all of our 6 automatic metrics, including an improvement of 14.0% and 2.9%, respectively, in Dialogue Success Rate (DSR). In fact, using our Distilled representation, our few-shot model trained on only 10% of the training data, achieves similar results to the previous SOTA model trained on 100% training data.
2. _The first dialogue agent created in the zero-shot cross-lingual transfer setting_, i.e. starting from no training data in the target language. Our agent achieves 71%, 62%, 40%, and 47% of the performance of a full-shot agent in terms of Joint Goal Accuracy (JGA), Task Success Rate (TSR), DSR, and BLEU score, respectively.
3. _A concise dialogue representation designed for cross-lingual ToD agents_. The Distilled dialogue representation works well with our new decomposition of agent subtasks, making significant improvements possible.
4. _An improved methodology for automatic translation of ToD training data_. We adapt and improve an existing entity-aware machine translation system that localizes entities Moradshahi et al. (2023), extend it to agent response generation, and equip it with a filtering step that increases the quality of the resulting translations.
## 2 Related Work
### Multilingual Dialogue Datasets
MultiWOZ Budzianowski et al. (2018); Ramadan et al. (2018); Eric et al. (2019) and CrossWOZ Zhu et al. (2020) are two monolingual Wizard-Of-Oz dialogue datasets that cover several domains, suitable for building travel dialogue agents in English and Chinese respectively. For the 9th Dialog System Technology Challenge (DSTC-9) Gunasekara et al. (2020), they were translated to Chinese and English using Google Translate.
GlobalWOZ Ding et al. (2021), AllWOZ Zuo et al. (2021), and Multi2WOZ Hung et al. (2022) translate MultiWOZ to even more languages such as Spanish, Hindi, and Indonesian, with human translators post-editing machine translated dialogue templates, and filling them with newly collected local entities. Although manual post-editing improves data quality and ensures fluency, it also increases the cost and time to create new datasets, thus limiting scalability.
Different from these translation approaches, Lin et al. (2021) introduced BiToD, the first bilingual dataset for _end-to-end_ ToD modeling. BiToD uses a dialogue simulator to generate dialogues in 5 tourism domains in English and Chinese, then uses crowdsourcing to paraphrase entire dialogues to be more natural. Unlike WOZ-style datasets which usually suffer from poor annotation quality due to human errors Moradshahi et al. (2023), BiToD is automatically annotated during synthesis. Since neither manual nor machine translation is used in the creation of BiToD, it does not contain translationese Eetemadi and Toutanova (2014) or other artifacts of translated text Clark et al. (2020), and provides a realistic testbed for cross-lingual transfer of task-oriented dialogue agents.
### Multilingual Dialogue State Tracking
Mrksic et al. (2017) proposed using cross-lingual word embeddings for zero-shot cross-lingual transfer of DST models. With the advent of large language models, contextual embeddings obtained from pre-trained multilingual language models Devlin et al. (2018); Xue et al. (2021); Liu et al. (2020) have been used to enable cross-lingual transfer in many natural language tasks, including DST.
Chen et al. (2018) used knowledge distillation Hinton et al. (2015) to transfer DST capabilities from a teacher DST model in the source language to a student model in the target language.
Machine translation has been used for DST, both as a way of obtaining cross-lingual representations, and to translate training data. For instance, Schuster et al. (2019) used representations obtained from machine translation models and reported that it performs better than training with machine translated training data for single-turn commands. More advanced data translation approaches like the entity-aware method of Moradshahi et al. (2023) further improved the DST data quality achievable with machine translation.
## 3 Distilled ToD Agent
Our methodology includes a dialogue task decomposition and a Distilled dialogue representation that are tailored to cross-lingual ToD agents. In this section we describe these two components.
We follow the end-to-end task-oriented dialogue (ToD) setting Hosseini-Asl et al. (2020) where a user converses freely with an agent over several turns to accomplish his/her goal with all of its constraints (e.g. "book a restaurant that is rated at least 3."). In each turn, the agent must access its database if needed to find the requested information (e.g. find a restaurant that satisfies user constraints), decide on an action (e.g. to present the information to the user or to ask follow-up questions) and finally respond to the user in natural language based on the action it selects.
### Preliminaries
Formally, a _dialogue_\(D=\{U_{1},A_{1},...,U_{T},A_{T}\}\) is a set of alternating user utterances \(U_{t}\) and agent responses \(A_{t}\) for a number of turns \(T\).
A _belief state_ at turn \(t\), \(B_{t}\), consists of a list of \(\langle\textit{domain},\textit{intent}\rangle\) tuples and a set of \(\langle\textit{slot},\textit{relation},\textit{value}\rangle\) tuples. _Intent_ is the user intent, either search or book. _Relation_ is a comparison or membership operator. _Value_ can be one or more entity names or strings from the ontology, or a literal. To see all possible domains, slots and values please refer to Table 4 in Lin et al. (2021).
The _Levenshtein belief state_Lin et al. (2020) is the difference between belief states in consecutive turns, i.e. \(\Delta B_{t}=B_{t}-B_{t-1}\). It captures only the relations and values that have changed in the last user utterance, or tuples that have been added or removed.
An _Agent dialogue act_ at turn \(t\), \(C_{t}\), is a list of \(\langle\textit{domain},\textit{intent}\rangle\) tuples and a set of \(\langle\textit{dialogue\_act\_name},\textit{slot},\textit{value}\rangle\) tuples indicating the action the agent takes and the information offered to the user, if any.
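To make the belief-state bookkeeping concrete, here is a toy sketch (ours, not the BiToD or Dialogues library code) that represents a belief state as a set of slot-relation-value tuples and computes the Levenshtein belief state as a set difference; all names and values are illustrative.

```python
def levenshtein_state(prev_state: set, new_state: set):
    """Return (added, removed) tuples between consecutive belief states."""
    return new_state - prev_state, prev_state - new_state

B_prev = {("restaurants-search", "rating", "at_least", "3")}
B_curr = {("restaurants-search", "rating", "at_least", "3"),
          ("restaurants-search", "price_level", "equal_to", "cheap")}

added, removed = levenshtein_state(B_prev, B_curr)
print(added)    # only the constraint introduced in this turn
print(removed)  # constraints the user dropped or revised

# reconstructing B_t from B_{t-1} and delta_B_t, as in Equation (1)
B_reconstructed = (B_prev - removed) | added
assert B_reconstructed == B_curr
```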
### Task Decomposition
The task of dialogue agents is usually broken down to subtasks, which may be performed by a pipelined system Gao et al. (2018) or by a single neural network Hosseini-Asl et al. (2020); Lei et al. (2018). Here we describe our subtasks and their inputs and outputs (Figure 1).
After the user speaks at turn \(t\), the agent has access to the belief state up to the previous turn (\(B_{t-1}\)), the history of agent dialogue acts (\(C_{1},...,C_{t-1}\)), and the history of agent and user utterances so far (\(A_{1},...,A_{t-1}\) and \(U_{1},...,U_{t}\)). Our agent performs the following four subtasks:
1. _Dialogue State Tracking (DST)_: Generate \(\Delta B_{t}\), the Levenshtein belief state, for the current turn based on the previous belief state, the last two agent dialogue acts2, and the current user utterance. \(\Delta B_{t}\) is combined with \(B_{t-1}\) to produce the current belief state. Footnote 2: Our ablation study described in Section 6.1 justifies the use of the last two agent dialogue acts instead of just the last one. \[\begin{split}\Delta B_{t}=\mathrm{DST}(B_{t-1},C_{t-2},C_{t-1},U _{t})\\ B_{t}\gets B_{t-1}+\Delta B_{t}\end{split}\] (1)
2. _API Call Detection (ACD)_: Call an API to query the database, if needed. \[q_{t}=\mathrm{ACD}(B_{t},C_{t-2},C_{t-1},U_{t},R_{t-1})\] (2) \[\begin{split} R_{t}\ \leftarrow\ q_{t}?\ \mathrm{KB}(B_{t})\ : \ \emptyset\end{split}\] (3)
Figure 1: Inference-time flow diagram for our dialogue agent. DST, ACD, DAG, and RG share the same neural model. \(U\), \(A\), \(C\), \(B\), and \(R\) indicate user utterance, agent response, agent dialogue acts, dialogue state, and retrieved database results respectively. \(t\) is the turn number. \(\otimes\) indicates text concatenation. \(\oplus\) refers to the update rule in Equation 1.
In turn \(t\), ACD determines if an API call is necessary. If so, the result \(R_{t}\) is the top entity in the knowledge base KB, based on a deterministic ranking scheme, that matches the API call constraints in \(B_{t}\), and is empty otherwise. If no entities match the constraint, we set \(R_{t}\) to the special value NoResult.
3. _Dialogue Act Generation (DAG)_: Generate \(C_{t}\), the agent dialogue act for the current turn based on the current belief state, the last two agent dialogue acts, the user utterance, and the result from the API call. \[C_{t}=\mathrm{DAG}(B_{t},C_{t-2},C_{t-1},U_{t},R_{t})\] (4)
4. _Response Generation (RG)_: Convert the agent dialogue act \(C_{t}\) to the new agent utterance \(A_{t}\). Note that \(C_{t}\) contains all the necessary information for this subtask. However, providing \(U_{t}\) improves response fluency and choice of words, leading to a higher BLEU score, partly due to mirroring (Kale and Rastogi, 2020). \[A_{t}=\mathrm{RG}(U_{t},C_{t})\] (5)
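The per-turn control flow of these four subtasks (Figure 1) can be summarized in a short schematic sketch. This is our own illustration, not the paper's code: `seq2seq` stands in for the single fine-tuned mBART model with the task selected by a prefix token, and `kb_top_match` and `apply_delta` are hypothetical stand-ins for the database lookup and the state-update rule.

```python
def dialogue_turn(seq2seq, kb_top_match, apply_delta,
                  state, C_prev2, C_prev1, user_utterance):
    # 1. DST: predict the Levenshtein belief state, then update the full state (Eq. 1)
    delta = seq2seq("[DST]", state, C_prev2, C_prev1, user_utterance)
    state = apply_delta(state, delta)

    # 2. ACD: decide whether a knowledge-base call is needed (Eqs. 2-3)
    need_api = seq2seq("[ACD]", state, C_prev2, C_prev1, user_utterance)
    result = kb_top_match(state) if need_api else None

    # 3. DAG: produce the formal agent dialogue act (Eq. 4)
    act = seq2seq("[DAG]", state, C_prev2, C_prev1, user_utterance, result)

    # 4. RG: verbalize the act; only `act`, not the text, feeds later turns (Eq. 5)
    response = seq2seq("[RG]", user_utterance, act)
    return state, act, response
```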
### The Distilled Dialogue Representation
The design of Distilled is based on the following principles:
1. For cross-lingual agents, it is important to reduce the impact of translation errors. The representation should make minimal use of natural language by using a formal representation where possible.
2. Dialogues can get long, but the representation should be succinct, containing only the necessary information, so the neural network need not _learn_ to ignore unnecessary information from copious data. This improves data efficiency as well as the training and inference speed of neural models.
We note that BiToD's original representation (Lin et al., 2021) follows neither of these principles.3 It makes extended use of natural language: all previous user and agent natural language utterances are included in the input of all subtasks. It has many redundancies: for each subtask, it inputs the concatenation of all previous subtask's inputs and outputs. In the following, we highlight the changes we made to the (Lin et al., 2021) representation.
Footnote 3: We found this to be true for several previously-proposed popular representations of MultiWOZ as well (Lei et al., 2018; Chen et al., 2019).
Replace agent utterances with formal agent dialogue acts.Since agent responses are automatically generated, it is possible to capture all information useful to the different subtasks with formal agent dialogue acts. In this way, the neural network need not interpret previous natural language utterances.
We take two steps to generate the agent responses: DAG (Dialogue Act Generation) first produces the formal act, \(C_{t}\), which is then fed into RG (Response Generation) to generate the natural language response \(A_{t}\). Note that RG is not a part of the dialogue loop: the natural language \(A_{t}\) only serves to communicate to the user; it is the formal \(C_{t}\) from DAG that gets fed to subsequent subtasks instead. In contrast, Lin et al. (2021) generates the agent response directly from API results. Hosseini-Asl et al. (2020) also separates the response generation into two steps, but they use \(A_{t}\) instead of \(C_{t}\) as input to the semantic parser for the next turn.
Note that the agent dialogue acts are independent of the natural language used in the dialogues, if we ignore the entity values. This is beneficial to cross-lingual agents as it can learn easier from data available in other languages. Furthermore, DAG can be validated on whether the output dialogue acts match the gold answers exactly. This is not possible with natural language results, whose quality is typically estimated with BLEU score.
Shorten user utterance history.Since the belief state formally summarizes what the user has said, we remove previous user utterances \(U_{1},...,U_{t-1}\) from input to all subtasks, relying on the belief state \(B_{t-1}\) instead.
Untangle API call detection from response generation.After DST is done, depending on whether or not an API call is needed, Lin et al. (2021) either directly generates the agent response, or makes the API call and then generates the response in two steps. Our design is to always take two steps: (1) generate the API call _or indicate that there is none_, and (2) generate the agent response.
## 4 Automatic Dialogue Data Translation
Given a training dataset for one language, we automatically generate a training set in the target language we are interested in. This problem has been studied in the context of NLU for questions (Moradshahi et al., 2020; Sherborne et al., 2020; Li et al., 2021) and for dialogues (Moradshahi et al., 2023; Ding et al., 2021; Zuo et al., 2021). One challenge is that the translated dataset should refer to entities in the target language. Thus, Moradshahi et al. (2020) proposed to first use cross-attention weights of the neural translation model to align entities in the original and translated sentences, then replace entities in the translated sentences with local entities from a target language knowledge base. Our initial experiments showed that applying this approach directly to end-to-end dialogue datasets does not yield good performance, especially for response generation. Thus, we adapted and improved this approach for dialogues as discussed below.
### Alignment for Dialogues
First, we found that while translation with alignment works for NLU, it does not work well for RG. Machine translation introduces two kinds of error: (1) Translated sentences can be ungrammatical, incorrect, or introduce spurious information. (2) The alignment for entities may be erroneous, which can seriously hurt the factual correctness of the responses. As shown in Moradshahi et al. (2023), these errors are tolerable in NLU since (1) sentences are seen by machines, not shown to users, (2) pre-trained models like mBART are somewhat robust to noisy inputs, since they are pre-trained on perturbed data. However, training with such low-quality data is not acceptable for RG, since the learned responses are shown directly to the user.
Second, we found alignment recall to be particularly low for an important category: entities that are mostly quantitative. We observe that dates, times, and prices can be easily mapped between different languages using rules. We propose to first try to translate such entities with dictionaries such as those available in dateparser Scrapinghub (2015) and num2words faire Linux (2017), and to match them in the translated text. We resort to using neural alignment only if no such match is found.
### Filtering Translation Noise for RG
To reduce translation noise for RG, we automatically filter the translated data based on the semantic textual similarity between the source and translated sentences. For this purpose, we use LaBSE Feng et al. (2020), a multilingual neural sentence encoder based on multilingual BERT Devlin et al. (2018), trained on translation pairs in various languages with a loss function that encourages encoding pairs to similar vectors. To score a pair of sentences, the model first calculates an embedding for each sentence and computes the cosine distance between those vectors. The lower the distance is, the more semantically similar the sentences are, according to the model.
In creating the RG training set, we first translate the source agent utterances to the target language and use LaBSE to remove pairs whose similarity score is below a threshold. We found a threshold of 0.8 to work best empirically. Higher thresholds would inadvertently filter correctly translated utterances. We construct the final training data by pairing aligned translated utterances that pass the filter with their corresponding translated agent dialogue acts.
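The filtering step itself is only a thresholded similarity check. A minimal sketch of it follows (ours, not the paper's pipeline), using the public LaBSE checkpoint through the sentence-transformers package; the 0.8 threshold follows the text, while the example sentences and helper names are illustrative assumptions.

```python
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("sentence-transformers/LaBSE")

def keep_pair(src_utterance: str, translated_utterance: str, threshold: float = 0.8) -> bool:
    """Keep a (source, translation) pair only if the LaBSE cosine similarity is high enough."""
    emb = model.encode([src_utterance, translated_utterance])
    emb = emb / np.linalg.norm(emb, axis=1, keepdims=True)   # unit-normalize
    return float(emb[0] @ emb[1]) >= threshold               # cosine similarity

pairs = [
    ("请帮我找一家评分至少3分的餐厅。",
     "Please find me a restaurant rated at least 3."),
    ("请帮我找一家评分至少3分的餐厅。",
     "The weather is nice today."),                          # a mistranslation to be filtered
]
filtered = [p for p in pairs if keep_pair(*p)]
print(len(filtered), "of", len(pairs), "pairs kept")
```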
## 5 Experiment Setting
### Base Dataset
We perform our experiments on BiToD, a large-scale high-quality bilingual dataset created using the Machine-to-Machine (M2M) approach. It is a multi-domain dataset, including restaurants, hotels, attractions, metro, and weather domains. It has a total of 7,232 dialogues (3,689 dialogues in English and 3,543 dialogues in Chinese) with 144,798 utterances in total. The data is split into 5,787 dialogues for training, 542 for validation, and 902 for testing. The training data is from the same distribution as validation and test data.
### Implementation details
Our code is implemented in PyTorch Paszke et al. (2019) using GenieNLP Campagna et al. (2019) library for training and evaluation metrics. We also use the Dialogues4 library for data preprocessing and evaluation. We use pre-trained models available through HuggingFace's Transformers library Wolf et al. (2019). The following model names are from that library. We use _mbart-large-50_ as the neural model for our agent in all our experiments. All models use a standard Seq2Seq architecture with a bidirectional encoder and left-to-right autoregressive decoder. mBART is pre-trained to denoise text in 50 languages, while mT5 is trained on 101 languages. mBART uses sentence-piece Kudo and Richardson (2018) for tokenization.
Footnote 4: [https://github.com/stanford-oval/dialogues](https://github.com/stanford-oval/dialogues)
In each setting, all four subtasks of DST, API detection, dialogue act generation, and response generation are done in a single model, where we specify the task by prepending a special token to the input. We found mBART to be especially effective in zero-shot settings as the language of its outputs can be controlled by providing a language-specific token at the beginning of decoding. Additionally, its denoising pre-training objective improves its robustness to the remaining translation noise.
For translation, we use the publicly available _mbart-large-50-many-to-one-mmt_ (\(\sim\)611M parameters) model which can directly translate text from any of the 50 supported languages to English. It is an mBART model additionally fine-tuned to do translation. We use greedy decoding and train our models using teacher-forcing and token-level cross-entropy loss. We used Adam Kingma and Ba (2014) as our optimizer with a starting learning rate of \(2\times 10^{-5}\) and linear scheduling. These hyperparameters were chosen based on a limited hyperparameter search on the validation set. For the numbers reported in the paper, due to cost, we performed only a single run for each experiment.
Our models were trained on virtual machines with a single NVIDIA V100 (16GB memory) GPU on the AWS platform. For a fair comparison, all monolingual models were trained for the same number of iterations of 60K, and bilingual models for 120K. In the few-shot setting, we fine-tuned the model for 3K steps on 1% of the data and 6K steps on 10% of the data. Sentences are batched based on their input and approximate output token count for better GPU utilization. We set the total number of tokens per batch to 800 for mBART. Due to the verbosity and redundancy of the original BiToD representation, Lin et al. (2021) used a batch size of 1 example for training mbart-large. Using our Distilled representation, however, we can fit up to 6 examples in each batch and process each batch 3 times faster during training. Training and evaluating each model takes about 10 GPU-hours on average.
During error analysis, we noticed that although certain slots (max_temp and min_temp slots in Metro domain, and time and price_range slots in Weather domain) are present in the retrieved knowledge base values, the model does not learn to output them in the agent dialogue act generation subtask. This issue stems from BiToD's non-deterministic policy where an agent sometimes provides these slots and sometimes not in the gold training data. To mitigate this, during evaluation, we automatically check if these slots are present in the input and append them and their retrieved values to the generated agent dialogue acts.
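A rough sketch of this post-processing step is given below, assuming agent dialogue acts are held as (slot, value) pairs and the retrieved knowledge-base result as a dictionary; the slot list mirrors the cases named above, and all identifiers are illustrative.

```python
ALWAYS_PROVIDE = {"max_temp", "min_temp", "time", "price_range"}

def patch_agent_acts(agent_acts, kb_result):
    """Append retrieved values for slots the model tends to omit from its dialogue acts."""
    present_slots = {slot for slot, _ in agent_acts}
    for slot in (ALWAYS_PROVIDE & set(kb_result)) - present_slots:
        agent_acts.append((slot, kb_result[slot]))
    return agent_acts
```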
At inference time, we use the predicted belief state as input to subsequent turns instead of the ground truth. However, to prevent the conversation from diverging from its original direction, Lin et al. (2021) use the ground-truth natural-language agent response as input for the next turn. To keep the settings equivalent for a fair comparison, we use ground-truth agent acts as input for the next turn.
### Evaluation Metrics
We use the following metrics to compare different models. Scores are averaged over all turns unless specified otherwise.
* **Joint Goal Accuracy (JGA)**(Budzianowski et al., 2018): The standard metric for evaluating DST. JGA for a dialogue turn is 1 if all slot-relation-value triplets in the generated belief state match the gold annotation, and is 0 otherwise (a minimal sketch of this computation follows the list).
* **Task Success Rate (TSR)**(Lin et al., 2021): A task, defined as a pair of domain and intent, is completed successfully if the agent correctly provides all the user-requested information and satisfies the user's initial goal for that task. TSR is reported as an average over all tasks.
* **Dialogue Success Rate (DSR)**(Lin et al., 2021): DSR is 1 for a dialogue if all user requests are completed successfully, and 0 otherwise. DSR is reported as an average over all dialogues. We use this as the main metric to compare models, since the agent needs to complete all dialogue subtasks correctly to obtain a full score on DSR.
* **API**(Lin et al., 2021): For a dialogue turn, is 1 if the model correctly predicts to make an API call, and all the constraints provided for the call match the gold. It is 0 otherwise.
* **BLEU**(Papineni et al., 2002): Measures the natural language response fluency based on n-gram matching with the human-written gold response. BLEU is calculated at the corpus level.
* **Slot Error Rate (SER)**(Wen et al., 2015): It complements BLEU as it measures the factual correctness of natural language responses. For each turn, it is 1 if the response fails to contain all the entities present in the gold response, and is 0 otherwise; lower values are therefore better.
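As an illustration of the first metric, the following is a minimal sketch of turn-level JGA, assuming belief states are represented as collections of (slot, relation, value) triplets; the function and variable names are illustrative rather than taken from the evaluation library.

```python
def joint_goal_accuracy(predicted_states, gold_states):
    """Fraction of turns whose predicted belief state exactly matches the gold one."""
    assert len(predicted_states) == len(gold_states)
    correct = sum(
        1 for pred, gold in zip(predicted_states, gold_states)
        if set(pred) == set(gold)            # all (slot, relation, value) triplets must match
    )
    return correct / len(gold_states)
```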
## 6 Results and Discussion
We first show how our Distilled representation affects the performance of an agent in a full-shot setting. We then evaluate our proposed techniques on cross-lingual settings with varying amounts of available training data.
### Evaluation of Distilled Representation
To understand how our design of Distilled representation affects the performance of ToD agents in general, we train an English agent using all the English training data and perform an ablation study (Table 1). We observe that even though the Distilled representation removes a lot of natural language inputs, it improves the best previous English-only results on JGA, TSR, DSR, API, BLEU and SER by 7.6%, 6.5%, 5.9%, 8.4%, 4.1%, and 4.7%, respectively. This suggests that natural language utterances carry a lot of redundant information, and the verbosity may even hurt the performance. Note that the improvement in BLEU is also accompanied by an improvement of factuality measured by SER.
Furthermore, using the Distilled representation reduces training time by a factor of 3. See Section 5.2 for more details.
Generate full state.Our first ablation study confirms that the proposal by Lin et al. (2020) to predict the Levenshtein belief state (\(\Delta B_{t}\)) is indeed better than the cumulative state (\(B_{t}\)). Note that the training time per gradient step is more than twice as long in this ablation since the outputs are longer.
Natural agent response.Here we use natural language agent responses as input instead of agent dialogue acts, replacing \(C_{t-1},C_{t-2}\) with \(A_{t-1},A_{t-2}\). The drop in TSR and DSR shows this is an important design choice - distilling natural language into a concise formal representation improves the model's ability to understand the important information in the sentence.
Only last agent turn.When we remove \(C_{t-2}\) from the input and only use \(C_{t-1}\), we observe a drop across all metrics. This is because some turns in BiToD refer to the agent's states from two turns ago. We experimented with carrying three turns, but there was no improvement.
Previous user utterance as state.In this ablation, we use \(U_{t-1}\) instead of \(B_{t-1}\) as subtask inputs. Compared to all previous ablations, accuracy drastically decreases across all metrics, especially JGA. This is expected since the information from earlier turns present in the dialogue state is now lost. Additionally, it shows that the dataset is highly contextual and therefore a summary of the conversation history is necessary.
Remove state.We remove \(B_{t-1}\) without adding back the previous user utterance \(U_{t-1}\). Compared to the previous ablation, TSR and DSR drop by 10.5% and 5.2% respectively. This difference shows \(U_{t-1}\) does contain part of the information captured in \(B_{t-1}\).
### Evaluation of Cross-Lingual Transfer
The goal of this experiment is to create an agent in a _target_ language, given the full training data in a source language (\(\mathcal{D}_{\text{src}}\)) and a varying amount of training data in the target language (\(\mathcal{D}_{\text{tgt}}\)). We also assume that validation and test data are available in both source and target languages. We chose Chinese as the source language and English as the target language so that we can perform error analysis and so that the model outputs are understandable to a wider audience.
#### 6.2.1 Varying Target Training Data
Full-Shot.In the full-shot experiments, all of \(\mathcal{D}_{\text{tgt}}\) is available for training. We train two models on two data sets: (1) on a shuffled mix of \(\mathcal{D}_{\text{src}}\) and \(\mathcal{D}_{\text{tgt}}\). (2) on \(\mathcal{D}_{\text{tgt}}\) alone. The ablation "\(-\)_Mixed_" in Table 2 refers to the latter.
Zero-Shot.In our zero-shot experiments, we train with a canonicalized \(\mathcal{D}_{\text{src}}\) and an automatically translated data set, as explained below.
\begin{table}
\begin{tabular}{l|c|c|c|c|c|c} \hline Representation & JGA \(\uparrow\) & TSR \(\uparrow\) & DSR \(\uparrow\) & API \(\uparrow\) & BLEU \(\uparrow\) & SER \(\downarrow\) \\ \hline Original (Lin et al., 2021) & 69.19 & 69.13 & 47.51 & 67.92 & 38.48 & 14.93 \\ Distilled (Ours) & **76.79** & **75.64** & **53.39** & **76.33** & **42.54** & **10.61** \\ \(\bullet\) Generate full state & 74.30 & 74.19 & 50.90 & 73.93 & 41.90 & 11.38 \\ \(\bullet\) Natural agent response & 75.62 & 73.41 & 49.10 & 73.93 & 40.94 & 11.90 \\ \(\bullet\) Only last agent turn & 73.97 & 74.19 & 52.71 & 74.27 & 41.83 & 11.81 \\ \(\bullet\) Prev. user utterance as state & 71.75 & 61.66 & 33.94 & 67.67 & 39.72 & 15.97 \\ \(\bullet\) Remove state & 70.84 & 51.89 & 24.43 & 66.47 & 37.10 & 19.61 \\ \hline \end{tabular}
\end{table}
Table 1: Full-shot English monolingual training with ablation. All results are reported on the English test set of BiToD using the same evaluation script. The best result is in bold.
_Canonicalization_: To increase transfer learning from the source to the target language, we use the same canonical formal representation across languages Moradshahi et al. (2020); Razumovskaia et al. (2021). To do so, we adapt \(\mathcal{D}_{\text{src}}\) so that the domain names, slot names, agent dialogue acts, and API names in the formal representation are the same as in the target language. Note that the user utterance, agent response, and slot values will remain in the source language. The BiToD dataset has a one-to-one mapping for most of the above, and we added the missing items.
_Translation_: We use machine translation to convert the user and agent utterances and slot values in \(\mathcal{D}_{\text{src}}\) to create a training set for the target language.
_Alignment_: After translating the data, we use alignment (Section 4) to localize entities while ensuring the entities in translated utterances still match the values specified in annotations.
_Filtering_: We use the filtering procedure described in Section 4.2 to remove turns where agent responses are deemed to have low translation quality.
In Table 2, _Ours_ refers to our main approach, which combines all four techniques. Each ablation incrementally takes away one of the techniques.
Few-Shot.In the few-shot setting, we start with our pre-trained zero-shot models (with various ablations) and further fine-tune them on 1% and 10% of \(\mathcal{D}_{\text{tgt}}\), which comprise 29 and 284 dialogues, respectively. Lin et al. (2021) reported results only for the 10% setting; we use their few-shot data split in that case to be directly comparable. We add one more ablation study where we eliminate cross-lingual transfer by training a model only on the few-shot data (Few-shot Only).
#### 6.2.2 Baseline
We compare our results to the best previously reported result on BiToD from Lin et al. (2021). This SOTA result was obtained using MinTL Lin et al. (2020) and using a single mT5-small model to
\begin{table}
\begin{tabular}{l|c|c|c|c|c|c} \hline Setting & JGA \(\uparrow\) & TSR \(\uparrow\) & DSR \(\uparrow\) & API \(\uparrow\) & BLEU \(\uparrow\) & SER \(\downarrow\) \\ \hline \multicolumn{7}{c}{Full-Shot} & \multicolumn{5}{c}{} \\ \hline MinTL(mT5) & 72.16 & 71.18 & 51.13 & 71.87 & 40.71 & 13.75 \\ – Mixed & 69.19 & 69.13 & 47.51 & 67.92 & 38.48 & 14.93 \\ \hline MinTL(mBART) & 69.37 & 42.45 & 17.87 & 65.35 & 28.76 & – \\ – Mixed & 67.36 & 56.00 & 33.71 & 57.03 & 35.34 & – \\ \hline Ours & **77.52** & 75.04 & **54.07** & 74.44 & 41.46 & 11.17 \\ – Mixed & 76.79 & **75.64** & 53.39 & **76.33** & **42.54** & **10.61** \\ \hline \multicolumn{7}{c}{Zero-Shot} & \multicolumn{5}{c}{} \\ \hline Ours & **55.33** & **46.74** & **21.95** & **63.04** & **20.01** & **20.52** \\ – Filtering & 54.83 & 45.03 & 19.68 & 60.81 & 19.11 & 20.86 \\ – Alignment & 47.21 & 4.72 & 1.13 & 52.74 & 8.26 & 39.20 \\ – Translation & 14.73 & 3.52 & 1.58 & 6.26 & 0.69 & 41.30 \\ – Canonicalization & 2.13 & 1.20 & 0.00 & 0.26 & 0.25 & 42.39 \\ \hline \multicolumn{7}{c}{Few-Shot (1%)} & \multicolumn{5}{c}{} \\ \hline Ours & **64.60** & **57.89** & **34.16** & **62.09** & **28.15** & **17.94** \\ – Filtering & 63.88 & 57.80 & 32.35 & 59.95 & 28.00 & 18.57 \\ – Alignment & 58.86 & 51.89 & 23.76 & 57.12 & 26.84 & 21.56 \\ – Translation & 49.58 & 41.34 & 19.68 & 46.05 & 22.73 & 24.86 \\ – Canonicalization & 44.56 & 42.97 & 20.36 & 46.23 & 23.08 & 24.77 \\ Few-shot Only & 25.08 & 24.61 & 11.09 & 23.67 & 18.71 & 32.62 \\ \hline \multicolumn{7}{c}{Few-Shot (10%)} & \multicolumn{5}{c}{} \\ \hline MinTL(mT5) & 58.85 & 56.43 & 34.16 & 57.54 & 31.20 & – \\ – Translation & 48.77 & 44.94 & 24.66 & 47.60 & 29.53 & 19.75 \\ Few-shot Only & 19.86 & 6.78 & 1.36 & 17.75 & 10.35 & – \\ \hline MinTL(mBART) & 37.50 & 21.61 & 10.18 & 27.44 & 17.86 & – \\ – Translation & 42.84 & 36.19 & 16.06 & 41.51 & 22.50 & – \\ Few-shot Only & 4.64 & 1.11 & 0.23 & 0.60 & 3.17 & – \\ \hline Ours & **72.70** & **71.61** & **48.19** & **72.56** & **36.02** & **12.71** \\ – Filtering & 72.45 & 69.55 & 44.57 & 69.55 & 34.67 & 13.62 \\ – Alignment & 68.40 & 63.38 & 38.24 & 63.38 & 32.99 & 16.63 \\ – Translation & 67.13 & 63.12 & 41.40 & 63.64 & 32.86 & 16.40 \\ – Canonicalization & 64.51 & 63.64 & 40.27 & 62.69 & 32.71 & 16.63 \\ Few-shot Only & 57.18 & 54.80 & 28.73 & 55.66 & 29.61 & 19.66 \\ \hline \end{tabular}
\end{table}
Table 2: All results are reported on the original English test set of BiToD using the same evaluation script. The best result in each section is in bold. Each “\(-\)” removes one additional component from the previous row. All MinTL results are from Lin et al. (2021). SER numbers are not available for some models. An upward arrow is shown for columns where bigger numbers are better, and vice versa.
perform all dialogue subtasks.
Contrary to what Lin et al. (2021) reported, we found that the mBART-large model outperforms mT5-small in all settings. We have included all the results, including MinTL(mBART), in Table 2 for comparison.
#### 6.2.3 Results
The results for our cross-lingual experiment are reported in Table 2. Overall, in the full-shot setting, when training on both source and target language data, we improve the SOTA in JGA by 5.3%, TSR by 3.8%, DSR by 2.9%, API by 2.6%, BLEU by 0.8%, and SER by 2.6%.
Our zero-shot agent achieves 71%, 62%, 40%, and 47% of the performance of a full-shot agent in terms of JGA, TSR, DSR, and BLEU score, respectively. In the 10% few-shot setting, our approach establishes a new SOTA by increasing JGA, TSR, DSR, API, and BLEU absolutely by 13.9%, 15.2%, 14.0%, 15.0%, and 4.8% respectively. Prominently, training with just 10% of the data beats the full-shot baseline which is trained on 100% of the training data, on all metrics except for DSR and BLEU. It also comes within 5% of full training using the Distilled representation on all metrics.
_Our Distilled representation improves the performance, especially in few-shot_. Comparing our results with those of Lin et al. (2021), in the full-shot monolingual setting (MinTL(mT5) "\(-\)Mixed" vs. Ours "\(-\)Mixed"), models trained on data with our representation outperform the baseline on all metrics. In the pure few-shot (10%) setting, Ours outperforms MinTL(mT5) significantly on all metrics. This suggests that our Distilled representation and task decomposition are much more effective in low-data settings.
_Canonicalization is useful._ Comparing "\(-\)Translation" with "\(-\)Canonicalization", training on canonicalized data significantly improves the results in the zero-shot setting. This is intuitive since canonicalization brings training data closer in vocabulary to the test data in the target language. This improvement comes at almost no cost since translation is done automatically using a dictionary.
_Automatic naive translation of the training set does not work for zero-shot._ The naive translation approach (i.e. without alignment) completely fails in the zero-shot setting by achieving only 4.7% in TSR, and 1.1% in DSR, as translated entities might no longer match with ones in the annotation. Adding few-shot data helps significantly as the gap closes between "\(-\)Alignment" and "\(-\)Translation" ablations.
_Alignment improves translation quality in all settings and metrics._ With alignment, the translation approach performs much better in all settings, establishing a new state-of-the-art in zero- and few-shot settings according to almost all metrics. As a general trend, the lower-data settings benefit more from alignment. We additionally performed an experiment using the alignment proposed by Moradshahi et al. (2023). There is a 4.0% drop in TSR and 4.5% in DSR, confirming the benefit of our improved alignment.
_Filtering noise for RG improves fluency._ We perform an ablation by training separate models on filtered and unfiltered translated agent utterances. The filtering process is described in Section 4.2. In the 10% few-shot setting, both BLEU and SER improve by 1.4%, confirming that automatically removing poor translations from the training data improves the agent response quality. Interestingly, we observe an increase in other metrics too. Since model parameters are shared between all subtasks, enhancing the data quality for one subtask will have a positive impact on the others as well.
## 7 Conclusion
This paper shows how to build a dialogue agent in a new language automatically, given a dialogue dataset in another language, by using entity-aware machine translation and our new Distilled dialogue representation. The performance can be further improved if a few training examples in the target language are available, and we show that our approach outperforms existing ones in this setting as well.
On the BiToD dataset, our method achieves 3.9% and 2.9% improvement in TSR and DSR, respectively, over the previous SOTA in full-shot setting, and 15.2% and 14.0% in a 10% few-shot setting, showing the effectiveness of our approach. More importantly, training on translated data and only 10% of original training data comes within 5% of full training.
We have implemented our methodology as a toolkit for developing multilingual dialogue agents, which we have released open-source. Our proposed methodology can significantly reduce the cost and time associated with data acquisition for task-oriented dialogue agents in new languages.
## 8 Limitations
As discussed in Section 2.1, organic (i.e. without the use of translation) multilingual dialogue datasets are scarce, which has limited the scope of our experiments. Our guidelines to improve dialogue representation mentioned in Section 4 are general and applicable to any Human-to-Human or Machine-to-Machine dialogues annotated with slot-values. We have yet to evaluate the generalization of our cross-lingual approach across different languages and datasets, and to Human-to-Human dialogues. For instance, we use a Chinese to English translator in this work. Available translation models for low-resource languages have much lower quality, and this will likely lower the performance of this approach.
Another limitation is the lack of human evaluation for agent responses. BLEU score does not correlate well with human judgment, and SER only accounts for the factuality of the response but not the grammaticality or fluency. This problem is also reported in prior works (see Section 5). Although finding native speaker evaluators for different languages is a challenge Pavlick et al. (2014), in the future, we wish to address this by conducting human evaluations.
## 9 Ethical Considerations
We do not foresee any harmful or malicious misuses of the technology developed in this work. The data used to train models concerns seeking information about domains like restaurants, hotels, and tourist attractions; it does not contain any offensive content and is not unfair or biased against any demographic. This work does focus on two widely spoken languages, English and Chinese, but we think the cross-lingual approach we proposed can improve future dialogue language technologies for a wider range of languages.
We fine-tune multiple medium-sized (several hundred million parameters) neural networks for our experiments. We took several measures to avoid wasted computation, like performing one run instead of averaging multiple runs (since the numerical difference between different models is large enough) and improving batching and the representation, which increased training speed and reduced the needed GPU time. Please refer to Section 5.2 for more details about the amount of computation used in this paper.
## Acknowledgements
This work is supported in part by the National Science Foundation under Grant No. 1900638, the Alfred P. Sloan Foundation under Grant No. G-2020-13938, Microsoft, Stanford HAI and the Verdant Foundation.
|
2310.01269
|
Nonlinear expansions in reproducing kernel Hilbert spaces
|
We introduce an expansion scheme in reproducing kernel Hilbert spaces, which
as a special case covers the celebrated Blaschke unwinding series expansion for
analytic functions. The expansion scheme is further generalized to cover Hardy
spaces $H^p$, $1<p<\infty$, viewed as Banach spaces of analytic functions with
bounded evaluation functionals. In this setting a dichotomy is more
transparent: depending on the multipliers used, the expansion of $f \in H^p$
converges either to $f$ in $H^p$-norm or to its projection onto a model space
generated by the corresponding multipliers. Some explicit instances of the
general expansion scheme, which are not covered by the previously known
methods, are also discussed.
|
Javad Mashreghi, William Verreault
|
2023-10-02T15:13:24Z
|
http://arxiv.org/abs/2310.01269v1
|
# Nonlinear expansions in reproducing kernel Hilbert spaces
###### Abstract.
We introduce an expansion scheme in reproducing kernel Hilbert spaces, which as a special case covers the celebrated Blaschke unwinding series expansion for analytic functions. The expansion scheme is further generalized to cover Hardy spaces \(H^{p}\), \(1<p<\infty\), viewed as Banach spaces of analytic functions with bounded evaluation functionals. In this setting a dichotomy is more transparent: depending on the multipliers used, the expansion of \(f\in H^{p}\) converges either to \(f\) in \(H^{p}\)-norm or to its projection onto a model space generated by the corresponding multipliers. Some explicit instances of the general expansion scheme, which are not covered by the previously known methods, are also discussed.
Key words and phrases:Toeplitz operators, oscillatory expansion, Blaschke product, model spaces 2020 Mathematics Subject Classification: 30H10, 42C15, 30J10, 30B50, 42C40
## 1. Introduction
An entire function \(f\) has the Taylor series expansion
\[f(z)=\sum_{n=0}^{\infty}a_{n}z^{n}, \tag{1.1}\]
which converges for all values of \(z\in\mathbb{C}\). We usually express \(a_{n}\) either by the Brook Taylor formula (1715)
\[a_{n}=\frac{f^{(n)}(0)}{n!},\]
or by the Cauchy integral formula (1875)
\[a_{n}=\frac{1}{2\pi i}\int_{\Gamma}\frac{f(z)}{z^{n+1}}\,dz.\]
In the mid-1990s, R. Coifman had a revolutionary interpretation of the expansion (1.1). The idea was developed in further depth in the doctoral thesis of M. Nahon [10].
Briefly speaking, their strategy is as follows. We may write \(f(z)\) as
\[f(z)=f(0)+\big{(}f(z)-f(0)\big{)}. \tag{1.2}\]
The first term on the right side is the constant \(f(0)=a_{0}\). The second term is a function that vanishes at the origin. Hence, we can write \(f(z)-f(0)=zf_{1}(z)\), where \(f_{1}\) is another holomorphic function. Thus, we have extracted a zero of \(f(z)-f(0)\) as the multiplicative factor \(z\), and then we deal with the new function \(f_{1}\) by iterating the above procedure. More explicitly, we write
\[f_{1}(z)=f_{1}(0)+\big{(}f_{1}(z)-f_{1}(0)\big{)}, \tag{1.3}\]
and \(f_{1}(z)-f_{1}(0)=zf_{2}(z)\) for yet another holomorphic function \(f_{2}\). Plugging back (1.3) into (1.2), we see that
\[f(z)=f(0)+f_{1}(0)z+z^{2}f_{2}(z),\]
which at the same times shows \(f_{1}(0)=a_{1}\). If we continue this procedure infinitely many times, we obtain
\[f(z)=f(0)+f_{1}(0)z+f_{2}(0)z^{2}+\cdots, \tag{1.4}\]
which is a new interpretation of the Taylor series expansion (1.1). The novel vision of the Coifman school was to extract _more_ zeros in each step via finite Blaschke products. In the above procedure, instead of factoring out the zero at the origin, we may factor out all the zeros in the open unit disc \(\mathbb{D}\). Since \(f\) is entire, it has finitely many zeros in \(\mathbb{D}\), and thus an appropriate apparatus to extract zeros in \(\mathbb{D}\) are finite Blaschke products. Given \(\alpha_{1},\dots,\alpha_{N}\in\mathbb{D}\), repetition allowed, the rational function
\[B(z)=\prod_{n=1}^{N}\frac{\alpha_{n}-z}{1-\bar{\alpha}_{n}z}\]
is called a _finite Blaschke product_. For a treatment of this topic, we refer to the monographs [6, 9]. After the initial step (1.2), we factor \(f(z)-f(0)\) as
\[f(z)-f(0)=B_{1}(z)f_{1}(z),\]
where \(f_{1}\) is analytic on \(\overline{\mathbb{D}}\) and has no roots in \(\mathbb{D}\). All roots of \(f(z)-f(0)\) in \(\mathbb{D}\), including the root at the origin, and counting their multiplicities, are gathered in \(B_{1}\). Hence, we can write
\[f(z)=f(0)+B_{1}(z)f_{1}(z).\]
Since \(f_{1}\) is a holomorphic function on \(\overline{\mathbb{D}}\), we can iterate the above procedure with \(f_{1}(z)-f_{1}(0)\). The outcome, after the second step, is
\[f(z)=f(0)+f_{1}(0)B_{1}(z)+B_{1}(z)B_{2}(z)f_{2}(z).\]
If we reiterate infinitely many times, we obtain the expansion
\[f(z)=f(0)+f_{1}(0)B_{1}(z)+f_{2}(0)B_{1}(z)B_{2}(z)+\cdots, \tag{1.5}\]
which is known as the _Blaschke unwinding series expansion_ of \(f\).
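For readers who wish to experiment with this procedure numerically, the following is a minimal Python/NumPy sketch for a polynomial \(f\) (an assumption that keeps every step exact up to root-finding error); it is purely illustrative and makes no claim of numerical robustness.

```python
import numpy as np

def unwind_step(f, tol=1e-9):
    """One unwinding step f = f(0) + B1 * f1, where B1 is the finite Blaschke product
    built from the roots of f - f(0) in the open unit disc. Returns (f(0), roots, f1)."""
    c0 = f(0)
    g = f - c0                                        # vanishes at the origin, so has roots in the disc
    roots_in_disc = [r for r in g.roots if abs(r) < 1 - tol]
    f1 = g
    for r in roots_in_disc:
        quotient, _ = np.polydiv(f1.coeffs, np.array([-1.0, r]))            # divide by (r - z)
        f1 = np.poly1d(np.polymul(quotient, np.array([-np.conj(r), 1.0])))  # multiply by (1 - conj(r) z)
    return c0, roots_in_disc, f1

def unwinding_coefficients(f, n_terms):
    """The scalars f(0), f_1(0), f_2(0), ... appearing in the unwinding series (1.5)."""
    coefficients, blaschke_zero_sets = [], []
    for _ in range(n_terms):
        c0, zeros, f = unwind_step(f)
        coefficients.append(c0)
        blaschke_zero_sets.append(zeros)
    return coefficients, blaschke_zero_sets

f = np.poly1d([1.0, -0.3, 0.5, 2.0])                  # illustrative polynomial z^3 - 0.3 z^2 + 0.5 z + 2
print(unwinding_coefficients(f, n_terms=3)[0])
```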
The convergence of the Blaschke unwinding series was a major question for a long period. In fact, some of Nahon's numerical experiments suggested that, for functions in \(H^{2}\), the unwinding expansion converges to the original function in mean. However, this fact was only firmly proved later by T. Qian [12]. This question was again studied by Coifman and Steinerberger in [2], where they obtained more general results for convergence in weighted subspaces of \(H^{2}\). Eventually, Coifman and Peyriere [1] proved the convergence of the inner-outer unwinding series for functions in the Hardy spaces \(H^{p}\).
In parallel and independently, Qian and collaborators came up with a similar process to obtain a nonlinear phase unwinding of holomorphic functions, which they called _adaptive Fourier decomposition_[12, 14, 15]. Variants of this type of unwinding have been further investigated in several papers, with a strong emphasis on the applications. Their idea is to start with the Takenaka-Malmquist-Walsh basis. Then, to decompose a function \(f\in H^{2}\) with respect to this basis, they use an adaptive algorithm akin to a greedy algorithm which relies on the existence of a point in \(\mathbb{D}\) that minimizes the distance from \(f\) to the partial sums of its unwinding series.
In this work, we provide a general expansion scheme, which can be viewed as a generalization of the Blaschke unwinding series expansion (1.5). We perform this in two contexts. First, in Section 2, we present a scheme followed by two convergence results in Theorems 2.7 and 2.9, in the general setting of reproducing kernel Hilbert spaces (RKHS). To some extent, these results can be extended to reproducing kernel Banach spaces. However, since in this note, our goal is to extend the above approach to Hardy spaces, we devoted the subsequent section to a more detailed study of \(H^{p}\) spaces. More explicitly, the construction is as follows. We consider a sequence \((b_{n})_{n\geq 1}\) of elements in the closed unit ball of \(H^{\infty}\) (not necessarily Blaschke products, or even inner functions). Then using the co-analytic Toeplitz operator \(T_{\bar{b}_{k}}\) and the closely related operators \(Q_{b_{k}}\), we introduce the expansion
\[f=Q_{b_{1}}f+b_{1}\cdot Q_{b_{2}}T_{\bar{b}_{1}}f+b_{1}b_{2}\cdot Q_{b_{3}}T_{ \bar{b}_{1}\bar{b}_{2}}f+b_{1}b_{2}b_{3}\cdot Q_{b_{4}}T_{\bar{b}_{1}\bar{b}_{ 2}\bar{b}_{3}}f+\cdots\]
for an arbitrary element \(f\in H^{p}\). The procedure is explained in Section 3 and the convergence problem is addressed in Theorems 3.8 and 3.9. In Section 4, we show that as a very special case, Theorem 3.8 leads to the Taylor series expansion. In Section 5, we show that when the \(b_{k}\)s are Blaschke factors, the general expansion scheme gives the previously known expansions which were described above, namely the Blaschke unwinding and the adaptive Fourier decomposition. In Section 6, as a
prototypical example, we study a special expansion created by outer functions, which was not possible with previously known expansions.
## 2. The nonlinear expansion in RKHS
Let \(\mathcal{H}\) be an RKHS on \(X\) with the multiplier algebra \(\mathcal{M}(\mathcal{H})\). For a thorough treatment of RKHS, see [11]. Let \(\phi\in\mathcal{M}(\mathcal{H})\) and define
\[P_{\phi}:=M_{\phi}M_{\phi}^{*}\quad\text{and}\quad Q_{\phi}:=I-M_{\phi}M_{\phi} ^{*}, \tag{2.1}\]
where \(M_{\phi}(f)=\phi f\) is the multiplication operator on \(\mathcal{H}\). While \(P_{\phi}\) and \(Q_{\phi}\) are certainly bounded operators, in general, they are not necessarily projections (idempotent). As a matter of fact, it is easy to see that they are projections if and only if
\[M_{\phi}^{*}(\phi k_{x})=k_{x},\qquad x\in X,\]
where \(k_{x}\) is the reproducing kernel of \(\mathcal{H}\).
The scheme is as follows. Trivially \(Q_{\phi}+P_{\phi}=I\). Hence, we have
\[g=Q_{\phi}g+P_{\phi}g,\qquad g\in\mathcal{H},\]
or equivalently
\[g=Q_{\phi}g+\phi M_{\phi}^{*}g,\qquad g\in\mathcal{H}. \tag{2.2}\]
This elementary observation is actually the building block in generalizing the Blaschke unwinding series expansion (1.5).
To continue, let \((\phi_{n})_{n\geq 1}\) be a sequence of elements in the closed unit ball of \(\mathcal{M}(\mathcal{H})\). Fix \(f\in\mathcal{H}\). Then, by (2.2) with \(\phi=\phi_{1}\),
\[f=Q_{\phi_{1}}f+\phi_{1}M_{\phi_{1}}^{*}f. \tag{2.3}\]
Then we apply (2.2) with \(\phi=\phi_{2}\) to \(g=M_{\phi_{1}}^{*}f\) to obtain
\[M_{\phi_{1}}^{*}f=Q_{\phi_{2}}M_{\phi_{1}}^{*}f+\phi_{2}M_{\phi_{2}}^{*}M_{ \phi_{1}}^{*}f. \tag{2.4}\]
If we plug (2.4) back into (2.3), it gives
\[f=Q_{\phi_{1}}f+\phi_{1}Q_{\phi_{2}}M_{\phi_{1}}^{*}f+\phi_{1}\phi_{2}M_{\phi_{ 1}\phi_{2}}^{*}f. \tag{2.5}\]
Note that we implicitly used \(M_{\phi_{2}}^{*}M_{\phi_{1}}^{*}=M_{\phi_{1}\phi_{2}}^{*}\). We continue and apply again (2.2) with \(\phi=\phi_{3}\) and \(g=M_{\phi_{1}\phi_{2}}^{*}f\) and plug it back into (2.5) to obtain
\[f=Q_{\phi_{1}}f+\phi_{1}Q_{\phi_{2}}M_{\phi_{1}}^{*}f+\phi_{1}\phi_{2}Q_{\phi_ {3}}M_{\phi_{1}\phi_{2}}^{*}f+\phi_{1}\phi_{2}\phi_{3}M_{\phi_{1}\phi_{2}\phi_ {3}}^{*}f. \tag{2.6}\]
By induction, we can continue this procedure as many times as we wish. The general convergence theorem is as follows.
**Theorem 2.7**.: _Let \(\mathcal{H}\) be an RKHS on \(X\) with the multiplier algebra \(\mathcal{M}(\mathcal{H})\). Let \((\phi_{n})_{n\geq 1}\) be a sequence of elements in the closed unit ball of \(\mathcal{M}(\mathcal{H})\). Write \(\Phi_{0}=1\) and_
\[\Phi_{n}:=\phi_{1}\phi_{2}\cdots\phi_{n},\qquad n\geq 1.\]
_Assume that_
\[\lim_{n\to\infty}\Phi_{n}(x)=0,\qquad x\in X.\]
_Then, for each \(f\in\mathcal{H}\),_
\[f=\sum_{n=1}^{\infty}\Phi_{n-1}\cdot Q_{\phi_{n}}M^{*}_{\Phi_{n-1}}f,\]
_where the series converges in \(\mathcal{H}\)._
Proof.: By induction, the general formula for (2.5) and (2.6) is
\[f=\sum_{n=1}^{N}\Phi_{n-1}Q_{\phi_{n}}M^{*}_{\Phi_{n-1}}f+\Phi_{N}M^{*}_{\Phi_{ N}}f.\]
Since
\[\|\Phi_{N}M^{*}_{\Phi_{N}}f\|_{\mathcal{H}}\leq\|\Phi_{N}\|_{\mathcal{M}( \mathcal{H})}\|M^{*}_{\Phi_{N}}f\|_{\mathcal{H}}\leq\|M^{*}_{\Phi_{N}}f\|_{ \mathcal{H}},\]
it is enough to show that
\[\|M^{*}_{\Phi_{N}}f\|_{\mathcal{H}}\to 0\]
as \(N\to\infty\).
Recall that
\[M^{*}_{\Phi_{N}}k_{x}=\overline{\Phi_{N}(x)}\,k_{x},\qquad x\in X.\]
Hence,
\[\|M^{*}_{\Phi_{N}}k_{x}\|_{\mathcal{H}}=|\Phi_{N}(x)|\,\|k_{x}\|_{\mathcal{H}}.\]
Therefore, according to our main assumption,
\[\lim_{N\to\infty}\|M^{*}_{\Phi_{N}}k_{x}\|_{\mathcal{H}}=0,\qquad x\in X. \tag{2.8}\]
For a general \(f\in\mathcal{H}\), we use two properties of reproducing kernels and multipliers of \(\mathcal{H}\). First, the linear span of the kernel functions is dense in \(\mathcal{H}\). Second, the operators \(M^{*}_{\Phi_{N}}\) are uniformly bounded. More explicitly, given \(f\in\mathcal{H}\) and \(\varepsilon>0\), there are constants \(\alpha_{1},\ldots,\alpha_{m}\in\mathbb{C}\) and points \(x_{1},\ldots,x_{m}\in X\) such that
\[\|f-(\alpha_{1}k_{x_{1}}+\cdots+\alpha_{m}k_{x_{m}})\|_{\mathcal{H}}<\varepsilon.\]
Hence,
\[\|M^{*}_{\Phi_{N}}f\|_{\mathcal{H}} \leq \|M^{*}_{\Phi_{N}}[f-(\alpha_{1}k_{x_{1}}+\cdots+\alpha_{m}k_{x_{m} })]\|_{\mathcal{H}}\] \[+ \|M^{*}_{\Phi_{N}}(\alpha_{1}k_{x_{1}}+\cdots+\alpha_{m}k_{x_{m}} )\|_{\mathcal{H}}\] \[\leq \|\Phi_{N}\|_{\mathcal{M}(\mathcal{H})}\|f-(\alpha_{1}k_{x_{1}}+ \cdots+\alpha_{m}k_{x_{m}})\|_{\mathcal{H}}\] \[+ |\alpha_{1}|\,\|M^{*}_{\Phi_{N}}k_{x_{1}}\|_{\mathcal{H}}+\cdots+ |\alpha_{m}|\,\|M^{*}_{\Phi_{N}}k_{x_{m}}\|_{\mathcal{H}}\] \[\leq \varepsilon+|\alpha_{1}|\,\|M^{*}_{\Phi_{N}}k_{x_{1}}\|_{ \mathcal{H}}+\cdots+|\alpha_{m}|\,\|M^{*}_{\Phi_{N}}k_{x_{m}}\|_{\mathcal{H}}.\]
By (2.8), we see that
\[\limsup_{N\to\infty}\|M^{*}_{\Phi_{N}}f\|_{\mathcal{H}}\leq\varepsilon.\]
Since \(\varepsilon>0\) is arbitrary, the result follows.
In Theorem 2.7, we assume that the multipliers are arranged so that
\[\lim_{n\to\infty}\Phi_{n}(x)=0,\qquad x\in X.\]
In general, this is not necessarily the case. In fact, quite often we end up with
\[\lim_{n\to\infty}\Phi_{n}(x)=\Phi(x),\qquad x\in X,\]
where \(\Phi\) is a non-zero multiplier of \(\mathcal{H}\). In this case, an extra term appears which is linked to the model spaces (see next section). In the general setting, the result is as follows.
**Theorem 2.9**.: _Let \(\mathcal{H}\) be an RKHS on \(X\) with the multiplier algebra \(\mathcal{M}(\mathcal{H})\). Let \((\phi_{n})_{n\geq 1}\) be a sequence of elements in the closed unit ball of \(\mathcal{M}(\mathcal{H})\). Write \(\Phi_{0}=1\) and_
\[\Phi_{n}:=\phi_{1}\phi_{2}\cdots\phi_{n},\qquad n\geq 1.\]
_Assume that there is a multiplier \(\Phi\in\mathcal{M}(\mathcal{H})\) such that_
\[\lim_{n\to\infty}\Phi_{n}(x)=\Phi(x),\qquad x\in X.\]
_Then, for each \(f\in\mathcal{H}\),_
\[f=\Phi M^{*}_{\Phi}f+\sum_{n=1}^{\infty}\Phi_{n-1}\cdot Q_{\phi_{n}}M^{*}_{ \Phi_{n-1}}f,\]
_where the series converges in \(\mathcal{H}\)._
## 3. The expansion in Hardy spaces
The Hardy spaces \(H^{p}\), \(1<p<\infty\), \(p\neq 2\), are not Hilbert spaces. However, with appropriate adjustments, the expansions discussed in Theorems 2.7 and 2.9 can be extended to this setting. For this purpose, some facts from the theory of Toeplitz operators are needed. For basics of Toeplitz operators, see [5, Ch. 4], and for the theory of Hardy spaces, we refer to [3, 8].
Let \(\varphi\in L^{\infty}(\mathbb{T})\), and let \(1<p<\infty\). Then, the Toeplitz operator \(T_{\varphi}:H^{p}\to H^{p}\) is defined by
\[T_{\varphi}f=P_{+}(\varphi f),\qquad f\in H^{p},\]
where \(P_{+}\) is the M. Riesz projection of \(L^{p}\) onto \(H^{p}\). As a consequence of a celebrated result of M. Riesz on the boundedness of the Hilbert transform, \(T_{\varphi}\) is also bounded and moreover
\[\|T_{\varphi}\|_{H^{p}\to H^{p}}\leq c_{p}\|\varphi\|_{L^{\infty}}. \tag{3.1}\]
It is well-known that if \(\varphi\) and \(\psi\) are in \(H^{\infty}\), then
\[T_{\varphi}f=\varphi f,\qquad f\in H^{p}, \tag{3.2}\]
and
\[T_{\bar{\varphi}}T_{\bar{\psi}}=T_{\bar{\psi}}T_{\bar{\varphi}}=T_{\bar{ \varphi}\bar{\psi}}. \tag{3.3}\]
Let \(b\) be an element of the closed unit ball of \(H^{\infty}\). As in (2.1), let
\[P_{b}:=T_{b}T_{\bar{b}}\quad\text{and}\quad Q_{b}:=I-P_{b}.\]
Here, the bounded operators \(P_{b}\) and \(Q_{b}\) are projections if and only if \(b\) is an inner function. We may trivially write
\[f=Q_{b}f+P_{b}f,\qquad f\in H^{p},\]
or equivalently, by (3.2),
\[f=Q_{b}f+bT_{\bar{b}}f,\qquad f\in H^{p}. \tag{3.4}\]
As before, this identity is actually the basic step in generalizing the expansion scheme to Hardy spaces.
Let \((b_{n})_{n\geq 1}\) be a sequence of elements in the closed unit ball of \(H^{\infty}\). Fix \(f\in H^{p}\). Then, by (3.4) with \(b=b_{1}\),
\[f=Q_{b_{1}}f+b_{1}T_{\bar{b}_{1}}f. \tag{3.5}\]
Then we apply (3.4) with \(b=b_{2}\) to \(T_{\bar{b}_{1}}f\) to obtain
\[T_{\bar{b}_{1}}f=Q_{b_{2}}T_{\bar{b}_{1}}f+b_{2}T_{\bar{b}_{2}}T_{\bar{b}_{1}}f. \tag{3.6}\]
If we plug back (3.6) into (3.5) and also use (3.3), we obtain
\[f=Q_{b_{1}}f+b_{1}Q_{b_{2}}T_{\bar{b}_{1}}f+b_{1}b_{2}T_{\bar{b}_{1}\bar{b}_{2} }f. \tag{3.7}\]
The general convergence theorem is as follows.
**Theorem 3.8**.: _Let \((b_{n})_{n\geq 1}\) be a sequence of elements in the closed unit ball of \(H^{\infty}\). Write \(B_{0}=1\) and_
\[B_{n}:=b_{1}b_{2}\cdots b_{n},\qquad n\geq 1.\]
_Assume that_
\[\lim_{n\to\infty}B_{n}(z)=0,\qquad z\in\mathbb{D}.\]
_Then, for each \(f\in H^{p}\),_
\[f=\sum_{n=1}^{\infty}B_{n-1}\cdot Q_{b_{n}}T_{\bar{B}_{n-1}}f,\]
_where the series converges in \(H^{p}\)-norm._
Proof.: By induction, the general formula for (3.7) is
\[f=\sum_{n=1}^{N}B_{n-1}Q_{b_{n}}T_{\bar{B}_{n-1}}f+B_{N}T_{\bar{B}_{N}}f.\]
Since
\[\|B_{N}T_{\bar{B}_{N}}f\|_{H^{p}}\leq\|T_{\bar{B}_{N}}f\|_{H^{p}},\]
it is enough to show that
\[\|T_{\bar{B}_{N}}f\|_{H^{p}}\to 0\]
as \(N\to\infty\).
Let \(k_{\lambda}\) denote the Cauchy kernel, i.e., \(k_{\lambda}(z)=(1-\bar{\lambda}z)^{-1}\). A celebrated property of conjugate-analytic Toeplitz operators is their unusual abundance of eigenvalues and eigenvectors:
\[T_{\bar{\varphi}}k_{\lambda}=\overline{\varphi(\lambda)}\,k_{\lambda},\qquad \lambda\in\mathbb{D},\,\varphi\in H^{\infty}.\]
Hence, in light of (3.3), for each \(\lambda\in\mathbb{D}\),
\[T_{\bar{B}_{N}}k_{\lambda}=\overline{B_{N}(\lambda)}\,k_{\lambda},\]
and thus
\[\|T_{\bar{B}_{N}}k_{\lambda}\|_{H^{p}}=|B_{N}(\lambda)|\,\|k_{\lambda}\|_{H^{ p}}.\]
Therefore, according to our main assumption,
\[\lim_{N\to\infty}\|T_{\bar{B}_{N}}k_{\lambda}\|_{H^{p}}=0.\]
For a general \(f\in H^{p}\), incidentally we know that the span of the Cauchy kernels is also dense in \(H^{p}\). Therefore, given \(f\in H^{p}\) and \(\varepsilon>0\), there are constants \(\alpha_{1},\ldots,\alpha_{m}\in\mathbb{C}\) and points \(\lambda_{1},\ldots,\lambda_{m}\in\mathbb{D}\) such that
\[\|f-(\alpha_{1}k_{\lambda_{1}}+\cdots+\alpha_{m}k_{\lambda_{m}})\|_{H^{p}}<\varepsilon.\]
Hence, by (3.1),
\[\|T_{\bar{B}_{N}}f\|_{H^{p}} \leq \|T_{\bar{B}_{N}}[f-(\alpha_{1}k_{\lambda_{1}}+\cdots+\alpha_{m}k_{ \lambda_{m}})]\|_{H^{p}}\] \[+ \|T_{\bar{B}_{N}}(\alpha_{1}k_{\lambda_{1}}+\cdots+\alpha_{m}k_{ \lambda_{m}})\|_{H^{p}}\] \[\leq c_{p}\|\bar{B}_{N}\|_{H^{\infty}}\|f-(\alpha_{1}k_{\lambda_{1}}+ \cdots+\alpha_{m}k_{\lambda_{m}})\|_{H^{p}}\] \[+ |\alpha_{1}|\,\|T_{\bar{B}_{N}}k_{\lambda_{1}}\|_{H^{p}}+\cdots+| \alpha_{m}|\,\|T_{\bar{B}_{N}}k_{\lambda_{m}}\|_{H^{p}}\] \[\leq c_{p}\varepsilon+|\alpha_{1}|\,\|T_{\bar{B}_{N}}k_{\lambda_{1}}\| _{H^{p}}+\cdots+|\alpha_{m}|\,\|T_{\bar{B}_{N}}k_{\lambda_{m}}\|_{H^{p}}.\]
By the preceding paragraph,
\[\limsup_{N\to\infty}\|T_{\bar{B}_{N}}f\|_{H^{p}}\leq c_{p}\varepsilon.\]
Since \(\varepsilon>0\) is arbitrary, the result follows.
Although the above proof resembles that of Theorem 2.7, with just a harmless multiplicative factor \(c_{p}\) showing up at the end, it is important to note that it rests heavily upon the M. Riesz theorem on the boundedness of the Hilbert transform.
One may naturally wonder what happens if \(B_{N}\) does not converge to zero in the topology of \(\operatorname{Hol}(\mathbb{D})\). In this case,
\[B:=\prod_{n=1}^{\infty}b_{n}\]
is a well-defined, not identically zero, function in the closed unit ball of \(H^{\infty}\) and a minor modification of Theorem 3.8 implies the following result.
**Theorem 3.9**.: _Let \((b_{n})_{n\geq 1}\) be a sequence of elements in the closed unit ball of \(H^{\infty}\). Write \(B_{0}=1\) and_
\[B_{n}:=b_{1}b_{2}\cdots b_{n},\qquad n\geq 1.\]
_Assume that_
\[B:=\prod_{n=1}^{\infty}b_{n},\]
_where the product converges uniformly on compact subsets of \(\mathbb{D}\) to the element \(B\in H^{\infty}\). Then, for each \(f\in H^{p}\),_
\[f=BT_{\bar{B}}f+\sum_{n=1}^{\infty}B_{n-1}\cdot Q_{b_{n}}T_{\bar{B}_{n-1}}f,\]
_where the series converges in \(H^{p}\)._
## 4. Taylor series
If \(b(z)=z\), then the Toeplitz operator \(T_{z}\) is the _unilateral forward shift_ operator. By the same token, \(T_{\bar{z}}\) is the _unilateral backward shift_ operator. We denote them respectively by \(\mathbf{S}\) and \(\mathbf{Z}\). In fact, a rather standard notation for the backward shift is \(\mathbf{B}\). However, in this note, in order to avoid the confusion with Blaschke products, we temporarily use \(\mathbf{Z}\) for the backward shift. Note that when we consider them as bounded operators on the Hardy-Hilbert space \(H^{2}\), it is legitimate to write \(\mathbf{Z}=\mathbf{S}^{*}\). However, for other Hardy spaces, this identity is meaningless.
Using the notation of Theorem 3.8, let
\[b_{n}(z)=z,\qquad n\geq 1.\]
Then
\[\lim_{n\to\infty}B_{n}(z)=\lim_{n\to\infty}z^{n}=0,\qquad z\in\mathbb{D}.\]
Moreover,
\[Q_{b_{n}}=I-T_{b_{n}}T_{\bar{b}_{n}}=I-T_{z}T_{\bar{z}}=I-\mathbf{S}\mathbf{Z},\]
and thus, for \(f\in H^{p}\),
\[Q_{b_{n}}T_{\bar{B}_{n-1}}f=(I-\mathbf{S}\mathbf{Z})\mathbf{Z}^{n-1}f=\frac{f ^{(n-1)}(0)}{(n-1)!},\qquad n\geq 1.\]
Since \(B_{n-1}(z)=z^{n-1}\), the expansion
\[f=\sum_{n=1}^{\infty}B_{n-1}\cdot Q_{b_{n}}T_{\bar{B}_{n-1}}f\]
reduces to
\[f(z)=\sum_{n=1}^{\infty}\frac{f^{(n-1)}(0)}{(n-1)!}\cdot z^{n-1},\]
which is precisely the Taylor series expansion of \(f\).
## 5. Blaschke unwinding series
In this section and the next, we study some special cases of the expansion described in Theorems 3.8 and 3.9. Moreover, the decomposition idea for the case treated in this section is borrowed from [4], where the development is studied in the more general setting of \(H^{p}\) spaces. Here, for simplicity, we just treat the case \(p=2\) and provide a sketch of proofs to show that our abstract setting implies the classical Blaschke unwinding series as a special case.
Let \(\lambda\in\mathbb{D}\) and let
\[b_{\lambda}(z):=\frac{\lambda-z}{1-\bar{\lambda}z},\qquad z\in\mathbb{D}.\]
It is well known that \(b_{\lambda}\) is an automorphism of the disc \(\mathbb{D}\). In this case, \(P_{b_{\lambda}}\) is the orthogonal projection of \(H^{2}\) onto the Beurling subspace \(b_{\lambda}H^{2}\), and \(Q_{b_{\lambda}}\) is the orthogonal projection of \(H^{2}\) onto the model space
\[K_{b_{\lambda}}=\mathbb{C}k_{\lambda}.\]
In this case, we can provide an explicit formula for \(Q_{b_{\lambda}}\). Given \(f\in H^{2}\), put
\[g:=f-f(\lambda)\frac{k_{\lambda}}{k_{\lambda}(\lambda)}.\]
Then clearly \(g\in H^{2}\) and \(g(\lambda)=0\). Hence, by a theorem of F. Riesz [8, Page 167], \(g=b_{\lambda}h\) for some \(h\in H^{2}\). Therefore, we can write
\[f=\frac{f(\lambda)}{k_{\lambda}(\lambda)}k_{\lambda}+b_{\lambda}h.\]
This is precisely the orthogonal decomposition of \(f\) with the first component coming from \(K_{b_{\lambda}}\) and the second from \(b_{\lambda}H^{2}\). According to this decomposition, we conclude that
\[Q_{b_{\lambda}}f=\frac{f(\lambda)}{k_{\lambda}(\lambda)}k_{\lambda}=(1-| \lambda|^{2})f(\lambda)k_{\lambda}. \tag{5.1}\]
Now let \((\lambda_{n})_{n\geq 1}\) be a sequence in \(\mathbb{D}\). There are two possibilities, which are described in the following two corollaries. In each case, we are faced with the Takenaka-Malmquist-Walsh basis [5, Ch. 5]. See also [7, 16].
**Corollary 5.2**.: _Let \((\lambda_{n})_{n\geq 1}\) be a non-Blaschke sequence in \(\mathbb{D}\), i.e.,_
\[\sum_{n=1}^{\infty}(1-|\lambda_{n}|)=\infty.\]
_Let \(B_{0}=1\) and_
\[B_{n}(z)=\prod_{k=1}^{n}\frac{\lambda_{k}-z}{1-\bar{\lambda}_{k}z},\qquad n \geq 1.\]
_Let \(f\) be entire. Then_
\[f=f(\lambda_{1})+\sum_{n=1}^{\infty}\big{(}(T_{\bar{B}_{n}}f)(\lambda_{n+1})- \bar{\lambda}_{n}(T_{\bar{B}_{n-1}}f)(\lambda_{n})\big{)}B_{n}, \tag{5.3}\]
_where the series converges in \(H^{2}\)-norm._
Proof.: In this case,
\[\lim_{n\to\infty}B_{n}(z)=\lim_{n\to\infty}\prod_{k=1}^{n}\frac{\lambda_{k}-z}{1- \bar{\lambda}_{k}z}=0,\qquad z\in\mathbb{D},\]
and thus Theorem 3.8 applies. By (5.1),
\[Q_{b_{\lambda_{n}}}T_{\bar{B}_{n-1}}f=(1-|\lambda_{n}|^{2})(T_{\bar{B}_{n-1}}f )(\lambda_{n})k_{\lambda_{n}},\qquad n\geq 1.\]
Therefore, each \(f\in H^{2}\) has the decomposition
\[f=\sum_{n=1}^{\infty}(1-|\lambda_{n}|^{2})(T_{\bar{B}_{n-1}}f)(\lambda_{n})B_{ n-1}k_{\lambda_{n}}. \tag{5.4}\]
Note that the orthonormal basis of \(H^{2}\) in this decomposition is
\[(1-|\lambda_{n}|^{2})^{1/2}B_{n-1}k_{\lambda_{n}},\qquad n\geq 1,\]
and the corresponding Fourier coefficients of \(f\) with respect to this basis are
\[(1-|\lambda_{n}|^{2})^{1/2}(T_{\bar{B}_{n-1}}f)(\lambda_{n}),\qquad n\geq 1.\]
It is easy to see that \(b_{\lambda}\) and \(k_{\lambda}\) are related via the linear equation
\[(1-|\lambda|^{2})k_{\lambda}+\bar{\lambda}b_{\lambda}=1.\]
If we solve for \(k_{\lambda}\) and plug it into (5.4) (with \(\lambda=\lambda_{n}\)) and recalling that \(B_{n-1}b_{\lambda_{n}}=B_{n}\), we obtain
\[f=\sum_{n=1}^{\infty}(T_{\bar{B}_{n-1}}f)(\lambda_{n})\left(B_{n-1}-\bar{ \lambda}_{n}B_{n}\right).\]
The convergence is in \(H^{2}\) and all the coefficients of the \(B_{k}\)s are scalars. After a rearrangement, the representation (5.3) follows.
A remark is in order concerning the last part of the above proof. Even though it is not visible at first glance, a stronger assumption is needed to ensure the convergence after the rearrangement. That \(f\) is assumed to be entire is enough for us and covers the classical setting. However, it can be slightly generalized to functions analytic on the closed unit disc. See [4].
The expansion (5.3) is a very strong form of the expansion (1.5). It is rather surprising that we do not impose heavy restrictions on \(\lambda_{n}\). Let us explain what happens in the special case of (1.5). Here, in the first step we set
\[f(z)-f(0)=\mathbf{B}_{1}(z)f_{1}(z), \tag{5.5}\]
where \(\mathbf{B}_{1}\) is the finite Blaschke product formed with the zeros
\[\lambda_{1}=0,\,\lambda_{2},\,\ldots,\,\lambda_{N}\]
of \(f(z)-f(0)\) on \(\mathbb{D}\). Therefore, respecting the notation of the more general representation (5.3), we have
\[\mathbf{B}_{1}=B_{N}=b_{\lambda_{1}}b_{\lambda_{2}}\cdots b_{\lambda_{N}}. \tag{5.6}\]
Note that \(f_{1}\) is analytic on \(\overline{\mathbb{D}}\) and has no roots in \(\mathbb{D}\). However, the next set of zeros
\[\lambda_{N+1}=0,\,\lambda_{N+2},\,\ldots,\,\lambda_{M},\]
that we choose are the zeros of \(f_{1}(z)-f_{1}(0)\). As it is clear now, the origin repeats infinitely many times in this sequence and thus, after all, it is a non-Blaschke sequence. Moreover, by (5.5) and (5.6), for \(1\leq n\leq N-1\),
\[\bar{B}_{n}f=\bar{B}_{n}f(0)+\bar{B}_{n}\mathbf{B}_{1}f_{1}=\bar{B}_{n}f(0)+b_ {\lambda_{n+1}}\cdots b_{\lambda_{N}}f_{1}\]
and, for \(n=N\),
\[\bar{B}_{N}f=\bar{B}_{N}f(0)+\bar{B}_{N}\mathbf{B}_{1}f_{1}=\bar{B}_{N}f(0)+f _{1}.\]
Therefore,
\[T_{\bar{B}_{n}}f=b_{\lambda_{n+1}}\cdots b_{\lambda_{N}}f_{1},\qquad 1\leq n \leq N-1,\]
and
\[T_{\bar{B}_{N}}f=f_{1}.\]
Note that we implicitly used the fact that \(\lambda_{1}=0\) and thus \(P_{+}(\bar{B}_{n})=0\). Hence, for \(1\leq n\leq N-1\),
\[(T_{\bar{B}_{n}}f)(\lambda_{n+1})-\bar{\lambda}_{n}(T_{\bar{B}_{n-1}}f)( \lambda_{n})=0\]
and, for \(n=N\),
\[(T_{\bar{B}_{N}}f)(\lambda_{N+1})-\bar{\lambda}_{N}(T_{\bar{B}_{N-1}}f)( \lambda_{N})=f_{1}(\lambda_{N+1})=f_{1}(0).\]
In short, the expansion (5.3) becomes
\[f=f(0)+f_{1}(0)B_{N}+\sum_{n=N+1}^{\infty}\big{(}(T_{\bar{B}_{n}}f)(\lambda_ {n+1})-\bar{\lambda}_{n}(T_{\bar{B}_{n-1}}f)(\lambda_{n})\big{)}B_{n},\]
which we rewrite as
\[f=f(0)+f_{1}(0)\mathbf{B}_{1}+\sum_{n=N+1}^{\infty}\big{(}(T_{\bar{B}_{n}}f)( \lambda_{n+1})-\bar{\lambda}_{n}(T_{\bar{B}_{n-1}}f)(\lambda_{n})\big{)}B_{n}. \tag{5.7}\]
The first two terms on the right side of (5.7) are precisely the first two terms in the classical Blaschke unwinding series (1.5). As a matter of fact, it is now easy to complete the above line of reasoning and see that in (5.7), many terms are zero and it eventually reduces to the classical setting (1.5).
**Corollary 5.8**.: _Let \((\lambda_{n})_{n\geq 1}\) be a Blaschke sequence in \(\mathbb{D}\), i.e.,_
\[\sum_{n=1}^{\infty}(1-|\lambda_{n}|)<\infty.\]
_Let \(B_{0}=1\),_
\[B_{n}(z)=\prod_{k=1}^{n}\frac{\lambda_{k}-z}{1-\bar{\lambda}_{k}z},\qquad n\geq 1,\]
_and_
\[B(z)=\prod_{n=1}^{\infty}\frac{|\lambda_{n}|}{\lambda_{n}}\,\frac{\lambda_{n} -z}{1-\bar{\lambda}_{n}z}.\]
_Let \(f\) be entire. Then_
\[f=BT_{\bar{B}}f+\Big{(}f(\lambda_{1})+\sum_{n=1}^{\infty}\big{(}(T_{\bar{B}_{n }}f)(\lambda_{n+1})-\bar{\lambda}_{n}(T_{\bar{B}_{n-1}}f)(\lambda_{n})\big{)}B _{n}\Big{)},\]
_where the series converges in \(H^{2}\)-norm._
Proof.: In this case, \(B\) is a well-defined infinite Blaschke product and thus Theorem 3.9 applies. More explicitly, each \(f\in H^{2}\) has the decomposition
\[f=BT_{\bar{B}}f+\sum_{n=1}^{\infty}(1-|\lambda_{n}|^{2})(T_{\bar{B}_{n-1}}f)( \lambda_{n})B_{n-1}k_{\lambda_{n}}. \tag{5.9}\]
This is precisely the explicit description of the orthogonal decomposition \(H^{2}=BH^{2}\oplus K_{B}\). The orthonormal basis of \(K_{B}\) in this decomposition is
\[(1-|\lambda_{n}|^{2})^{1/2}B_{n-1}k_{\lambda_{n}},\qquad n\geq 1,\]
and the corresponding Fourier coefficients of the projection of \(f\) onto \(K_{B}\) are
\[(1-|\lambda_{n}|^{2})^{1/2}(T_{\bar{B}_{n-1}}f)(\lambda_{n}),\qquad n\geq 1.\]
The rest of the proof is the same as that of Corollary 5.2.
When \(p\neq 2\), the terms in the above expansions are not orthogonal. But, in technical language, Corollaries 5.2 and 5.8 give us Schauder bases for \(H^{p}\) and \(K_{B}\), respectively. Some other Schauder bases of rational functions (not finite Blaschke products) are presented in [13] for \(H^{p}\) spaces.
## 6. Unwinding series with outer functions
In this section, we explore the development created by the outer function
\[b(z)=\frac{z-1}{2},\qquad z\in\mathbb{D}.\]
We take \(b_{1}=b_{2}=\cdots=b\). Hence, clearly
\[\lim_{n\to\infty}B_{n}(z)=\lim_{n\to\infty}b_{1}(z)b_{2}(z)\cdots b_{n}(z)= \lim_{n\to\infty}\left(\frac{z-1}{2}\right)^{n}=0,\qquad z\in\mathbb{D}.\]
Therefore, Theorem 3.8 applies and we have, for each \(f\in H^{p}\),
\[f=\sum_{n=1}^{\infty}B_{n-1}Q_{b_{n}}T_{\bar{B}_{n-1}}f=\sum_{n=1}^{\infty} \frac{Q_{b}T_{\bar{B}_{n-1}}f}{2^{n-1}}\,(z-1)^{n-1},\]
where the series converges in \(H^{p}\)-norm. We can provide a more familiar formula for \(Q_{b}\). Note that
\[T_{b}=\frac{\mathbf{S}-I}{2}\quad\text{and}\quad T_{\bar{b}}=\frac{\mathbf{Z} -I}{2}.\]
Recall the definition of \(\mathbf{S}\) and \(\mathbf{Z}\) from Section 4. Therefore,
\[Q_{b} = I-T_{b}T_{\bar{b}}\] \[= I-\frac{\mathbf{S}-I}{2}\,\frac{\mathbf{Z}-I}{2}\] \[= I-\frac{1}{4}(\mathbf{S}\mathbf{Z}-\mathbf{S}-\mathbf{Z}+I)\] \[= \frac{1}{4}(2I+k_{0}\otimes k_{0}+\mathbf{S}+\mathbf{Z}).\]
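To see Theorem 3.8 at work numerically for this outer function, one can discretize the Riesz projection with the FFT; the following Python/NumPy sketch is purely illustrative (the truncation to \(N\) Fourier modes and the choice of test function \(f\) are assumptions, not part of the theory).

```python
import numpy as np

N = 4096
z = np.exp(2j * np.pi * np.arange(N) / N)           # samples on the unit circle

def proj_plus(values):
    """Discrete Riesz projection P_+: keep only the nonnegative-frequency Fourier modes."""
    c = np.fft.fft(values)
    c[N // 2:] = 0.0
    return np.fft.ifft(c)

def toeplitz_conj(b_values, f_values):
    """Co-analytic Toeplitz operator T_{bar b} f = P_+(conj(b) f) on boundary samples."""
    return proj_plus(np.conj(b_values) * f_values)

b = (z - 1) / 2                                     # the outer function of this section
f = 1.0 / (1 - 0.4 * z) + 0.3 * z**2                # a test function holomorphic on the closed disc

g = f.copy()                                        # g_{n-1} = T_{bar B_{n-1}} f, starting from g_0 = f
B = np.ones(N, dtype=complex)                       # B_{n-1} evaluated on the circle
partial_sum = np.zeros(N, dtype=complex)
for n in range(1, 200):
    g_next = toeplitz_conj(b, g)                    # g_n = T_{bar b} g_{n-1}
    partial_sum += B * (g - b * g_next)             # B_{n-1} * Q_b T_{bar B_{n-1}} f
    B *= b
    g = g_next

tail = np.sqrt(np.mean(np.abs(f - partial_sum) ** 2))  # discrete L^2(T), i.e. H^2, norm of the remainder
print(tail)   # decreases (slowly, since |b| = 1 at z = -1) as more terms are taken
```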
## Declarations
### Conflict of interest
On behalf of all authors, the corresponding author states that there is no conflict of interest.
### Competing interests
On behalf of all authors, the corresponding author states that there are no competing interests.
### Funding information
This work was supported by the NSERC Discovery Grant (Canada), and graduate scholarships from NSERC and FRQNT (Quebec).
### Author contribution
All authors wrote the manuscript and reviewed the final version.
|
2310.16715
|
An Algorithm to Recover Shredded Random Matrices
|
Given some binary matrix $M$, suppose we are presented with the collection of
its rows and columns in independent arbitrary orderings. From this information,
are we able to recover the unique original orderings and matrix? We present an
algorithm that identifies whether there is a unique ordering associated with a
set of rows and columns, and outputs either the unique correct orderings for
the rows and columns or the full collection of all valid orderings and valid
matrices. We show that there is a constant $c > 0$ such that the algorithm
terminates in $O(n^2)$ time with high probability and in expectation for random
$n \times n$ binary matrices with i.i.d.\ Bernoulli $(p)$ entries
$(m_{ij})_{ij=1}^n$ such that $\frac{c\log^2(n)}{n(\log\log(n))^2} \leq p \leq
\frac{1}{2}$.
|
Caelan Atamanchuk, Luc Devroye, Massimo Vicenzo
|
2023-10-25T15:39:10Z
|
http://arxiv.org/abs/2310.16715v2
|
# An Algorithm to Recover Shredded Random Matrices
###### Abstract
Given some binary matrix \(M\), suppose we are presented with the collection of its rows and columns in independent arbitrary orderings. From this information, are we able to recover the unique original orderings and matrix? We present an algorithm that identifies whether there is a unique ordering associated with a set of rows and columns, and outputs either the unique correct orderings for the rows and columns, or the full collection of all valid orderings and valid matrices. We show that there is a constant \(c>0\) such that the algorithm terminates in \(O(n^{2})\) time with high probability and in expectation for random \(n\times n\) binary matrices with i.i.d. entries \((m_{ij})_{ij=1}^{n}\) such that \(\mathbb{P}\left(m_{ij}=1\right)=p\) and \(\frac{c\log^{2}(n)}{n(\log\log(n))^{2}}\leq p\leq\frac{1}{2}\).
## 1 Introduction
In this work we study the problem of reconstructing a binary matrix after it has been "shredded". That is, we aim to explain when and how a matrix (in our case, drawn from a random model) can be uniquely reconstructed from just the information contained in the rows and columns without knowing how they are ordered. To give the setup more precisely, let \(M=(m_{ij})_{i,j=1}^{n}\) be an \(n\times n\) binary matrix with the rows and columns given labels in \([n]=\{1,...,n\}\), and let \(\mathcal{C}(M)=\{\gamma_{1},...,\gamma_{n}\},\mathcal{R}(M)=\{\rho_{1},...,\rho_{n}\}\) be the multisets of all the columns and rows of \(M\) in some arbitrary ordering that is not necessarily the one they belong in; we call these collections the shredded columns and rows. We say that \(M\) is uniquely reconstructible (or just reconstructible) if there exist unique permutations \(\sigma=(\sigma_{1},...,\sigma_{n})\) and \(\tau=(\tau_{1},...,\tau_{n})\) of \([n]\) such that
\[\begin{bmatrix}\rho_{\sigma_{1}}\\ \vdots\\ \rho_{\sigma_{n}}\end{bmatrix}=\begin{bmatrix}\gamma_{\tau_{1}}&\cdots&\gamma _{\tau_{n}}\end{bmatrix}. \tag{1}\]
In particular, if a unique solution exists then both of the resulting matrices are equal to \(M\) (there is always at least one solution equal to \(M\), that being the correct, original ordering). If there are at least two pairs of permutations that satisfy (1), then the matrix \(M\) is not reconstructible and the collection of all pairs of permutations that satisfy the identities are the potential reconstructions of the original matrix. For example, the matrix that has every entry set to \(0\) is not uniquely
reconstructible and has \((n!)^{2}\) solutions to (1), even though each of the solutions corresponds to the same matrix. In fact, every matrix that has two equal rows (or columns) is not reconstructible. Suppose that rows \(\rho_{i}\) and \(\rho_{j}\) are equal, \((\sigma,\tau)\) is a pair of permutations that satisfies (1), and \(\lambda\) is the transposition \((ij)\). Then, the pair \((\lambda\circ\sigma,\tau)\) also satisfies (1) and so the matrix is not reconstructible.
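For very small \(n\), this definition can be checked directly by enumerating all \((n!)^{2}\) pairs of orderings. The following Python sketch does exactly this; it is purely illustrative and unrelated to the efficient algorithm presented in Section 3.

```python
import numpy as np
from itertools import permutations

def reconstructions(rows, cols):
    """All pairs (sigma, tau) of row/column orderings satisfying (1); brute force, tiny n only."""
    n = len(rows)
    rows = [tuple(map(int, r)) for r in rows]
    cols = [tuple(map(int, c)) for c in cols]
    solutions = []
    for sigma in permutations(range(n)):
        stacked = np.array([rows[i] for i in sigma])             # rows placed in the order sigma
        stacked_cols = [tuple(map(int, stacked[:, j])) for j in range(n)]
        for tau in permutations(range(n)):
            if all(stacked_cols[j] == cols[tau[j]] for j in range(n)):
                solutions.append((sigma, tau))
    return solutions

rng = np.random.default_rng(1)
M = (rng.random((4, 4)) < 0.4).astype(int)                       # i.i.d. Bernoulli(p) entries, p = 0.4
print(len(reconstructions(list(M), list(M.T))))                  # 1 means M is uniquely reconstructible
```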
To offer an analogy, we can view \(M\) as a square binary picture. The problem is that of rebuilding the picture given the strips that come out after we send one copy of \(M\) through a paper shredder upright and one copy sideways. If two strips are exactly the same, we do not know in which spots to place the two strips, and we conclude that the picture is not reconstructible. However, for the algorithm, this notion of reconstructibility is not of much importance as all potential reconstructions are outputted.
In this work, we consider a matrix \(M\) that has i.i.d. entries \(m_{ij}\) with \(\mathbb{P}\left(m_{i,j}=1\right)=p\) and \(\mathbb{P}\left(m_{i,j}=0\right)=1-p\) for some \(p\) that we view as a function of \(n\). Since the 1's and 0's are essentially just labels in our model, there is a natural symmetry around \(p=\frac{1}{2}\), and thus we assume throughout that \(p\leq\frac{1}{2}\) for our analysis.
When \(p\geq\frac{(1+\epsilon)\log(n)}{n}\) for some \(\epsilon>0\) the matrix \(M\) has pairwise distinct rows and columns with high probability (see Lemma 3). Hence, for \(p\) above that threshold, our definition of reconstructibility where matrices that have equal rows or columns are not reconstructible is with high probability equivalent to the simplified version where a matrix \(M\) is reconstructible if
\[\forall M^{\prime},\ \bigg{[}\mathcal{C}(M)=\mathcal{C}(M^{\prime})\text{ and } \mathcal{R}(M)=\mathcal{R}(M^{\prime})\implies M=M^{\prime}\bigg{]}.\]
If \(M\) is viewed as the adjacency matrix for some random directed graph on \(n\) vertices (one with loops allowed), the columns \(\gamma_{1},...,\gamma_{n}\) represent the collection of all 1-in-neighbourhoods with only the central vertex's label removed, and \(\rho_{1},...,\rho_{n}\) represent the collection of all 1-out-neighbourhoods with only the central vertex's label removed (removing the labels is the same as permuting them into some arbitrary labelling). By the 1-out and 1-in neighbourhood of a vertex \(v\), we mean the subgraph that contains all edges out of or into \(v\) and the other vertices these edges are incident to. This observation links the work being done here to much of the related work and motivations in Section 2.
The paper is organized as follows: In Section 2 we discuss related work and some of the motivations behind work in the area of reconstruction problems. In Section 3 we present the reconstruction algorithm along with our main result, and in Section 4 we prove the result. In Section 5 we offer and prove a result concerning when matrices can be reconstructed. Finally, Section 6 houses proofs for the lemmas that are used in the preceding sections.
## 2 Related Work and Motivation
Combinatorial reconstruction problems arise naturally in a number of pure and applied settings. The largest inspiration for such exploration comes from the reconstruction conjecture in combinatorics (see Harary (1974), Harary and Plantholt (1985), Kelly (1957), and Ulam (1960)): any graph \(G\) on at least three vertices is reconstructible from the multiset of isomorphism classes of all the vertex-deleted subgraphs of \(G\), often called the deck of \(G\) and labelled \(D(G)\) (the vertex-deleted subgraphs of \(G\) are all the induced subgraphs obtained through deleting exactly one of the
vertices of \(G\)). To be more exact, the conjecture states that for all graphs \(G\) and \(H\) on at least three vertices, \(G\) is isomorphic to \(H\) if and only if \(D(G)=D(H)\). The use of random models has been vital in the study of this conjecture, with one important result coming from Bollobas (1990) who proved that as \(n\to\infty\), an Erdos-Renyi random graph with \(\frac{c\log(n)}{n}\leq p\leq 1-\frac{c\log(n)}{n}\) is uniquely reconstructible from a collection of only three of the vertex deleted subgraphs for any \(c>\frac{5}{2}\). In particular, this means that with high probability, for appropriate choices of \(p\), there is a subset \(\{G_{1},G_{2},G_{3}\}\subseteq D(G)\) of three subgraphs such that for any other graph \(H\), if \(\{G_{1},G_{2},G_{3}\}\subseteq D(H)\), then \(H\) is isomorphic to \(G\). Before the Bollobas result, Muller (1976) had previously explored the reconstructibility of random graphs from the whole deck.
One interesting abstraction of the reconstruction conjecture related to the random pictures model is the new digraph reconstruction conjecture. Let \(G\) and \(H\) be two directed graphs and suppose that there is a bijection \(f:V(G)\to V(H)\) such that \(G\setminus v\) is isomorphic to \(H\setminus f(v)\) for all \(v\in V(G)\). Further suppose that the in-degrees and out-degrees of \(v\) and \(f(v)\) match for all \(v\in V(G)\). Then, \(G\) and \(H\) must be isomorphic. The answer to this problem remains open. See Ramachandran and Arumugam (2004) and S. Ramachandran (1981) for discussion of the problem and the families of graphs for which the conjecture has been proven to be true.
Recently, extensive work has gone into studying the shotgun assembly problem for graphs. Introduced by Mossel and Ross (2019), the problem asks how large must \(r\) be so that a graph, commonly drawn from some random model, is uniquely determined by its collection of distance \(r\)-neighbourhoods around each vertex \(v\in V(G)\) (by a distance \(r\)-neighbourhood of \(v\) we mean the subgraph \(N_{r}(v)\) that is induced by all vertices of graph distance at most \(r\) from \(v\)). They consider both labelled and unlabelled versions of the problem. This topic has been studied for a variety of random models including Erdos-Renyi graphs, random regular graphs, and simplicial complexes (for examples, see Adhikari and Chakraborty (2022), Ding, Jiang, and Ma (2022), Gaudio and Mossel (2022), Huang and Tikhomirov (2022), Johnston et al. (2023), and Mossel and Sun (2015)). There has also been work put towards shotgun assembly problems in different contexts such as reconstructing random vertex colourings from \(r\)-neighbourhoods as seen in Ding and Liu (2022), Mossel and Ross (2019), and Przykucki and Scott (2022).
In a similar vein, there is the problem of canonically labelling graphs and random graphs, and its main application in checking graph isomorphisms (early work in the topic can be seen in Babai (1980), Babai, Erdos, and Selkow (1980), and Babai and Luks (1983)). An algorithm which canonically labels a graph \(G\) assigns the labels \(1,2,\ldots,n\) to the \(n\) vertices of \(G\) such that if \(G\) is isomorphic to some graph \(H\), then both should be given the same labelling by the algorithm. Of particular note to us are the results on canonically labelling the Erdos-Renyi graph using only the \(r\)-neighbourhoods of each vertex. Mossel and Ross (2019) showed it is possible to canonically label a graph \(G\sim G(n,p_{n})\) when \(np=\omega(\log^{2}(n))\) using only the 2-neighbourhoods. On the other hand, Gaudio, Racz, and Sridhar (2022) showed for \(np=o(\log^{2}(n)/(\log\log(n))^{3})\) there are multiple isomorphic 2-neighbourhoods with high probability, which inhibits us from creating a canonical labelling.
Another model that has received some attention is that of reconstructing random jigsaw puzzles. Once again introduced by Mossel and Ross (2019), in this problem we are given the collection of vertices in a lattice with coloured half-edges drawn from some collection of \(q\) colours. The problem asks how large \(q\) must be so that with high probability the puzzle can be constructed into a complete
picture from the collection of vertices and their coloured half-edges. Some work concerning this problem can be found in Balister, Bollobas, and Narayanan (2019), Martinsson (2016, 2019), and Nenadov, Pfister, and Steger (2017).
The topic of this paper, reconstructing random matrices, has been studied before from another point of view. In Narayanan and Yap (2023), the complete multiset of all \((n-k)^{2}\)\(k\times k\) sub-matrices is given as the information to reconstruct with.
There is no lack of motivation from other sciences for studying reconstruction problems, such as the problem of DNA shotgun sequencing. In shotgun assembly, the long DNA strands are "shot-gunned" into smaller pieces that are sequenced. From here, a reconstruction algorithm is used to infer what the original long strand was. For a probabilistic analysis of the unique reconstructibility of DNA sequences from shotgunned strands see Arratia and Reinert (1996), Dyer, Frieze, and Suen (1994), and Motahari, Bresler, and Tse (2013). Note that the models here are what one of the shotgun assembly problems from Mossel and Sun (2015) is based on, with the special case of the path on \(n\) vertices being studied. Shotgun assembly has also begun to appear in neural network theory. Soudry et al. (2015) consider the problem of reconstructing large neural networks from smaller sub-networks.
## 3 The Reconstruction Algorithm
For a vector \(x=(x_{1},x_{2},\ldots,x_{n})\in\{0,1\}^{n}\), we call \(|x|=\sum_{i=1}^{n}x_{i}\) the weight or Hamming weight of \(x\). If \(S\subset[n]\) is a set of indices, then \(\sum_{i\in S}x_{i}\) is the sub-weight of \(x\) on \(S\). Alternatively, the weight of \(x\) can be seen as the number of 1's which appear in the entire vector, and the sub-weight in \(S\) is the number of 1's in the vector \(x\) restricted to the positions indicated by \(S\). We have two algorithmic problems to solve:
1. Find any permutation pair \((\sigma,\tau)\) that satisfies (1).
2. Find all permutation pairs \((\sigma,\tau)\) that satisfy (1).
Our algorithm solves (ii) and hence also (i). It can be broken down into two main parts: First we partition each row \(\rho_{i}\) into sub-strings and compute the vector of the associated sub-weights for all \(i\in[n]\). Then, using a trie, we can efficiently identify each \(\rho_{i}\) with a position by matching these sub-weight vectors. If we are able to identify each \(\rho_{i}\) with a unique position, then the algorithm is complete. We show this happens with high probability.
In the case where this does not occur, we move on to part two of the algorithm, where we iterate through all possible permutations of the rows and verify whether the resulting matrix contains all of the columns in \(\mathcal{C}(M)\) with the correct multiplicities. Using the information gained from part one, we are able to reduce our search space from all \(n!\) permutations of the rows, to a collection that has expected size \(O(1)\).
### Part One
Given the collection of unordered columns \(\gamma_{1},...,\gamma_{n}\), we create a Hamming weight partition of the columns \(\mathcal{P}=(\mathcal{P}_{0},\ldots,\mathcal{P}_{n})\), where \(\mathcal{P}_{i}=\{1\leq j\leq n:|\gamma_{j}|=i\}\). Now for each \(j\in[n]\), and for each integer \(k\in[\lfloor np\rfloor,\lfloor np\rfloor+\lfloor\sqrt{np}\rfloor]\), we compute
\[s_{j,k}=\sum_{i\in\mathcal{P}_{k}}\gamma_{ij},\quad\text{where}\quad\gamma_{i}= \begin{bmatrix}\gamma_{i,1}\\ \vdots\\ \gamma_{i,n}\end{bmatrix}.\]
For a row to be placed in position \(j\), its sub-weight on the positions of the weight-\(k\) columns must be equal to \(s_{j,k}\) for all \(k\in[\lfloor np\rfloor,\lfloor np\rfloor+\lfloor\sqrt{np}\rfloor]\). Using the values \(s_{j,k}\) we store every potential position \(j\in[n]\) in the leaves of a trie using the vectors \(S_{j}=(s_{j,\lfloor np\rfloor},\dots,s_{j,\lfloor np\rfloor+\lfloor\sqrt{np}\rfloor})\) as input, which we call the sub-weight vectors associated with position \(j\). See Knuth (1998) for more information on tries and their uses. In our trie, we associate each input with a path. Therefore, it is possible that several paths coincide and that \(S_{j}\) is not unique, i.e., \(|\{S_{j}:1\leq j\leq n\}|<n\).
From the collection of rows \(\rho_{1},...,\rho_{n}\), we can compute the weight of each column in the original matrix \(M\) even without knowing the order, since the weight of a column is invariant under permutation of the rows. This allows us to determine which column positions have which weights. Let \(\mathcal{I}=(I_{0},I_{1},\dots,I_{n})\), where
\[I_{j}=\{i\in[n]:\text{The column in position $i$ has weight $j$}\}.\]
Now, for all \(j\in[n]\), and for each integer \(k\in[\lfloor np\rfloor,\lfloor np\rfloor+\lfloor\sqrt{np}\rfloor]\), we compute \(t_{j,k}\), which is the sub-weight of the row \(\rho_{j}\) on the indices \(I_{k}\). We collect all of them into a vector
\[T_{j}=\big{(}t_{j,\lfloor np\rfloor},\dots,t_{j,\lfloor np\rfloor+\lfloor\sqrt{np}\rfloor}\big{)},\]
which we call the signature of \(\rho_{j}\). Since the vectors \(S_{j}\) (the sub-weight vectors of the positions) and \(T_{j}\) (the signatures of the rows) are generated from the same information, with only potentially incorrect labels on the \(T_{j}\), we know that
\[\{S_{j}:1\leq j\leq n\}=\{T_{j}:1\leq j\leq n\}.\]
It follows that if \(|\{S_{j}:1\leq j\leq n\}|=n\), then we are able to identify a unique permutation for each row: For each \(j\in[n]\), we define \(\sigma_{j}\) to be the unique \(\ell\in[n]\) such that \(S_{j}=T_{\ell}\). Once the rows have been placed we have reconstructed the matrix and the permutation \(\tau\) on the unordered columns can be determined. We do this by first constructing a trie based on all of the columns \(C_{1},...,C_{n}\) in the reconstructed matrix \(M\) (these are the columns in their original, pre-shredded positions). If the trie has \(n\) distinct leaves, then we can define a permutation \(\tau\) for \(\gamma_{1},...,\gamma_{n}\) in the following way: For each \(j\in[n]\), define \(\tau_{j}\) to be the unique \(\ell\in[n]\) such that \(\gamma_{j}=C_{\ell}\). If either of the two tries does not have distinct leaves we move on to part two.
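The following is a minimal sketch of part one in Python, assuming the shredded rows and columns are stored as the rows of two arrays as in the earlier snippet; a dictionary keyed by the sub-weight vector plays the role of the trie, and all function and variable names are illustrative rather than taken from an existing implementation.

```python
import numpy as np
from collections import defaultdict

def part_one(rows, cols, n, p):
    """Part one of the reconstruction: try to place each shredded row.

    rows, cols : (n x n) 0/1 arrays whose rows are the shredded rows rho_j
                 and the shredded columns gamma_j, respectively.
    Returns a dict mapping each position j (0-indexed) to the list of row
    indices whose signature matches the sub-weight vector of position j."""
    lo = int(n * p)
    hi = lo + int((n * p) ** 0.5)
    window = range(lo, hi + 1)        # k in [floor(np), floor(np) + floor(sqrt(np))]

    # Sub-weight vector S_j of position j: for each k in the window, the number
    # of 1's that the weight-k shredded columns place in row position j.
    col_weights = cols.sum(axis=1)    # Hamming weights of gamma_1, ..., gamma_n
    S = np.stack([cols[col_weights == k].sum(axis=0) for k in window], axis=1)

    # The weight of the column sitting in each *position* is recoverable from the
    # shredded rows, since column sums are invariant under row permutations.
    pos_weights = rows.sum(axis=0)
    # Signature T_j of shredded row rho_j: its sub-weight on the positions I_k.
    T = np.stack([rows[:, pos_weights == k].sum(axis=1) for k in window], axis=1)

    # A dictionary keyed by the sub-weight vector plays the role of the trie.
    trie = defaultdict(list)
    for j in range(n):
        trie[tuple(S[j])].append(j)

    placement = defaultdict(list)
    for ell in range(n):
        for j in trie[tuple(T[ell])]:
            placement[j].append(ell)  # rho_ell may be placed in position j
    return placement
```

If every position receives exactly one candidate row, stacking the rows in that order reconstructs the matrix; otherwise the algorithm continues with part two.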
### Part Two
There are two possible cases where we end up requiring part two to complete the algorithm. First, we require part two when there is at least one leaf in the trie containing row sub-weight vectors which coincides with multiple rows, i.e. \(|\{S_{j}:1\leq j\leq n\}|=L<n\). The second case where we require part two is when there are at least two columns coinciding with a single leaf in the trie containing the column vectors, i.e. \(C_{j}=C_{k}\) for \(j\neq k\).
For each vector \(S_{i}\in\{S_{j}:1\leq j\leq n\}\), let \(x_{i}\) be the multiplicity of that vector, i.e. the number of indices \(j\) with \(S_{j}=S_{i}\). Then, since \(\rho_{j}\) can only be assigned to a position \(k\) such that \(T_{j}=S_{k}\), there are \(x_{1}!x_{2}!\ldots x_{L}!\) possible permutations of \(\rho_{1},...,\rho_{n}\) that must be checked. For each possible permutation \(\sigma\), we construct a matrix,
\[M^{\prime}=\begin{bmatrix}\rho_{\sigma_{1}}\\ \vdots\\ \rho_{\sigma_{n}}\end{bmatrix}.\]
Using the column trie, we determine if \(\mathcal{C}(M)=\mathcal{C}(M^{\prime})\). That is, we determine if both matrices contain the same set of columns with the same multiplicities. If this is the case, then \(M^{\prime}\) is a valid reconstruction. Let \(\tau_{j}\) be an \(\ell\in[n]\) such that the column in position \(j\) in \(M^{\prime}\) is equal to \(\gamma_{\ell}\) for all \(j\in[n]\) (in particular choose the \(\tau_{j}\) such that \(\tau=(\tau_{1},...,\tau_{n})\) is a permutation). Note that at this point, \(\ell\) need not be unique and so this could yield many valid matrices. The pair \((\sigma,\tau)\) permutes the rows and columns to create a valid reconstruction \(M^{\prime}\). Let \(I_{1},\ldots,I_{m}\) be the sets of column indices (\(|I_{k}|>1\)) such that for every two indices \(i,j\in I_{k}\), the columns \(C_{i},C_{j}\) in \(M^{\prime}\) are equal. Clearly the columns within each \(I_{k}\) can be permuted and still give a valid \(\tau\) for reconstructing.
Therefore, for every valid \(\sigma\) we compute one of the corresponding column permutations \(\tau\), and the sets of indices \(I_{1},\ldots,I_{m}\), and then output
\[(\sigma,\tau,S_{I_{1}}\times\cdots\times S_{I_{m}}).\]
where \(S_{I_{k}}\) is the group of permutations of the elements in the set \(I_{k}\). The set of these triples can generate all of the pairs \((\sigma,\tau)\) which create a valid reconstruction. If we wish to retrieve every pair from the triple, we need only iterate over \(\pi\in S_{I_{1}}\times\cdots\times S_{I_{m}}\) and compute \((\sigma,\pi\tau)\).
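A correspondingly minimal sketch of part two (again illustrative, and assuming the `part_one` output above): it enumerates only the row orderings consistent with the candidate lists from part one and keeps those whose columns agree with \(\mathcal{C}(M)\) as a multiset.

```python
import numpy as np
from collections import Counter
from itertools import product

def part_two(rows, cols, placement, n):
    """Enumerate every valid row ordering consistent with `placement`.

    placement : dict mapping each position j to its candidate row indices from
                part one.  Positions sharing a sub-weight vector have identical
                candidate lists, so the number of injective choices is
                x_1! x_2! ... x_L!.
    Returns a list of tuples; entry j of a tuple is the index of the shredded
    row placed in position j."""
    target = Counter(tuple(c) for c in cols)      # C(M) as a multiset of columns
    valid = []
    candidates = [placement[j] for j in range(n)]
    for choice in product(*candidates):
        if len(set(choice)) < n:                  # each shredded row used exactly once
            continue
        M_prime = rows[list(choice), :]
        if Counter(tuple(c) for c in M_prime.T) == target:
            valid.append(choice)
    return valid
```

From any valid ordering, one column permutation \(\tau\) and the index sets \(I_{1},\ldots,I_{m}\) can then be read off by matching the columns of the reconstructed matrix against the shredded columns, as described above.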
### An Example
Consider the following collections of shredded columns and rows of a matrix \(M\) (assume that \(\mathcal{C}(M)=\{\gamma_{1},\gamma_{2},\gamma_{3},\gamma_{4}\}\) and \(\mathcal{R}(M)=\{\rho_{1},\rho_{2},\rho_{3},\rho_{4}\}\) are ordered left to right and top to bottom, respectively):
\[\mathcal{C}(M)=\begin{Bmatrix}\begin{bmatrix}1\\ 0\\ 1\\ 1\end{bmatrix},\begin{bmatrix}0\\ 1\\ 1\\ 0\end{bmatrix},\begin{bmatrix}0\\ 1\\ 0\\ 1\end{bmatrix},\begin{bmatrix}1\\ 1\\ 1\\ 0\end{bmatrix}\end{Bmatrix},\quad\mathcal{R}(M)=\begin{Bmatrix}\begin{bmatrix}1&0&1&0\\ 0&1&1&0\\ 0&1&1&1\\ 1&1&0&1\end{bmatrix}\end{Bmatrix}.\]
We first construct the partition \(\mathcal{P}\) from the column collection \(\mathcal{C}(M)\). From this, for each position \(j\), we compute the sub-weight vectors \(S_{j}=(s_{j,2},s_{j,3})\),
\[\mathcal{P}=\big{(}\emptyset,\emptyset,\{2,3\},\{1,4\},\emptyset\big{)}\quad\rightarrow\quad S_{1}=(0,2),\ S_{2}=(2,1),\ S_{3}=(1,2),\ S_{4}=(1,1).\]
Each \(S_{j}\) is then inserted into the trie, and the position \(j\) is stored at the leaf its path ends in (the figure of the trie, with the relevant leaves in bold, is omitted here).
Next we compute the signatures for each of the row vectors. We do this by first computing \(\mathcal{I}\), and using the indices to determine the values of each entry,
\[\mathcal{R}(M)=\begin{Bmatrix}\begin{bmatrix}1&0&1&0\\ 0&1&1&0\\ 0&1&1&1\\ 1&1&0&1\end{bmatrix}\end{Bmatrix},\quad\mathcal{I}=\left(\emptyset,\emptyset,\{1,4\},\{2,3\},\emptyset\right)\quad\rightarrow\quad\begin{Bmatrix}(1,1),\\ (0,2),\\ (1,2),\\ (2,1)\end{Bmatrix}.\]
Now we use the set of signatures and search through the trie generated by the sub-weight vectors. Each signature reaches a leaf, which then tells us which positions that row is allowed to be placed in. In this example, they are each mapped to a unique position, telling us that \(\sigma=(142)(3)\) is the permutation to apply on \(\mathcal{R}(M)\) in order to obtain \(M\). Doing so gives us our unique matrix
\[M=\begin{bmatrix}0&1&1&0\\ 1&1&0&1\\ 0&1&1&1\\ 1&0&1&0\end{bmatrix}.\]
Since we have no duplicate columns, there is also a unique \(\tau=(13)(24)\). The final output would be \(((142)(3),(13)(24),\left\{\mathrm{Id}\right\})\) as there is only one way to permute the columns to reconstruct the matrix.
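As a sanity check, running the illustrative `part_one` sketch from above on the shredded collections of this example (again, hypothetical code rather than the paper's own) recovers the same placement; the sketch reports positions and row indices directly rather than a permutation in cycle notation.

```python
import numpy as np

# Shredded rows and columns of the 4 x 4 example above (weight window k in {2, 3},
# which corresponds to n = 4 and p = 1/2 in the part_one sketch).
R = np.array([[1, 0, 1, 0],
              [0, 1, 1, 0],
              [0, 1, 1, 1],
              [1, 1, 0, 1]])
C = np.array([[1, 0, 1, 1],
              [0, 1, 1, 0],
              [0, 1, 0, 1],
              [1, 1, 1, 0]])

placement = part_one(R, C, n=4, p=0.5)
# placement == {0: [1], 1: [3], 2: [2], 3: [0]} (0-indexed): position 1 receives
# rho_2, position 2 receives rho_4, position 3 receives rho_3, position 4 receives rho_1.
M = R[[placement[j][0] for j in range(4)], :]   # recovers the matrix M displayed above
```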
For a second example, let us consider a case where we have duplicate sub-weight vectors and duplicate columns. Below is the result of applying part one to some matrix \(M\); we can see that the first row in \(\mathcal{R}(M)\) belongs in the second position, but the remaining rows' positions are unknown,
\[\mathcal{R}(M)=\begin{Bmatrix}\begin{bmatrix}1&0&0&0\\ 1&1&1&0\\ 0&1&1&1\\ 0&1&1&1\end{bmatrix}\end{Bmatrix}\rightarrow\begin{Bmatrix}(1,0),\\ (1,2),\\ (1,2),\\ (1,2)\end{Bmatrix},\quad\mathcal{C}(M)=\begin{Bmatrix}\begin{bmatrix}1\\ 0\\ 0\\ 1\end{bmatrix},\begin{bmatrix}0\\ 1\\ 1\\ 0\end{bmatrix},\begin{bmatrix}1\\ 0\\ 1\\ 1\end{bmatrix},\begin{bmatrix}1\\ 0\\ 1\\ 1\end{bmatrix}\end{Bmatrix}\rightarrow\begin{Bmatrix}(1,2),\\ (1,0),\\ (1,2),\\ (1,2)\end{Bmatrix}.\]
As three rows have the same signature, we have \(6\) permutations of the rows to check,
\[\{(12),(12)(34),(123),(124),(1234),(1243)\},\]
which results in matrices,
\[M_{(12)}=\begin{bmatrix}1&1&1&0\\ 1&0&0&0\\ 0&1&1&1\\ 0&1&1&1\end{bmatrix}\quad M_{(12)(34)}=\begin{bmatrix}1&1&1&0\\ 1&0&0&0\\ 0&1&1&1\\ 0&1&1&1\end{bmatrix}\] \[M_{(123)}=\begin{bmatrix}0&1&1&1\\ 1&0&0&0\\ 1&1&1&0\\ 0&1&1&1\end{bmatrix}\quad M_{(124)}=\begin{bmatrix}0&1&1&1\\ 1&0&0&0\\ 0&1&1&1\\ 1&1&1&0\end{bmatrix}\] \[M_{(1234)}=\begin{bmatrix}0&1&1&1\\ 1&0&0&0\\ 1&1&1&0\\ 0&1&1&1\end{bmatrix}\quad M_{(1243)}=\begin{bmatrix}0&1&1&1\\ 1&0&0&0\\ 0&1&1&1\\ 1&1&1&0\end{bmatrix}\]
Since there are duplicate rows, some of these permutations result in the same matrix. Regardless, using the column trie built from \(\mathcal{C}(M)\), we can iterate through each \(M_{\sigma}\) and check whether \(\mathcal{C}(M)=\mathcal{C}(M_{\sigma})\).
From this we can see that the only \(\sigma\) that give us valid matrices are from \((123)\) and \((1234)\), and since there are two identical columns in positions \(2\) and \(3\), the corresponding permutation groups are both \(S_{\{2,3\}}\). The solution set for this example is
\[\{((123),(142),S_{\{2,3\}}),((1234),(142),S_{\{2,3\}})\}.\]
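Feeding this second example to the illustrative `part_one` and `part_two` sketches from the previous subsections (assuming those hypothetical functions are in scope) returns exactly the two valid row orderings found above.

```python
import numpy as np

# Shredded rows and columns of the second example (weight window again {2, 3}).
R = np.array([[1, 0, 0, 0],
              [1, 1, 1, 0],
              [0, 1, 1, 1],
              [0, 1, 1, 1]])
C = np.array([[1, 0, 0, 1],
              [0, 1, 1, 0],
              [1, 0, 1, 1],
              [1, 0, 1, 1]])

placement = part_one(R, C, n=4, p=0.5)
# Only position 2 gets a unique candidate (rho_1); positions 1, 3 and 4 share the
# candidates {rho_2, rho_3, rho_4}, so we fall through to part two.
orderings = part_two(R, C, placement, n=4)
# orderings == [(2, 0, 1, 3), (3, 0, 1, 2)] (0-indexed), i.e. the two reconstructions
# corresponding to the valid permutations (123) and (1234) above.
```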
### Time Complexity
The time complexity achieved by our algorithm assumes the RAM model of computation. Computing the weights of the vectors and computing all the sub-weights takes time \(O(n^{2})\), since we can upper bound both of these by the cost of summing all entries in the matrix. Creating the trie with the sub-weight vectors takes time \(O(n^{3/2})\) since the length of the strings used in the trie is bounded above by \(\sqrt{np}\leq\sqrt{n}\). Since the height of the trie is \(O(\sqrt{n})\), matching each \(\rho_{i}\) to a set of positions at a leaf takes total time \(O(n^{3/2})\). Next we create the column trie, which takes \(O(n^{2})\) as we have \(n\) vectors of length \(n\) to insert. It is interesting to note that the process of determining which rows belong in which positions is not the most time-intensive step; in fact, simply determining the weights of the vectors is what dominates the time complexity.
In part two, for each valid permutation, we first check that \(\mathcal{C}(M)=\mathcal{C}(M^{\prime})\) by searching for each column in \(M^{\prime}\) in the column trie, keeping track of multiplicities. This takes time \(O(n^{2})\). Once a valid \(\sigma\) is found, we must compute a single \(\tau\), which we can get from reading the columns of \(M^{\prime}\) generated by \(\sigma\) applied on the rows, in \(O(n)\) time. Using the column trie, we can create the sets \(I_{1},\ldots,I_{m}\) in \(O(n^{2})\) time.
Let \(P=x_{1}!x_{2}!\ldots x_{L}!\) be the number of permutations \(\sigma\) that we have to check. Then the entirety of part two takes expected time \(O(n^{2}\mathbb{E}[P])\). In section 4, we show that \(\mathbb{E}[P]\to 1\) as \(n\to\infty\) for \(p\) in some range, implying that the expected time for the algorithm is \(O(n^{2})\). We also show that the probability we require step two to complete the algorithm tends to 0 as \(n\to\infty\) for \(p\) in another range, implying that the completion time is also \(O(n^{2})\) with high probability.
## 4 Main Result
The time complexity discussion from the previous section culminates in our main result.
**Theorem 1**.: _If \(p\geq\frac{16(1+\epsilon)\log^{2}(n)}{n(\log\log(n))^{2}}\) for \(\epsilon>0\), then,_
\[\mathbb{P}\left(\text{Algorithm terminates at first step}\right)\to 1\text{ as }n\to\infty.\]
_Hence, the algorithm succeeds in producing a unique reconstruction in \(O(n^{2})\) time with high probability. Furthermore, if \(p\geq\frac{36(1+\epsilon)\log^{2}(n)}{n(\log\log(n))^{2}}\) for \(\epsilon>0\), the expected running time of the algorithm is also \(O(n^{2})\), with the expected number of permutations that require checking in step two converging to 1 as \(n\to\infty\)._
To complete the proof of Theorem 1, we need to bound the probability that two rows \(\rho_{i}\) and \(\rho_{j}\) share the same signature vectors \(T_{i}\) and \(T_{j}\). In order to analyze this we need to obtain some bounds on the size of each group in the partition \(|\mathcal{P}|=(|\mathcal{P}_{1}|,...,|\mathcal{P}_{n}|)\). In particular, we want the groups near the average \(np\) to be sufficiently large, as these columns are the ones that the algorithm uses to generate sub-weight vectors, and larger sub-strings produce sub-weights with larger variance. Since each column sum is a \(\mathrm{binomial}(n,p)\) random variable, and the \(n\) column sums are independent, \(|\mathcal{P}|\) has a multinomial distribution with parameters \(n\) and \(b=(b_{n,p,1},...,b_{n,p,n})\), where
\[b_{n,p,k}=\mathbb{P}\left(\mathrm{binomial}(n,p)=k\right)=\binom{n}{k}p^{k}(1 -p)^{n-k}.\]
The bounds we desire for \(|\mathcal{P}|\) are given by the following lemma.
**Lemma 2**.: _Suppose that \(p=p(n)\) is some sequence such that \(np\geq 16\). There exists a positive constant \(\gamma>0\) such that \(b_{n,p,\lfloor np\rfloor+i}\geq 2\gamma\frac{1}{\sqrt{np}}\) for all \(i\in[0,\lfloor\sqrt{np}\rfloor]\). Furthermore,_
\[\mathbb{P}\left(|\mathcal{P}_{\lfloor np\rfloor+i}|\leq\gamma\sqrt{\frac{n}{p} }\right)\leq e^{-\frac{1}{6}\gamma\sqrt{\frac{n}{p}}}.\]
Since the algorithm also requires passing to part two when two columns are equal, we need the next lemma as well.
**Lemma 3**.: _Let \(M\) be an \(n\times n\) random binary matrix with i.i.d. entries \(m_{ij}\) such that \(\mathbb{P}\left(m_{ij}=1\right)=p\) and \(\mathbb{P}\left(m_{ij}=0\right)=1-p\). Then, for any \(\epsilon>0\), \(\mathbb{P}\left(M\text{ has at least two equal rows or columns}\right)\to 0\) as \(n\rightarrow\infty\) if \(p\geq\frac{(1+\epsilon)\log(n)}{n}\)._
Proof of Theorem 1.: There are two cases in which we proceed to the second step of the algorithm: first, when there are at least two identical sub-weight vectors, or second, when at least two columns are identical. The probability of the second criterion is shown by Lemma 3 to converge to 0 as \(n\rightarrow\infty\) for \(p\) of the form described, so it suffices to show that the probability of the first criterion occurring also converges to 0 as \(n\rightarrow\infty\). We call this event \(A(n,p)\). Recall from Section 3 that we begin step one of the algorithm by partitioning the columns according to their weight into collections \(\mathcal{P}=(\mathcal{P}_{1},...,\mathcal{P}_{n})\), and that \(I_{k}\) denotes the set of positions whose column has weight \(k\).
For a particular \(k\), let the sub-strings of \(\rho_{1}\) and \(\rho_{2}\) that only contain entries with indices in \(I_{k}\) be denoted by \(X=(X_{1},...,X_{|\mathcal{P}_{k}|})\) and \(Y=(Y_{1},...,Y_{|\mathcal{P}_{k}|})\). In order to have \(t_{1,k}=t_{2,k}\), we require that \(\sum_{i=1}^{|\mathcal{P}_{k}|}X_{i}=\sum_{i=1}^{|\mathcal{P}_{k}|}Y_{i}\). The sums are equal if and only if
\[|\{i:1\leq i\leq n,\ (X_{i},Y_{i})=(0,1)\}|=|\{i:1\leq i\leq n,\ (X_{i},Y_{i})=(1,0)\}|,\]
as an outcome of \((0,0)\) or \((1,1)\) does not change the gap between the sums (for shorthand we write \(\#(0,1)\) and \(\#(1,0)\) to denote the two cardinalities). Since each of the \((X_{i},Y_{i})\) are pairs of row entries that both lie within columns of weight \(k\), and the 1's are equally likely to be anywhere in each of the columns, we can see that for any \(i\in\{1,...,|\mathcal{P}_{k}|\}\),
\[\mathbb{P}\left((X_{i},Y_{i})=(0,1)\right)=\mathbb{P}\left((X_{i},Y_{i})=(1,0 )\right)=\frac{k(n-k)}{n(n-1)}.\]
Since we assume that \(k\in[\lfloor np\rfloor,\lfloor np\rfloor+\lfloor\sqrt{np}\rfloor]\) it holds that there is some \(\alpha\in(0,1)\) such that
\[\frac{k(n-k)}{n(n-1)}\sim\frac{(np+\alpha\sqrt{np})(n(1-p)-\alpha\sqrt{np})}{ n(n-1)}=p\left(\frac{n}{n-1}\right)\left(1+\frac{\alpha}{\sqrt{np}}\right) \left(1-p-\frac{\alpha p}{\sqrt{np}}\right),\]
and so \(\mathbb{P}\left((X_{i},Y_{i})=(0,1)\right)=\mathbb{P}\left((X_{i},Y_{i})=(1,0 )\right)=\Theta(p)\) (note that \(np\rightarrow\infty\) by the assumptions on \(p\)). For each \(m\in\{1,...,n\}\), the conditional probability \(\mathbb{P}\left(t_{1,k}=t_{2,k}|\{|\mathcal{P}_{k}|=m\}\right)\) is equal to
\[\sum_{i=0}^{\lfloor m/2\rfloor}\mathbb{P}\left(\#(0,1)+\#(1,0)=2i\right)\mathbb{P}\left(\{\#(0,1)=\#(1,0)=i\}|\{\#(0,1)+\#(1,0)=2i\}\right).\]
Since \((0,1)\) and \((1,0)\) occur with equal probability, when we condition on there being \(2i\) of them in total, the values \(\#(0,1)\) and \(\#(1,0)\) follow a \(\mathrm{binomial}(2i,1/2)\) distribution. Define \(\tilde{p}:=\frac{2k(n-k)}{n(n-1)}=\mathbb{P}\left((X_{i},Y_{i})=(0,1)\text{ or }(1,0)\right)\). From here, applying Stirling's approximation we obtain some \(\beta>0\) such that
\[\mathbb{P}\left(t_{1,k}=t_{2,k}|\{|\mathcal{P}_{k}|=m\}\right)=\sum_{i=0}^{ \lfloor m/2\rfloor}\mathbb{P}\left(\mathrm{binomial}(m,\tilde{p})=2i\right) \mathbb{P}\left(\mathrm{binomial}(2i,1/2)=i\right)\]
\[\leq\beta\left(\sum_{i=0}^{\lfloor m/2\rfloor}\frac{1}{\sqrt{2i\lor 1}} \mathbb{P}\left(\text{binomial}(m,\tilde{p})=2i\right)\right)\] \[\leq\beta\mathbb{E}\left[\frac{1}{\sqrt{\text{binomial}(m,\tilde{ p})\lor 1}}\right]\] \[\leq\frac{3\beta}{\sqrt{m\tilde{p}}}.\]
See Lemma 6 for a proof of the final inequality. Since we care about the case where \(m\geq\gamma\sqrt{\frac{n}{p}}\) and take \(n\rightarrow\infty\) we can safely assume the inequality holds. Let
\[S=\left\{(x_{1},...,x_{n}):x_{i}\geq\gamma\sqrt{\frac{n}{p}}\ \text{ for all }i\in[\lfloor np\rfloor,\lfloor np\rfloor+\lfloor\sqrt{np}\rfloor]\right\},\]
where \(\gamma>0\) is the one from Lemma 2. When we condition on the sizes of \(|\mathcal{P}_{k}|\) for all \(k\in\{1,...,n\}\), the events \(\{t_{1,k}=t_{2,k}\}\) are all independent of each other and the sizes of all other columns. Hence, since increasing \(m\) only decreases the upper bound for \(\mathbb{P}\left(t_{1,k}=t_{2,k}|\{|\mathcal{P}_{k}|=m\}\right)\),
\[\sum_{(x_{1},...,x_{n})\in S}\mathbb{P}\,\left(T_{1}=T_{2}\Bigg{|} \bigcap_{k=1}^{n}\{|\mathcal{P}_{k}|=x_{k}\}\right)\mathbb{P}\,\left(\bigcap _{k=1}^{n}\{|\mathcal{P}_{k}|=x_{k}\}\right)\] \[\leq\sum_{(x_{1},...,x_{n})\in S}\left(\frac{3\beta}{\sqrt{ \gamma\tilde{p}\sqrt{\frac{n}{p}}}}\right)^{\sqrt{np}}\mathbb{P}\left(\bigcap _{k=1}^{n}\{|\mathcal{P}_{k}|=x_{k}\}\right)\] \[=\left(\frac{3\beta}{\sqrt{\gamma\tilde{p}\sqrt{\frac{n}{p}}}} \right)^{\sqrt{np}}\mathbb{P}\left((|\mathcal{P}_{1}|,...,|\mathcal{P}_{n}|) \in S\right)\] \[\leq\left(\frac{3\beta}{\sqrt{\gamma\tilde{p}\sqrt{\frac{n}{p}}} }\right)^{\sqrt{np}},\]
where \(T_{i}\) is the signature of \(\rho_{i}\) as defined in Section 3. On the other hand for \(S^{c}\), we have that
\[\sum_{(x_{1},...,x_{n})\in S^{c}}\mathbb{P}\,\left(T_{1}=T_{2}\Bigg{|}\bigcap_ {k=1}^{n}\{|\mathcal{P}_{k}|=x_{k}\}\right)\mathbb{P}\,\left(\bigcap_{k=1}^{n }\{|\mathcal{P}_{k}|=x_{k}\}\right)\leq\mathbb{P}\left((|\mathcal{P}_{1}|,...,|\mathcal{P}_{n}|)\in S^{c}\right),\]
which is a good enough bound because Lemma 2 combined with the union bound ensures that the right side of the inequality is upper bounded by \((\sqrt{np})e^{-\frac{1}{6}\gamma\sqrt{\frac{n}{p}}}\). Putting these two pieces together
we get that
\[\mathbb{P}\left(A(n,p)\right)\leq n^{2}\mathbb{P}\left(T_{1}=T_{2}\right)\leq n^{2 }\left(\frac{3\beta}{\sqrt{\gamma\tilde{p}\sqrt{\frac{n}{p}}}}\right)^{\sqrt{np }}+n^{2}(\sqrt{np})e^{-\frac{1}{6}\gamma\sqrt{\frac{n}{p}}}. \tag{2}\]
The right term clearly tends to 0 as \(n\rightarrow\infty\). For the left term, we note that since \(\tilde{p}=\Theta(p)\), we can group up all the constants into some \(C>0\) such that
\[n^{2}\left(\frac{3\beta}{\sqrt{\gamma\tilde{p}\sqrt{\frac{n}{p}}}}\right)^{ \sqrt{np}}\leq n^{2}\left(\frac{C}{(np)^{1/4}}\right)^{\sqrt{np}}=\exp\left\{2 \log(n)+2\log(C)\sqrt{np}-\frac{1}{4}\sqrt{np}\log(np)\right\},\]
which tends to 0 as \(n\rightarrow\infty\) whenever \(p\geq\frac{16(1+\epsilon)\log^{2}(n)}{n(\log\log(n))^{2}}\) for some \(\epsilon>0\).
Now we discuss the time complexity of part two. As mentioned in Section 3, the time complexity of part two is \(O(n^{2}P)\), where \(P\) is the number of valid permutations to check. Hence, it is sufficient to show that \(\mathbb{E}[P]=O(1)\) as part one always takes \(O(n^{2})\) time. The number of permutations we need to check only depends on the sizes of the sets of rows with the same sub-weight vectors, and not their positions. Thus, we sum over \(j\) representing the number of non-unique sub-weight vectors, and then sum over \(n_{1},n_{2},\ldots,n_{j}\) such that \(n_{1}+\cdots+n_{j}\leq n\), which represent the numbers of rows that share the same sub-weight vector. We also have the conditions \(n_{i}>1\) as otherwise this would imply that it is a unique sub-weight vector, and \(n_{i}\geq n_{i+1}\) as this avoids double counting. With this we get the following upper bounds for \(\mathbb{E}[P]\):
\[\sum_{j=1}^{n}\sum_{\begin{subarray}{c}n_{1}+n_{2}+\cdots+n_{j} \leq n\\ \forall i,n_{i}>1\end{subarray}}(n_{1}!)(n_{2}!)\ldots(n_{j}!)\binom{n}{n_{1},n_{2},\ldots,n_{j}}\prod_{i=1}^{j}\mathbb{P}\left(n_{i}\text{ rows have same sub-weight vector}\right)\] \[\leq \sum_{j=1}^{n}\sum_{\begin{subarray}{c}n_{1}+n_{2}+\cdots+n_{j} \leq n\\ \forall i,n_{i}>1\end{subarray}}(n_{1}!)(n_{2}!)\ldots(n_{j}!)\binom{n}{n_{1},n_{2},\ldots,n_{j}}\mathbb{P}\left(T_{1}=T_{2}\right)^{\sum_{i=1}^{j}n_{i}-1}\] \[\leq \left(1+\sum_{k=2}^{n}k!\binom{n}{k}\mathbb{P}\left(T_{1}=T_{2} \right)^{k-1}\right)^{n}.\]
The last line, after expanding the product, contains terms which upper bound each term in the previous line upon applying the bound \(\binom{n}{n_{1},n_{2},\ldots,n_{j}}\leq\binom{n}{n_{1}}\ldots\binom{n}{n_{j}}\). Reusing the bound from (2) we get that
\[\mathbb{P}\left(T_{1}=T_{2}\right)\leq\left(\frac{C}{(np)^{1/4}}\right)^{\sqrt{np}}+(\sqrt{np})e^{-\frac{1}{6}\gamma\sqrt{\frac{n}{p}}}\leq n^{-(1+o(1))3\sqrt{1+\epsilon}},\]
when \(p\geq\frac{36(1+\epsilon)\log^{2}(n)}{n(\log\log(n))^{2}}\). Combining this with the above approximation for \(\mathbb{E}[P]\) we see that
\[\mathbb{E}[P] \leq\left(1+\sum_{k=2}^{n}k!\binom{n}{k}\left(\frac{1}{n^{(1+o(1))3\sqrt{1+\epsilon}}}\right)^{k-1}\right)^{n}\] \[\leq\left(1+\sum_{k=2}^{n}n^{k}\left(\frac{1}{n^{(1+o(1))3\sqrt{1+\epsilon}}}\right)^{k-1}\right)^{n}\] \[\leq\left(1+n\sum_{k=1}^{n-1}\left(\frac{1}{n^{(1+o(1))2\sqrt{1+\epsilon}}}\right)^{k}\right)^{n}\] \[\leq\left(1+n\left(\frac{1}{1-n^{-(1+o(1))2\sqrt{1+\epsilon}}}-1\right)\right)^{n}\] \[\leq\exp\left\{\frac{n^{2}}{n^{(1+o(1))2\sqrt{1+\epsilon}}}\right\}\to 1,\]
as \(n\to\infty\), where the third inequality uses that \(n^{k}\big{(}n^{-(1+o(1))3\sqrt{1+\epsilon}}\big{)}^{k-1}=n\big{(}n^{1-(1+o(1))3\sqrt{1+\epsilon}}\big{)}^{k-1}\leq n\big{(}n^{-(1+o(1))2\sqrt{1+\epsilon}}\big{)}^{k-1}\) since \(\sqrt{1+\epsilon}\geq 1\). Hence \(\mathbb{E}[P]\to 1\) as \(n\to\infty\), as there is always at least one valid permutation (the original ordering before shredding).
## 5 Unique Reconstructibility
A common problem of interest in most reconstruction models is that of finding which parameters \(p=p(n)\) are such that reconstructibility of the structure being studied is guaranteed with high probability. Our algorithm gives an upper bound of \(\frac{16\log^{2}(n)}{n(\log\log(n))^{2}}\) for the critical value at which reconstructibility can be ensured, though with the first moment method approach we can improve that bound.
**Theorem 4**.: _Let \(M\) be an \(n\times n\) random binary matrix with i.i.d. entries \(m_{ij}\) with \(\mathbb{P}\left(m_{ij}=1\right)=p\) and \(\mathbb{P}\left(m_{ij}=0\right)=1-p\). Then, for any \(\epsilon>0\), \(\mathbb{P}\left(M\text{ is reconstructible}\right)\to 1\) as \(n\to\infty\) for \(p\geq\frac{2(1+\epsilon)\log(n)}{n}\)._
The following lemma offers us a second, equivalent definition of reconstructibility that is better suited for completing the computations in the proof of Theorem 4.
**Lemma 5**.: _Let \(M\) be an \(n\times n\) binary matrix with shredded column and row collections given by \(\gamma_{1},...,\gamma_{n}\) and \(\rho_{1},...,\rho_{n}\) respectively, and let \(M_{\sigma,\tau}\) denote the matrix obtained from permuting the rows by \(\sigma\) and the columns by \(\tau\), \(M_{\sigma,\tau}=(m_{\sigma(i),\tau(j)})_{i,j=1}^{n}\) for a particular pair \((\sigma,\tau)\in S_{n}^{2}\setminus\{(\mathrm{Id},\mathrm{Id})\}\) (here \(\mathrm{Id}\) just means the identity permutation that sends each \(i\in[n]\) to itself). Then,_
\[\{M\text{ is not reconstructible}\}=\bigcup_{\begin{subarray}{c}(\sigma,\tau) \in S_{n}^{2}\\ (\sigma,\tau)\neq(\mathrm{Id},\mathrm{Id})\end{subarray}}\{M_{\sigma,\tau}=M\}.\]
Proof of Theorem 4.: Define,
\[N=\sum_{(\sigma,\tau)\in(S_{n}\setminus\{\mathrm{Id}\})^{2}}\mathds{1}_{\{M_{ \sigma,\tau}=M\}}.\]
A quick computation shows that \(\mathbb{E}[N]=(n!-1)^{2}\mathbb{P}\left(M_{\sigma,\tau}=M\right)\), where \((\sigma,\tau)\) are independent and both uniform over \(S_{n}\setminus\{\mathrm{Id}\}\). Before bounding this expression, we need some further exploration of the events \(\{M_{\sigma,\tau}=M\}\).
We define the permutation graph of a pair \(\sigma,\tau\in S_{n}\) to be the directed graph on \([n]^{2}=\{(i,j):1\leq i,j\leq n\}\) where each vertex \((i,j)\) has an out-going edge pointing to \((\sigma(i),\tau(j))=(\sigma_{i},\tau_{j})\). If \(\sigma,\tau\in S_{n}\) have cyclic decompositions \(\sigma=a_{1}\cdots a_{m}\) and \(\tau=b_{1}\cdots b_{k}\), a particular pair of cycles \(a_{i}\) and \(b_{j}\) acts on exactly the \(|a_{i}|\times|b_{j}|\) sub-matrix of \(M\) that corresponds to the rows that \(a_{i}\) acts on and the columns that \(b_{j}\) acts on (here \(|\cdot|\) denotes the length). In the permutation graph, this \(|a_{i}|\times|b_{j}|\) sized region corresponds exactly to a collection of \(\gcd(|a_{i}|,|b_{j}|)\) disjoint cycles, all of length \(\mathrm{lcm}(|a_{i}|,|b_{j}|)\). In order to have \(M_{\sigma,\tau}=M\), it is necessary to have equality among all entries of \(M\) that lie within the same cycle of the permutation graph. That is,
\[\{M_{\sigma,\tau}=M\}\subseteq\bigcap_{i,j=1}^{n}\{m_{i,j}=m_{\sigma^{\ell}(i ),\tau^{\ell}(j)}\text{ for all }\ell\in\mathbb{N}\}. \tag{3}\]
Since each of the cycles are disjoint, the events in the intersection are all independent. Using (3) along with our original expression for \(\mathbb{E}[N]\) we get that
\[\mathbb{E}[N]\leq(n!)^{2}\mathbb{E}\left[\prod_{(i,j)\in S}\left(p^{\mathrm{ lcm}(|a_{i}|,|b_{j}|)}+(1-p)^{\mathrm{lcm}(|a_{i}|,|b_{j}|)}\right)^{\gcd(|a_{i}|,|b_{j}|)}\right],\]
where \(S=\{(i,j):1\leq i\leq m,1\leq j\leq k,(|a_{i}|,|b_{j}|)\neq(1,1)\}\). If we let \(c_{1}(\sigma)\) and \(c_{1}(\tau)\) denote the number of singleton cycles in \(\sigma\) and \(\tau\), then we can factor out powers of \((1-p)\) and use the fact that \(|a_{i}|\cdot|b_{j}|\geq 2\) for \((i,j)\in S\),
\[\mathbb{E}[N] \leq(n!)^{2}\mathbb{E}\left[(1-p)^{n^{2}-c_{1}(\sigma)c_{1}(\tau)}\prod_{(i,j)\in S}\left(1+\left(\frac{p}{1-p}\right)^{\mathrm{lcm}(|a_{i}|,|b_{j}|)}\right)^{\gcd(|a_{i}|,|b_{j}|)}\right]\] \[\leq(n!)^{2}\mathbb{E}\left[(1-p)^{n^{2}-c_{1}(\sigma)c_{1}(\tau)}\exp\left\{\sum_{(i,j)\in S}\gcd(|a_{i}|,|b_{j}|)\left(\frac{p}{1-p}\right)^{2}\right\}\right]\] \[\leq(n!)^{2}\mathbb{E}\left[e^{-pn^{2}+pc_{1}(\sigma)c_{1}(\tau)}e^{4(n^{2}-c_{1}(\sigma)c_{1}(\tau))p^{2}}\right].\]
By bounding the expected value in the final upper bound (see Lemma 7), one can show that \(\mathbb{E}[N]\to 0\) as \(n\to\infty\) for
\[\frac{(2+\epsilon)\log(n)}{n}\leq p\leq\frac{17\log^{2}(n)}{n(\log\log(n))^{2 }},\]
which is sufficient because Theorem 1 covers the case where \(p\geq\frac{16(1+\epsilon)\log^{2}(n)}{n(\log\log(n))^{2}}\) for any \(\epsilon>0\). Applying Lemma 5 with the union bound we see that
\[\mathbb{P}\left(M\text{ is not reconstructible}\right)\leq\mathbb{E}[N]+\mathbb{P}\left(\bigcup_{\begin{subarray}{c}(\sigma,\tau)\in S_{n}^{2}\setminus\{(\mathrm{Id},\mathrm{Id})\}\\ \sigma=\mathrm{Id}\text{ or }\tau=\mathrm{Id}\end{subarray}}\{M_{\sigma,\tau}=M\}\right).\]
However, if one of \(\sigma\) or \(\tau\) is the identity then there must be at least two rows or columns that are identical in \(M\), as the other cannot be the identity. Thus by Lemma 3
\[\mathbb{P}\left(\bigcup_{\begin{subarray}{c}(\sigma,\tau)\in S_{n}^{2}\setminus\{(\operatorname{Id},\operatorname{Id})\}\\ \sigma=\operatorname{Id}\text{ or }\tau=\operatorname{Id}\end{subarray}}\{M_{\sigma,\tau}=M\}\right)\to 0\text{ as }n\rightarrow\infty\text{ for }p\geq\frac{2(1+\epsilon)\log(n)}{n}.\]
Combining this with the above we obtain that \(\mathbb{P}\left(M\text{ is not reconstructible}\right)\to 0\) as \(n\rightarrow\infty\) for \(p\geq\frac{2(1+\epsilon)\log(n)}{n}\).
## 6 Proofs of Lemmas
**Lemma 2**.: _Suppose that \(p=p(n)\) is some sequence such that \(np\geq 16\). There exists a positive constant \(\gamma>0\) such that \(b_{n,p,\lfloor np\rfloor+i}\geq 2\gamma\frac{1}{\sqrt{np}}\) for all \(i\in[0,\lfloor\sqrt{np}\rfloor]\). Furthermore,_
\[\mathbb{P}\left(|\mathcal{P}_{\lfloor np\rfloor+i}|\leq\gamma\sqrt{\frac{n}{p }}\right)\leq e^{-\frac{1}{6}\gamma\sqrt{\frac{n}{p}}},\]
_where \(\mathcal{P}_{i}=\{1\leq j\leq n:|\gamma_{j}|=i\}\) and \(\gamma_{1},...,\gamma_{n}\) are the shredded columns._
Proof.: Since \(|\mathcal{P}_{k}|=\sum_{i=1}^{n}\mathds{1}_{\{\text{column }i\text{ has weight }k\}}\), and each column has weight \(k\) with probability \(b_{n,p,k}=\binom{n}{k}p^{k}(1-p)^{n-k}\), it holds that \(|\mathcal{P}_{k}|\sim\operatorname{binomial}(n,b_{n,p,k})\). By a Chernoff bound we obtain,
\[\mathbb{P}\left(|\mathcal{P}_{k}|\leq\frac{1}{2}nb_{n,p,k}\right)\leq e^{- \frac{1}{12}nb_{n,p,k}}.\]
From here it suffices to show that there is a constant \(\gamma>0\) such that \(\frac{1}{2}nb_{n,p,k}\geq\gamma\sqrt{\frac{n}{p}}\) when \(k=\lfloor np\rfloor+i\), \(i\in[0,\lfloor\sqrt{np}\rfloor]\). To do this we show the following: for any \(0\leq x\leq\sqrt{np}\) such that \(np+x\) is integer-valued, \(b_{n,p,np+x}\geq\frac{\alpha}{\sqrt{np}}\) for some \(\alpha>0\). Repeatedly apply Stirling's bounds
\[\sqrt{2\pi n}\left(\frac{n}{e}\right)^{n}\leq n!\leq e\sqrt{2\pi n}\left( \frac{n}{e}\right)^{n},\]
to yield \(b_{n,p,np+x}\geq e^{-2}A_{1}A_{2}A_{3}\), where
\[A_{1}=\frac{1}{\left(1+\frac{x}{np}\right)^{np}\left(1-\frac{x}{n(1-p)} \right)^{n(1-p)}},\;A_{2}=\left(\frac{1-\frac{x}{n(1-p)}}{1+\frac{x}{np}} \right)^{x},A_{3}=\frac{1}{\sqrt{2\pi}}\sqrt{\frac{n}{(np+x)(n(1-p)-x)}}.\]
Using the fact that \(1+y\leq e^{y}\) for all \(y\in\mathbb{R}\), we get
\[A_{1}\geq\frac{1}{e^{x}e^{-x}}=1.\]
Next, since \(p\geq\frac{16}{n}\),
\[A_{2}\geq\left(\left(1-\frac{x}{n(1-p)}\right)\left(1-\frac{x}{np}\right) \right)^{x}\geq\left(1-\frac{x}{np(1-p)}\right)^{x}\geq\left(1-\frac{2}{\sqrt{np }}\right)^{\sqrt{np}}\geq\frac{1}{2^{4}}=\frac{1}{16}.\]
Finally, using again the fact that \(\frac{1}{2}\geq p\geq\frac{16}{n}\),
\[A_{3} \geq\frac{1}{\sqrt{2\pi np(1-p)}}\frac{1}{\sqrt{(1+\frac{x}{np})( 1-\frac{x}{n(1-p)})}}\] \[\geq\frac{1}{\sqrt{2\pi np(1+\frac{x}{np(1-p)})}}\] \[\geq\frac{1}{\sqrt{2\pi np(1+\frac{2}{\sqrt{np}})}}\] \[\geq\frac{1}{\sqrt{3\pi np}}.\]
Putting everything together we get that
\[b_{n,p,np+x}\geq\left(\frac{1}{16e^{2}\sqrt{3\pi}}\right)\frac{1}{\sqrt{np}}.\]
**Lemma 3**.: _Let \(M\) be an \(n\times n\) random binary matrix with i.i.d. entries \(m_{ij}\) such that \(\mathbb{P}\left(m_{ij}=1\right)=p\) and \(\mathbb{P}\left(m_{ij}=0\right)=1-p\). Then, for any \(\epsilon>0\), \(\mathbb{P}\left(M\text{ has at least two equal rows or columns}\right)\to 0\) as \(n\rightarrow\infty\) if \(p\geq\frac{(1+\epsilon)\log(n)}{n}\)._
Proof.: Let \(r_{1},...,r_{n}\) be the rows of \(M\), \(A_{i,j}=\{r_{i}=r_{j}\}\), and let \(N=\sum_{i<j}\mathds{1}_{A_{i,j}}\). Then,
\[\mathbb{E}[N] =\binom{n}{2}\mathbb{P}\left(A_{1,2}\right)=\binom{n}{2}(p^{2}+( 1-p)^{2})^{n}\] \[=\binom{n}{2}(1-p)^{2n}\left(1+\frac{p^{2}}{(1-p)^{2}}\right)^{n}\] \[\leq\binom{n}{2}e^{-2np+4np^{2}}\to 0\quad\text{as }n\to\infty,\]
when \(p\geq(1+\epsilon)\frac{\log(n)}{n}\). Since the columns have the same distribution as the rows, the result follows by Markov's inequality.
**Lemma 5**.: _Let \(M\) be an \(n\times n\) binary matrix with shredded column and row collections given by \(\gamma_{1},...,\gamma_{n}\) and \(\rho_{1},...,\rho_{n}\) respectively, and let \(M_{\sigma,\tau}\) denote the matrix obtained from permuting the rows by \(\sigma\) and the columns by \(\tau\), \(M_{\sigma,\tau}=(m_{\sigma(i),\tau(j)})_{i,j=1}^{n}\) for a particular pair \((\sigma,\tau)\in S_{n}^{2}\setminus\{(\mathrm{Id},\mathrm{Id})\}\) (here
Id _just means the identity permutation that sends each \(i\in[n]\) to itself). Then,_
\[\{M\text{ is not reconstructible}\}=\bigcup_{\begin{subarray}{c}(\sigma,\tau)\in S _{n}^{2}\\ (\sigma,\tau)\neq(\operatorname{Id},\operatorname{Id})\end{subarray}}\{M_{ \sigma,\tau}=M\}. \tag{4}\]
Proof.: Suppose that \(M\) is not reconstructible. Then, there exist two distinct pairs of permutations \((\sigma,\tau)\) and \((\sigma^{\prime},\tau^{\prime})\) that satisfy (1). That is, there is some matrix \(M^{\prime}\) (possibly equal to \(M\)) such that
\[M=\begin{bmatrix}\rho_{\sigma_{1}}\\ \vdots\\ \rho_{\sigma_{n}}\end{bmatrix}=\begin{bmatrix}\gamma_{\tau_{1}}&\cdots&\gamma _{\tau_{n}}\end{bmatrix},\text{ and }M^{\prime}=\begin{bmatrix}\rho_{\sigma_{1}^{\prime}}\\ \vdots\\ \rho_{\sigma_{n}^{\prime}}\end{bmatrix}=\begin{bmatrix}\gamma_{\tau_{1}^{ \prime}}&\cdots&\gamma_{\tau_{n}^{\prime}}\end{bmatrix}.\]
Suppose that \(r_{1},...,r_{n}\) and \(c_{1},...,c_{n}\) are the rows and columns in their original, pre-shredding order. Applying \(\sigma^{\prime}\circ\sigma^{-1}\) to \((r_{1},...,r_{n})\) and \(\tau\circ(\tau^{\prime})^{-1}\) to \((c_{1},...,c_{n})\) must necessarily send \(M\) back to itself:
1. Applying \(\sigma^{-1}\) to the rows of \(M\) yields the matrix associated with the shredded rows \(R=[\rho_{1}\cdots\rho_{n}]^{T}\);
2. Applying \(\sigma^{\prime}\) to \(R\) gives \([\rho_{\sigma_{1}^{\prime}}\cdots\rho_{\sigma_{n}^{\prime}}]^{T}=[\gamma_{ \tau_{1}^{\prime}}\cdots\gamma_{\tau_{n}^{\prime}}]=M^{\prime}\) by the above identity;
3. Applying \((\tau^{\prime})^{-1}\) to \(M^{\prime}\) brings us to the matrix associated with the shredded columns \(C=[\gamma_{1}\cdots\gamma_{n}]\);
4. Finally, applying \(\tau\) to \(C\) brings us back to our original matrix \([\gamma_{\tau_{1}}\cdots\gamma_{\tau_{n}}]=M\).
The composed pair \((\sigma^{\prime}\circ\sigma^{-1},\tau\circ(\tau^{\prime})^{-1})\) cannot equal \((\operatorname{Id},\operatorname{Id})\) because we assume the pairs \((\sigma,\tau)\) and \((\sigma^{\prime},\tau^{\prime})\) are distinct, and so the inclusion \(\subseteq\) holds in (4).
For the other direction in (4) we suppose we have \((\sigma,\tau)\in S_{n}^{2}\setminus\{(\operatorname{Id},\operatorname{Id})\}\) such that \(M_{\sigma,\tau}=M\). Then, given the shredded collections \(\gamma_{1},...,\gamma_{n}\) and \(\rho_{1},...,\rho_{n}\), we can compose any correct pair of permutations satisfying (1) with \((\sigma,\tau)\) to obtain a second, distinct pair that also satisfies (1). Hence, \(M\) is not reconstructible.
**Lemma 6**.: _Suppose that \(X\sim\operatorname{binomial}(n,p)\) and that \(np\geq 16\). Then,_
\[\mathbb{E}\left[\frac{1}{\sqrt{X\lor 1}}\right]\leq\frac{3}{\sqrt{np}}.\]
Proof.: Splitting the expectation into two pieces and then applying the Chebyshev-Cantelli inequality gives us the upper bound
\[\mathbb{E}\left[\frac{1}{\sqrt{X\lor 1}}\right]\leq\sqrt{\frac{2}{np}}+ \mathbb{P}\left(X\leq\frac{np}{2}\right)\leq\sqrt{\frac{2}{np}}+\frac{np(1-p)} {np(1-p)+(\frac{np}{2})^{2}}.\]
Utilizing the fact that \(np\geq 16\) we can see that
\[\frac{np(1-p)}{np(1-p)+(\frac{np}{2})^{2}}\leq\frac{4}{4+np}\leq\frac{1}{ \sqrt{np}},\]
which combined with the above gives
\[\mathbb{E}\left[\frac{1}{\sqrt{X\lor 1}}\right]\leq\frac{\sqrt{2}}{\sqrt{np}}+ \frac{1}{\sqrt{np}}\leq\frac{3}{\sqrt{np}}.\]
**Lemma 7**.: _Let \(\sigma,\tau\) be independent uniform permutations over \(S_{n}\setminus\{\mathrm{Id}\}\), and let \(c_{1}(\sigma),c_{1}(\tau)\) be the number of singleton cycles in both \(\sigma\) and \(\tau\) respectively. Then, for any \(\epsilon>0\),_
\[a_{n}:=(n!)^{2}\mathbb{E}\left[e^{-pn^{2}+pc_{1}(\sigma)c_{1}(\tau)}e^{4(n^{2}- c_{1}(\sigma)c_{1}(\tau))p^{2}}\right]\to 0\]
_as \(n\to\infty\) for_
\[\frac{(2+\epsilon)\log(n)}{n}\leq p\leq\frac{17\log^{2}(n)}{n(\log\log(n))^{2}}. \tag{5}\]
Proof.: First, we write the expression in the statement of the lemma as
\[a_{n}=\sum_{0\leq x,y\leq n-1}(n!)^{2}\exp\left\{-pn^{2}+pxy+4(n^{2}-xy)p^{2} \right\}\mathbb{P}\left(c_{1}(\sigma)=x\right)\mathbb{P}\left(c_{1}(\tau)=y \right).\]
We split off the terms with \(xy=0\) and upper bound by
\[a_{n}\leq 2n(n!)^{2}\exp\left\{-pn^{2}+4n^{2}p^{2}\right\}+C\sum_{1\leq x,y \leq n-1}\frac{(n!)^{2}}{x!y!}\exp\left\{-pn^{2}+pxy+4(n^{2}-xy)p^{2}\right\},\]
for some \(C>0\) such that \(\mathbb{P}\left(c_{1}(\sigma)=x\right)\leq\sqrt{C}\frac{1}{x!}\). Such a \(C\) exists because \(\mathbb{P}\left(c_{1}(\sigma^{\prime})=x\right)\sim\frac{1}{x!}\) for \(\sigma^{\prime}\in S_{n}\) uniformly drawn (see Arratia and Tavare (1992) and Ford (2022) for a discussion of random permutation statistics). One can see immediately that the first term tends to \(0\) for \(p\) in the described range, so all that is left is the second term. Relabelling \(x=n-k\) and \(y=n-\ell\) we can upper bound the sum by
\[C\sum_{1\leq k,\ell\leq n-1}n^{k+\ell}\exp\left\{-p(n(k+\ell)-k\ell)(1+o(1)) \right\}\leq C\sum_{1\leq\ell\leq n-1}\bigg{(}n\sup_{0\leq k\leq n}f_{\ell}(k )\bigg{)}, \tag{6}\]
where \(f_{\ell}(k)=e^{-((np-\log(n))(k+\ell)-pk\ell)(1+o(1))}\) with \(k\) now being allowed to take on real values. To find \(\max_{0\leq k\leq n}f_{\ell}(k)\) it suffices to find \(\min_{0\leq k\leq n}((np-\log(n))(k+\ell)-pk\ell):=\min_{0\leq k\leq n}g_{\ell} (k)\). Since \(g_{\ell}(k)\) is linear in \(k\) it is monotone, and so
\[\min_{0\leq k\leq n}g_{\ell}(k)=\min\bigg{\{}(np-\log(n))\ell,(n^{2}p-n\log(n) -\ell\log(n))\bigg{\}}\geq\min\bigg{\{}(1+\epsilon)\log(n)\ell,\epsilon n\log (n)\bigg{\}}.\]
For the above inequality we use the assumptions on \(p\) from (5). Combining this bound with (6) gives
\[a_{n}\leq C\sum_{1\leq\ell\leq n-1}n^{-\epsilon\ell(1+o(1))}+C\sum_{1\leq\ell \leq n-1}n^{-n\epsilon(1+o(1))-1}\leq C\sum_{1\leq\ell\leq n-1}n^{-\epsilon\ell (1+o(1))}+Cn^{-n\epsilon(1+o(1))-2},\]
which converges to 0 as \(n\to\infty\). Altogether, this proves that \(a_{n}\to 0\) for \(p\) in the desired range.
|
2304.05190
|
Thermodynamics of a rotating hadron resonance gas with van der Waals
interaction
|
Studying the thermodynamics of the systems produced in ultra-relativistic
heavy-ion collisions is crucial in understanding the QCD phase diagram.
Recently, a new avenue has opened regarding the implications of large initial
angular momentum and subsequent vorticity in the medium evolution in
high-energy collisions. This adds a new type of chemical potential into the
partonic and hadronic systems, called the rotational chemical potential. We
study the thermodynamics of an interacting hadronic matter under rotation,
formed in an ultra-relativistic collision. We introduce attractive and
repulsive interactions through the van der Waals equation of state.
Thermodynamic properties like the pressure ($P$), energy density
($\varepsilon$), entropy density ($s$), trace anomaly ($(\varepsilon -
3P)/T^{4}$), specific heat ($c_{\rm v}$) and squared speed of sound ($c_{\rm
s}^{2}$) are studied as functions of temperature ($T$) for zero and finite
rotation chemical potential. The conserved charge fluctuations, which can be
quantified by their respective susceptibilities, are also studied. The
rotational (spin) density corresponding to the rotational chemical potential is
explored. In addition, we explore the possible liquid-gas phase transition in
the hadron gas with van der Waals interaction in the $T$ -- $\omega$ phase
space.
|
Kshitish Kumar Pradhan, Bhagyarathi Sahoo, Dushmanta Sahu, Raghunath Sahoo
|
2023-04-11T12:49:56Z
|
http://arxiv.org/abs/2304.05190v2
|
# Thermodynamics of a rotating hadron resonance gas with van der Waals interaction
###### Abstract
Studying the thermodynamics of the systems produced in ultra-relativistic heavy-ion collisions is crucial in understanding the QCD phase diagram. Recently, a new avenue has opened regarding the implications of large initial angular momentum and subsequent vorticity in the medium evolution in high-energy collisions. This adds a new type of chemical potential into the partonic and hadronic systems, called the rotational chemical potential. We study the thermodynamics of an interacting hadronic matter under rotation, formed in an ultra-relativistic collision. We introduce attractive and repulsive interactions through the van der Waals equation of state. Thermodynamic properties like the pressure (\(P\)), energy density (\(\varepsilon\)), entropy density (\(s\)), trace anomaly (\((\varepsilon-3P)/T^{4}\)), specific heat (\(c_{\rm v}\)) and squared speed of sound (\(c_{\rm s}^{2}\)) are studied as functions of temperature (\(T\)) for zero and finite rotation chemical potential. The charge fluctuations, which can be quantified by their respective susceptibilities, are also studied. The rotational (spin) density corresponding to the rotational chemical potential is explored. In addition, we explore the possible liquid-gas phase transition in the hadron gas with van der Waals interaction in the \(T\) - \(\omega\) phase space.
pacs:
## I Introduction
Intense investigations have been going on to understand the behavior of the strongly interacting matter produced in ultra-relativistic heavy-ion collisions at the Relativistic Heavy Ion Collider (RHIC) at BNL and the Large Hadron Collider (LHC) at CERN. Such matter can be described by Quantum Chromodynamics (QCD). According to QCD, a smooth crossover phase transition is expected in the region of high temperatures (\(T\)) and vanishing baryochemical potentials (\(\mu_{\rm B}\)) in the QCD phase diagram, which is described by the RHIC and LHC experiments. As one goes towards the low temperature, high baryochemical potential region, the phase transition becomes a first-order one. These phase transition lines meet at a hypothesized critical endpoint (CEP), which has been one of the most exciting topics of discussion in the high-energy physics community. To gather valuable and reliable information about the QCD matter, lattice QCD (lQCD) has been the most successful theory based on first principles. However, at non-vanishing chemical potential, lQCD breaks down because of the fermion sign problem [1; 2]. Nevertheless, there have been significant attempts to bypass this problem indirectly [3; 4; 5], but the issue still largely persists. The more simplistic Hadron Resonance Gas (HRG) model effectively explains the QCD matter behavior and matches the lQCD data up to a temperature \(T\simeq 150\) MeV [6; 7; 8]. The HRG model works both at zero baryochemical potential and in baryon-rich environments and successfully explains the hadron yields from heavy-ion collisions using only two parameters, the temperature and the baryochemical potential. However, after \(T\simeq 150\) MeV, the hadrons start to melt, and the results from the HRG model start deviating from the lQCD estimations. The HRG model also breaks down while estimating the higher-order fluctuations and correlations of conserved charges [9]. It has been observed that repulsive interactions between the hadrons can substantially affect the behavior of thermodynamic and transport properties, particularly the higher order fluctuations [10; 11]. The repulsive interactions can be incorporated into the hadron gas by a van der Waals type repulsion, where the hardcore radius of the hadrons serves as the repulsion in the Excluded Volume Hadron Resonance Gas (EVHRG) model, or through a repulsive mean field potential which is introduced in the hadron gas in the Relativistic Mean-Field Hadron Resonance Gas (RMFHRG) model. Recently, an interacting hadron resonance gas model was introduced where both long-distance attraction and short-distance repulsion between the hadrons were taken into account through the van der Waals equation of state [12]. The van der Waals-type interaction in the hadronic medium delays the melting of hadrons, and thus, the VDWHRG model can explain the lQCD data even up to \(T\sim 180\) MeV. This model, which has a liquid-gas phase transition, is very effective in estimating a variety of thermodynamic and transport properties [13; 14; 15].
In order to understand the medium formed in ultra-relativistic collisions, it is essential to study its thermodynamic properties. The fundamental thermodynamic quantities, such as pressure (\(P\)), energy density (\(\varepsilon\)), and entropy density (\(s\)), can give us necessary information about the system. The scaled pressure, energy density, and entropy density provide information about the degrees of freedom in the medium. Similarly, the speed of sound tells us about the interactions in the medium. A massless ideal gas gives \(c_{\rm s}^{2}=1/3\), whereas the value for a hadron gas is \(1/5\). Studying \(c_{\rm s}^{2}\) can help us understand whether the medium is partonic or hadronic or approaches a massless ideal gas limit. On the other hand,
the specific heat of a system is estimated via the temperature fluctuations in the system. It gives the measure of the amount of heat energy needed to raise the system's temperature by one unit. It is also expected to diverge near the critical point and thus is an excellent observable to study the phase transition. Similarly, trace anomaly also plays a vital role in the QCD dynamics and phase transition. It measures the deviation from the masslessness of the constituents in the medium. In recent studies [16; 17; 18; 19], various thermodynamic observables were studied as functions of final state charged particle density in pseudorapidity (\(\langle dN_{\rm ch}/d\eta\rangle\)), which shed light on a possible change in dynamics of the system after a threshold in \(\langle dN_{\rm ch}/d\eta\rangle\). This suggests that after \(\langle dN_{\rm ch}/d\eta\rangle\simeq 10\) - 20, small systems like pp and p-Pb mimic the behavior of heavy-ion collisions.
In addition, studying the correlations and fluctuations of conserved charges is a reliable method for comprehending the physics of the phase transition of strongly interacting matter. As the fluctuation-dissipation theorem relates susceptibilities to fluctuations in a system near thermal equilibrium, the related susceptibilities indicate inherent statistical fluctuations. Changes in conserved charges at finite temperatures and chemical potential are sensitive signs that hadronic matter is changing into quark-gluon plasma (QGP). Moreover, divergent fluctuations can also indicate the presence of the CEP; however, the shift from the hadronic to QGP phase is continuous for the vanishing net baryon chemical potential.
Recently, it was found that there is a finite hyperon polarization in relativistic heavy-ion collisions at the STAR experiment, which led to the conclusion that finite vorticity is present in the medium [20]. This opens up a whole new window of exciting consequences. The vorticity or rotation gets coupled with the temperature in the medium, thus changing the entire dynamics of the system evolution [21; 22]. The fundamental Euler's thermodynamic equation gets modified in the presence of a finite rotation, adding a new rotation chemical potential into the system [23; 24]. Apart from the vorticity coming from the initial global orbital angular momentum of the colliding heavy ions, there are several other sources from which vorticity can be generated in a system. The smoke-loop type vortex created in the vicinity of fast-moving jets in an expanding fireball also contributes to the vorticity of the system, although they are not responsible for hyperon polarization [25]. Inhomogeneous transverse expansion of fireball may produce transverse vorticity circling the longitudinal axis in the system [26; 27; 28; 29; 30; 31]. Additionally, vorticity can be generated from the Einstein-de Haas effect [32], where a magnetized medium creates a finite rotation. This indicates that the huge magnetic field produced in heavy ion collisions due to fast-moving spectators may magnetize the medium and it may generate a large amount of vorticity in the system, whereas the reverse effect is the famous Barnett effect [33]. Thus, the analogy between rotation and the magnetic field is a well-known phenomenon studied in many physical systems [34; 35]. Similarly, viscosity in the medium is also responsible for generating finite vorticity and vice versa [22; 36]. Thus, it is necessary to include the effect of rotation while studying the medium formed in an ultra-relativistic collision. Recently, many studies have been conducted with the introduction of rotation to understand the QCD phase structure. Vorticity formation in the ultra-relativistic heavy-ion collision has been studied from hydrodynamic models such as ECHO-QGP, PIGR, vHLLE, MUSIC, 3-FD, CLVisc in (3+1) dimensional model [37; 38; 39; 40; 41]. Event generators, such as AMPT, UrQMD, and HIJING, have also been used to estimate kinematic and thermal vorticity [42; 43; 27; 44; 45; 42]. Moreover, the non-zero local vorticity can help us probe the chiral vortical effect (CVE), which is a non-trivial consequence of topological quantum chromodynamics [46; 47]. This effect is the vortical analog of the chiral magnetic effect (CME) [48; 49] and chiral separation effect (CSE) [50; 51]. It represents the vector, and axial currents generation along the vorticity [52; 53; 54; 55]. CVE is extremely important because it induces baryon charge separation along the vorticity direction, which can be experimentally probed by two-particle correlations [56].
There are several studies on the effect of magnetic fields on the QCD phase diagram. In [57], the authors have coupled the linear sigma model to quarks to study the chiral transition as well as to Polyakov loops to consider the confinement. Taking a constant uniform magnetic field, they investigated how the magnetic field affects the chiral and deconfinement transitions. It was shown that the chiral condensate is enhanced by the magnetic field, and the transition temperature rises as a result. This phenomenon is commonly known as magnetic catalysis. Numerous studies that involve the Nambu-Jona-Lasinio model and its extended versions, such as the PNJL [58] and EPNJL [59] models, came to similar conclusions regarding the rise in \(T_{c}\) and the strength of the transitions. However, in contrast to this, the lattice QCD results [60; 61] showed that the magnetic field actually suppresses rather than increases the critical temperature for the chiral phase transition of QCD. Since rotation in the medium adds another kind of chemical potential, it can affect the phase transition, and hence, it will be intriguing to observe how rotation affects the QCD phase diagram. In ref. [62], the authors explore deconfinement in a rapidly rotating hot and dense hadronic matter. Similarly, in refs. [63; 64], the authors investigate the chiral phase transition in a system of fermions under rotation using the Nambu-Jona-Lasinio (NJL) model. Here the authors have shown the importance of angular velocity in determining the phase transition from hadronic to quark degrees of freedom. Their results are presented as a phase diagram in the temperature-angular momentum \(T-\omega\) plane along with the phase diagram in the temperature-chemical potential \(T-\mu_{B}\) plane. Similarly, the quark deconfinement has been studied in rotating neutron stars in [65]. The authors in ref. [66] have studied the chiral phase transition along with the spin polarization in a three-flavor NJL model. Apart from this, the rotation effect has been explored in the mesonic condensation of the isospin matter [67]. The authors in [68] have also studied a combined effect of rotation and magnetic field on pion condensation. They have demonstrated increased condensation upon increasing the rotation (angular velocity). These kinds of studies show that introducing angular velocity in the medium adds another kind of chemical potential, known as the rotational chemical potential. Similar to the baryon chemical potential, this rotational chemical potential can also lead to a phase transition.
Given the above information, it would be interesting to understand what effect rotation plays in quantifying a hadronic system's thermodynamic properties. In this work, we take rotation into account and estimate various thermodynamic properties and charge fluctuations in an interacting hadron gas for the first time. We also look for a possible criticality in the rotating medium for a liquid-gas phase transition. The structure of this work is as follows. The section II gives a detailed calculation of the thermodynamic observables and the susceptibilities within the scope of a VDWHRG model that includes rotation. We briefly examine the results in section III and provide a summary in section IV.
## II Formulation
In this work, we have assumed a system of relativistic gas of massive fermions and bosons having half-integral and integral spin (\(S\)), rotating with a constant angular velocity vector \(\omega\). The density operator for rotational grand canonical ensemble having large volume \(V\) and angular momentum \(\mathbf{J}\) is given as [69; 70; 71; 72; 73; 74]
\[\widehat{\rho}_{\omega}=\frac{1}{Z_{\omega}}\exp[(-\widehat{H}+\mu\widehat{Q} +\omega\cdot\widehat{\mathbf{J}})/T]\mathsf{P}_{V}, \tag{1}\]
where \(\widehat{H}\) is the hamiltonian operator, \(\widehat{Q}\) is a generic conserved charges, \(\mu\) is the relevant chemical potential, and \(Z_{\omega}\) is the partition function of the rotating system given as
\[Z_{\omega}=\mathrm{tr}\exp[(-\widehat{H}+\mu\widehat{Q}+\omega\cdot\widehat{ \mathbf{J}})/T]\mathsf{P}_{V}. \tag{2}\]
The \(\mathsf{P}_{V}\) is the projector onto localized states \(|h_{V}\rangle\), \(\mathsf{P}_{V}=\sum_{h_{V}}|h_{V}\rangle\langle h_{V}|\). The partition function in Eq. (2) can be written as a product of single-particle partition functions. The calculation of Eq. (2) then reduces to compute matrix elements of single-particle compatible operators \(\widehat{h},\widehat{q},\widehat{\mathbf{j}}\), like this [69; 70]:
\[\langle p_{\pm},\tau_{\pm}|\exp[-(\widehat{h}+\mu\widehat{q}+\omega\cdot \widehat{\mathbf{j}})/T]\mathsf{P}_{V}|p_{\pm},\sigma_{\pm}\rangle \tag{3}\] \[=(\mathrm{e}^{(\epsilon-\mu q)/T}\pm 1)^{-1}\langle p_{\pm},\tau_{ \pm}|\exp[\omega\cdot\widehat{\mathbf{j}}/T]\mathsf{P}_{V}|p_{\pm},\sigma_{ \pm}\rangle,\]
where \(p_{\pm}\) is the single particle four-momentum. The \(\sigma_{\pm}\) and \(\tau_{\pm}\) are level of polarization states. The \(\pm\) corresponds to fermions and bosons, respectively. An analytical extension from imaginary values of \(\omega\) can be used to derive the matrix elements on the right-hand side of Eq. (3). After replacing \(\omega/T\) with \(-i\phi\), we have a rotation \(\mathsf{R}_{\hat{\omega}}(\phi)\) around the axis \(\omega\) of an angle \(\phi\). A more detailed explanation can be found in the Ref. [69; 70].
The matrix element can be rewritten in terms of the rotational matrix \(\mathsf{R}_{\hat{\mathcal{O}}}(\phi)\) as
\[\langle p_{\pm},\tau_{\pm}|\exp[\omega\cdot\widehat{\mathbf{j}}/T]\mathsf{P} _{V}|p_{\pm},\sigma_{\pm}\rangle=\langle p_{\pm},\tau_{\pm}|\mathsf{R}_{\hat{ \omega}}(\phi)\mathsf{P}_{V}|p_{\pm},\sigma_{\pm}\rangle. \tag{4}\]
Expanding the matrix element of the right-hand side,
\[\langle p_{\pm},\tau_{\pm}|\mathsf{R}_{\hat{\omega}}(\phi)\mathsf{P}_{V}|p_{ \pm},\sigma_{\pm}\rangle=\sum_{\sigma^{\prime}_{\pm}}\int d^{3}\mathrm{p}^{ \prime}\ \langle p_{\pm},\tau_{\pm}|\widehat{\mathsf{R}}_{\hat{\omega}}(\phi)|p^{ \prime}_{\pm},\sigma^{\prime}_{\pm}\rangle\langle p^{\prime}_{\pm},\sigma^{ \prime}_{\pm}|\mathsf{P}_{V}|p_{\pm},\sigma_{\pm}\rangle. \tag{5}\]
The matrix element representation of the rotation involves a Dirac delta and a Wigner matrix. Thus,
\[\langle p_{\pm},\tau_{\pm}|\widehat{\mathsf{R}}_{\hat{\omega}}(\phi)|p^{ \prime}_{\pm},\sigma^{\prime}_{\pm}\rangle=\delta^{3}\left(\mathbf{p}_{\pm}- \mathsf{R}_{\hat{\omega}}(\phi)(\mathbf{p}_{\pm}{}^{\prime})\right)D^{S}([ \mathsf{R}_{\hat{\omega}}(\phi)(p^{\prime}_{\pm})]^{-1}\mathsf{R}_{\hat{\omega }}(\phi)[p^{\prime}_{\pm}])_{\tau_{\pm}\sigma^{\prime}_{\pm}}. \tag{6}\]
It's difficult to determine the \(\mathsf{P}_{V}\) matrix element over momentum eigenstates. A theoretical quantum field framework can be used to perform the calculation [73]
\[\langle p^{\prime}_{\pm},\sigma^{\prime}_{\pm}|\mathsf{P}_{V}|p_{\pm},\sigma_ {\pm}\rangle=\frac{1}{2}\sqrt{\frac{\varepsilon}{\varepsilon^{\prime}}}\,\int_ {V}d^{3}\mathrm{x}\;\mathrm{e}^{i\mathbf{x}\cdot(\mathbf{p}_{\pm}-\mathbf{p}_ {\pm}{}^{\prime})}\left(D^{S}([p^{\prime}_{\pm}]^{-1}[p_{\pm}])+D^{S}([p^{ \prime}_{\pm}]^{\dagger}[p_{\pm}]^{\dagger-1})\right)_{\sigma^{\prime}_{\pm} \sigma_{\pm}}\langle 0|\mathsf{P}_{V}|0\rangle, \tag{7}\]
where \(\langle 0|\mathsf{P}_{V}|0\rangle\) is the vacuum expectation value of the projector \(\mathsf{P}_{V}\) and tends to 1 for large volume. Substituting Eqs. (6) and (7) into Eq. (5), we have
\[\langle p_{\pm},\tau_{\pm}|\widehat{\mathsf{R}}_{\hat{\omega}}(\phi)\mathsf{P}_ {V}|p_{\pm},\sigma_{\pm}\rangle=\int_{V}d^{3}\mathrm{x}\;\mathrm{e}^{i \mathbf{x}\cdot(\mathbf{p}_{\pm}-\mathsf{R}_{\hat{\omega}}(\phi)^{-1}(\mathbf{ p}_{\pm}))}\ \ \frac{1}{2}\left(D^{S}([p_{\pm}]^{-1}\mathsf{R}_{\hat{\omega}}(\phi)[p_{\pm}])+D^{S}([p_{ \pm}]^{\dagger}\mathsf{R}_{\hat{\omega}}(\phi)[p_{\pm}]^{\dagger-1})\right)_{ \tau_{\pm}\sigma_{\pm}} \tag{8}\]
Taking advantage of the unitarity of the Wigner rotation, i.e.,
\[D^{S}([\mathsf{R}_{\hat{\omega}}(\phi)(p^{\prime}_{\pm})]^{-1}\mathsf{R}_{\hat{ \omega}}(\phi)[p^{\prime}_{\pm}])=D^{S}([\mathsf{R}_{\hat{\omega}}(\phi)(p^{ \prime}_{\pm})]^{\dagger}\mathsf{R}_{\hat{\omega}}(\phi)[p^{\prime}_{\pm}]^{ \dagger-1}) \tag{9}\]
and the unitarity of \(\mathsf{R}\) itself as an SL(2,C) matrix, the analytical prolongation of Eq. (8) to imaginary angles yields the final expression for the matrix element in Eq. (3):
\[\begin{split}\langle p_{\pm},\tau_{\pm}|\exp[\omega\cdot\hat{ \mathbf{j}}/T]\mathsf{P}_{V}|p_{\pm},\sigma_{\pm}\rangle=&\int_{V} d^{3}\mathrm{x}\;\mathrm{e}^{i\mathbf{x}\cdot(\mathbf{p}_{\pm}-\mathsf{R}_{ \hat{\omega}}(i\omega/T)^{-1}(\mathbf{p}_{\pm}))}\\ &\times\frac{1}{2}\left(D^{S}([p_{\pm}]^{-1}\mathsf{R}_{\hat{ \omega}}(i\omega/T)[p_{\pm}])+D^{S}([p_{\pm}]^{\dagger}\mathsf{R}_{\hat{\omega }}(i\omega/T)[p_{\pm}]^{\dagger-1})\right)_{\tau_{\pm}\sigma_{\pm}}\end{split} \tag{10}\]
The equilibrium single-particle phase space distribution can be calculated with the help of the matrix element in Eq. (10). The spacial integral form in Eq. (3) allows us to write the phase-space distribution as:
\[\begin{split} f(\mathbf{x},\mathbf{p})_{\tau_{\pm}\sigma_{\pm}}=& (e^{(\varepsilon-\mu q)/T}\pm 1)^{-1}\mathrm{e}^{i\mathbf{x}\cdot( \mathbf{p}_{\pm}-\mathsf{R}_{\hat{\omega}}(i\omega/T)^{-1}(\mathbf{p}_{\pm}) )}\\ &\times\frac{1}{2}\left(D^{S}([p_{\pm}]^{-1}\mathsf{R}_{\hat{ \omega}}(i\omega/T)[p_{\pm}])+D^{S}([p_{\pm}]^{\dagger}\mathsf{R}_{\hat{\omega }}(i\omega/T)[p_{\pm}]^{\dagger-1})\right)_{\tau_{\pm}\sigma_{\pm}}\end{split} \tag{11}\]
A non-relativistic thermodynamic equilibrium system with given angular velocity \(\omega\) has a rigid rotation velocity \(\mathbf{v}=\mathbf{\omega}\times\mathbf{R}\), which in the relativistic system adds the constraint \(|\mathbf{\omega}\times\mathbf{R}|\ll 1\). Therefore, the ratio between \(\omega\) and \(T\) is very small for a proper macroscopic system (and in fact, for the majority of practical purposes), i.e.: \(\frac{\hbar\omega}{k_{B}T}\ll 1\). As a result, the lowest order term in \(\omega/T\) is a good approximation for the difference between the momenta in the exponent of Eq. (11).
\[\mathbf{p}_{\pm}-\mathsf{R}_{\hat{\omega}}(i\omega/T)^{-1}(\mathbf{p}_{\pm}) =\mathbf{p}_{\pm}-\left[\cosh\frac{\omega}{T}\,\mathbf{p}_{\pm}-i\sinh\frac{ \omega}{T}\,\hat{\omega}\times\mathbf{p}_{\pm}+(1-\cosh\frac{\omega}{T})\, \mathbf{p}_{\pm}\cdot\hat{\omega}\hat{\omega}\right]\simeq i\frac{\omega}{T} \hat{\omega}\times\mathbf{p}_{\pm} \tag{12}\]
This results in the phase-space distribution function in Eq. (11) becoming:
\[\begin{split} f(\mathbf{x},\mathbf{p}_{\pm})_{\tau_{\pm}\sigma_{ \pm}}&=\;(e^{(\varepsilon-\mu q)/T}\pm 1)^{-1}e^{-\mathbf{x}\cdot(\omega\times \mathbf{p}_{\pm})/T}\frac{1}{2}\left(D^{S}([p_{\pm}]^{-1}\mathsf{R}_{\hat{ \omega}}(i\omega/T)[p_{\pm}])+D^{S}([p_{\pm}]^{\dagger}\mathsf{R}_{\hat{\omega }}(i\omega/T)[p_{\pm}]^{\dagger-1})\right)_{\tau_{\pm}\sigma_{\pm}}\\ &=\;(e^{(\varepsilon-\mu q)/T}\pm 1)^{-1}e^{-\mathbf{p}_{\pm} \cdot(\omega\times\mathbf{x})/T}\frac{1}{2}\left(D^{S}([p_{\pm}]^{-1}\mathsf{ R}_{\hat{\omega}}(i\omega/T)[p_{\pm}])+D^{S}([p_{\pm}]^{\dagger}\mathsf{R}_{\hat{ \omega}}(i\omega/T)[p_{\pm}]^{\dagger-1})\right)_{\tau_{\pm}\sigma_{\pm}}\\ &=\;(e^{(\varepsilon-\mu q)/T}\pm 1)^{-1}e^{-(\mathbf{p}_{\pm} \cdot\mathbf{v})/T}\frac{1}{2}\left(D^{S}([p_{\pm}]^{-1}\mathsf{R}_{\hat{ \omega}}(i\omega/T)[p_{\pm}])+D^{S}([p_{\pm}]^{\dagger}\mathsf{R}_{\hat{ \omega}}(i\omega/T)[p_{\pm}]^{\dagger-1})\right)_{\tau_{\pm}\sigma_{\pm}}\end{split} \tag{13}\]
where we have used the definition \(\mathbf{v}=\omega\times\mathbf{x}\).
The single-particle phase-space distribution in Eq. (13) for ideal rotating relativistic fermions and bosons is the unnormalized one, and we need to take the trace of the matrix in Eq. (13) to obtain the so-called phase-space density in \((\mathbf{x},\mathbf{p})\):
\[\begin{split} f(\mathbf{x},\mathbf{p})&=\sum_{\sigma_ {\pm}}f(\mathbf{x},\mathbf{p})_{\sigma_{\pm}\sigma_{\pm}}\\ &=(e^{(\varepsilon-\mu q)/T}\pm 1)^{-1}e^{-(\mathbf{p}\cdot \mathbf{v})/T}\chi\Big{(}\frac{\omega}{T}\Big{)}\end{split} \tag{14}\]
being:
\[\chi\Big{(}\frac{\omega}{T}\Big{)}\equiv\mathrm{tr}D^{S}(\mathsf{R}_{\hat{ \omega}}(i\omega/T))=\frac{\sinh(S+\frac{1}{2})\frac{\omega}{T}}{\sinh(\frac{ \omega}{2T})} \tag{15}\]
In the fluid rest frame, \(\mathbf{v}=0\); hence the single-particle distribution function is given as
\[f(\mathbf{x},\mathbf{p})=\frac{1}{e^{(\varepsilon-\mu q)/T}\pm 1}\bigg{(}\frac{ \sinh(S+\frac{1}{2})\frac{\omega}{T}}{\sinh(\frac{\omega}{2T})}\bigg{)}, \tag{16}\]
where \(q\) is the conserved charge.
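As a quick numerical illustration of Eq. (16), the sketch below (Python with NumPy, natural units with \(\hbar=c=k_{B}=1\), and function names of our own choosing) evaluates the rotational factor \(\chi(\omega/T)\) and the corresponding single-particle distribution; note that \(\chi\to 2S+1\), the ordinary spin degeneracy, as \(\omega\to 0\).

```python
import numpy as np

def chi_rot(omega, T, S):
    """Rotational factor chi(omega/T) = sinh((S + 1/2) omega/T) / sinh(omega/(2T)).
    It reduces to the ordinary spin degeneracy (2S + 1) as omega -> 0."""
    x = omega / T
    if np.isclose(x, 0.0):
        return 2.0 * S + 1.0
    return np.sinh((S + 0.5) * x) / np.sinh(0.5 * x)

def f_rot(E, T, mu, q, S, omega, fermion=True):
    """Single-particle distribution of Eq. (16) in the fluid rest frame:
    Fermi (+1) or Bose (-1) statistical factor times the rotational factor chi."""
    sign = 1.0 if fermion else -1.0
    return chi_rot(omega, T, S) / (np.exp((E - mu * q) / T) + sign)
```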
### van der Waals HRG model
In contrast to the QGP phase, where the degrees of freedom are basically quarks and gluons, the hadronic phase is described by the confined state of the quarks and gluons. The ideal HRG model deals with a system of non-interacting point particles with hadronic degrees of freedom. The basic quantity required to calculate the hadron yields and thermodynamic properties is the partition function of the ensemble. The partition function for _ith_ particle species in a Grand Canonical Ensemble (GCE) of ideal HRG can be written as [75],
\[lnZ_{i}^{id}=\pm\frac{Vg_{i}}{2\pi^{2}}\int_{0}^{\infty}p^{2}dp\ ln\{1\pm\exp[-( E_{i}-\mu_{i})/T]\}. \tag{17}\]
Here, the degeneracy of the \(i\)th hadronic species is given by \(g_{i}\), whereas \(E_{i}=\sqrt{p^{2}+m_{i}^{2}}\) gives the free particle energy of the \(i\)th hadron, \(m_{i}\) being the mass of the \(i\)th hadron. The \(\pm\) sign corresponds to baryons and mesons, respectively. The chemical potential is denoted by \(\mu_{i}\) and is given by
\[\mu_{i}=B_{i}\mu_{B}+S_{i}\mu_{S}+Q_{i}\mu_{Q}, \tag{18}\]
where \(\mu_{B}\), \(\mu_{S}\), and \(\mu_{Q}\), respectively, represent the baryon chemical potential, strangeness chemical potential, and charge chemical potential. The baryon number, strangeness, and electric charge of the _ith_ hadron are denoted by \(B_{i}\), \(S_{i}\), and \(Q_{i}\), respectively.
In a rotating medium of hadron gas, the partition function in Eq. (2) for a single hadronic species is equivalent to the one defined in Eq. (17) multiplied by a factor \(\chi(\frac{\omega}{T})\), given as
\[lnZ_{i}^{id}=\pm\frac{Vg_{i}}{2\pi^{2}}\int_{0}^{\infty}p^{2}dp\ ln\{1\pm\exp[ -(E_{i}-\mu_{i})/T]\}\chi(\frac{\omega}{T}), \tag{19}\]
where \(\chi(\frac{\omega}{T})\) is given by the Eq. (15). The partition functions for all hadrons and resonances, \(\ln Z_{i}^{id}\), can then be added to form the overall partition function of the hadron gas;
\[\ln Z=\sum_{i}\ln Z_{i}^{id}. \tag{20}\]
This partition function may now be used to derive the different thermodynamic quantities for a single hadronic species, such as pressure \(P_{i}\), energy density \(\varepsilon_{i}\), number density \(n_{i}\), and entropy density \(s_{i}\), as
\[P_{i}^{id}(T,\mu_{i})= \pm\frac{Tg_{i}}{2\pi^{2}}\int_{0}^{\infty}p^{2}dp\ ln\{1\pm\exp[ -(E_{i}-\mu_{i})/T]\}\] \[\times\chi(\frac{\omega}{T}) \tag{21}\]
\[\varepsilon_{i}^{id}(T,\mu_{i})=\frac{g_{i}}{2\pi^{2}}\int_{0}^{ \infty}\frac{E_{i}\ p^{2}dp}{\exp[(E_{i}-\mu_{i})/T]\pm 1}\chi(\frac{\omega}{T}) \tag{22}\]
\[n_{i}^{id}(T,\mu_{i})= \frac{g_{i}}{2\pi^{2}}\int_{0}^{\infty}\frac{p^{2}dp}{\exp[(E_{i} -\mu_{i})/T]\pm 1}\chi(\frac{\omega}{T}) \tag{23}\]
\[s_{i}^{id}(T,\mu_{i})= \pm\frac{g_{i}}{2\pi^{2}}\int_{0}^{\infty}p^{2}dp\Big{[}\ln\{1\pm \exp[-(E_{i}-\mu_{i})/T]\}\] \[\pm\frac{(E_{i}-\mu_{i})/T}{\exp[(E_{i}-\mu_{i})/T]\pm 1}\Big{]} \chi(\frac{\omega}{T}) \tag{24}\]
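A minimal numerical sketch of Eqs. (21)-(23) for a single species is given below, assuming Python with NumPy/SciPy, natural units (energies in GeV), and a finite upper integration limit chosen large enough for convergence; all function and variable names are ours, not the authors'.

```python
import numpy as np
from scipy.integrate import quad

def ideal_rot_thermo(T, mu, m, g, S, omega, fermion=True):
    """Pressure, energy density and number density of one species, Eqs. (21)-(23),
    including the rotational factor chi(omega/T).  Natural units, energies in GeV.
    The +/- convention follows the text: + for (anti)baryons, - for mesons."""
    sgn = 1.0 if fermion else -1.0
    x = omega / T
    chi = 2.0 * S + 1.0 if np.isclose(x, 0.0) else np.sinh((S + 0.5) * x) / np.sinh(0.5 * x)
    E = lambda p: np.sqrt(p * p + m * m)
    pmax = 50.0 * T + 10.0 * m          # practical upper limit of the momentum integral
    P = sgn * g * T / (2.0 * np.pi**2) * chi * quad(
        lambda p: p**2 * np.log(1.0 + sgn * np.exp(-(E(p) - mu) / T)), 0.0, pmax)[0]
    eps = g / (2.0 * np.pi**2) * chi * quad(
        lambda p: p**2 * E(p) / (np.exp((E(p) - mu) / T) + sgn), 0.0, pmax)[0]
    n = g / (2.0 * np.pi**2) * chi * quad(
        lambda p: p**2 / (np.exp((E(p) - mu) / T) + sgn), 0.0, pmax)[0]
    return P, eps, n
```

Summing such single-species contributions over the full hadron list then gives the total ideal quantities of Eq. (20).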
Along with this, we also compute another density related to angular velocity (the so-called rotational chemical potential \(\omega\)) known as spin density. In the presence of rotation in the medium, the Euler equation of thermodynamic variables becomes [24],
\[\varepsilon+P=sT+n\mu+\mathrm{w}\omega. \tag{25}\]
Therefore the new spin density w can be calculated as,
\[\mathrm{w}=\frac{\partial P}{\partial\omega}\bigg{|}_{T,\mu}\] \[= \pm\frac{Tg_{i}}{2\pi^{2}}\int_{0}^{\infty}p^{2}dp\ ln\{1\pm\exp[ -(E_{i}-\mu_{i})/T]\}\frac{\partial}{\partial\omega}\chi\Big{(}\frac{\omega} {T}\Big{)}\,, \tag{26}\]
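Since only the factor \(\chi(\omega/T)\) in Eq. (21) depends on \(\omega\), the spin density of Eq. (26) can be sketched as the ideal momentum integral multiplied by the analytic derivative \(\partial\chi/\partial\omega\); the snippet below (our own naming, valid for \(\omega>0\)) is one such illustration.

```python
import numpy as np
from scipy.integrate import quad

def spin_density(T, mu, m, g, S, omega, fermion=True):
    """Spin density w = dP/d(omega) of Eq. (26) for one species, valid for omega > 0.
    Only chi depends on omega, so w equals the ideal pressure integral times
    dchi/domega (the derivative vanishes smoothly as omega -> 0)."""
    sgn = 1.0 if fermion else -1.0
    k1, k2 = (S + 0.5) / T, 0.5 / T     # chi(omega) = sinh(k1*omega) / sinh(k2*omega)
    dchi = (k1 * np.cosh(k1 * omega) * np.sinh(k2 * omega)
            - k2 * np.sinh(k1 * omega) * np.cosh(k2 * omega)) / np.sinh(k2 * omega) ** 2
    E = lambda p: np.sqrt(p * p + m * m)
    integral = quad(lambda p: p**2 * np.log(1.0 + sgn * np.exp(-(E(p) - mu) / T)),
                    0.0, 50.0 * T + 10.0 * m)[0]
    return sgn * g * T / (2.0 * np.pi**2) * integral * dchi
```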
Moreover, the \(n\)th-order susceptibilities of conserved charges can be calculated from the relation
\[\chi_{x}^{n}=\frac{\partial^{n}(P/T^{4})}{\partial(\mu_{x}/T)^{n}}, \tag{27}\]
where the corresponding conserved charges, such as the baryon number, electric charge, and strangeness number, are represented by \(x\).
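In practice, Eq. (27) can be evaluated by finite differences of the scaled pressure; the sketch below assumes a user-supplied callable `pressure(T, mu_B)` (a placeholder, not part of the original work) and estimates the second-order baryon susceptibility at \(\mu_{B}=0\).

```python
def chi2_baryon(pressure, T, dmu=1e-3):
    """Second-order baryon susceptibility, Eq. (27) with n = 2, at mu_B = 0,
    estimated by a central finite difference of P/T^4 in the variable mu_B/T.
    `pressure(T, mu_B)` is a placeholder for any routine returning the total pressure."""
    p_hat = lambda mu: pressure(T, mu) / T**4
    return (p_hat(dmu) - 2.0 * p_hat(0.0) + p_hat(-dmu)) / (dmu / T) ** 2
```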
The Ideal HRG model does not include interactions among the hadrons and, therefore, is unable to explain different thermodynamic quantities estimated from lQCD at high temperatures and baryon densities. Now to introduce the interaction in the medium, we start with the van der Waals equation, which, in canonical ensemble representation, can be written as [76; 15]
\[\left(P+a\left(\frac{N}{V}\right)^{2}\right)\left(V-Nb\right)=NT \tag{28}\]
where the VDW parameters \(a\) and \(b\), both positive, describe the attractive and repulsive interactions, respectively, among the hadrons. P, V, T, and N, respectively, stand for pressure, volume, temperature, and the number of particles in the system.
Writing number density, \(n\equiv N/V,\) the above equation can be simplified as
\[P(T,n)=\frac{nT}{1-bn}-an^{2}, \tag{29}\]
The two terms in the above equation represent the correction factors to the ideal case due to repulsion and attraction, respectively. The excluded volume correction, or the correction for repulsive interactions, is incorporated in the first term by changing the total volume \(V\) to an effective volume that is accessible to particles using the appropriate volume parameter \(b=16\pi r^{3}/3\), where \(r\) is the particle's hardcore radius. In contrast, the second term accounts for the attractive interactions between particles. For \(a=0\), Eq. (29) reduces to the EVHRG equation of state, where only repulsive interactions are included, and for \(a=0\) and \(b=0\), it reduces to the ideal HRG.
This method is then applied to the GCE, where the VDW equation of state takes the form [77; 15; 78]
\[P(T,\mu)=P^{id}(T,\mu^{*})-an^{2}(T,\mu), \tag{30}\]
where \(P(T,\mu)\) is the VDW pressure, which reduces to the ideal one, \(P^{id}(T,\mu)\), when there is no interaction. The particle number density of the VDW hadron gas, \(n(T,\mu)\), is given by
\[n(T,\mu)=\frac{\sum_{i}n_{i}^{id}(T,\mu^{*})}{1+b\sum_{i}n_{i}^{id}(T,\mu^{*})}. \tag{31}\]
Here, \(i\) runs over all hadrons and resonances in the interacting medium, and \(\mu^{*}\) is the modified chemical potential given by
\[\mu^{*}=\mu-bP(T,\mu)-abn^{2}(T,\mu)+2an(T,\mu). \tag{32}\]
Using Eq. (30), the \(\mu^{*}\) can also be written as
\[\mu^{*}=\mu-\frac{bn(T,\mu)T}{1-bn(T,\mu)}+2an(T,\mu). \tag{33}\]
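Equations (31)-(33) form a self-consistency problem: the shifted chemical potential \(\mu^{*}\) depends on the density, which is itself evaluated at \(\mu^{*}\). A minimal fixed-point sketch is shown below; `n_id_total` is a placeholder for the summed ideal densities, and the damped update is our own choice to help convergence near the phase transition.

```python
def vdw_mu_star(T, mu, n_id_total, a, b, tol=1e-10, max_iter=500):
    """Fixed-point iteration for the shifted chemical potential mu* of Eq. (33).
    `n_id_total(T, mu_star)` must return the summed ideal densities appearing in
    the numerator of Eq. (31)."""
    mu_star = mu
    n = 0.0
    for _ in range(max_iter):
        nid = n_id_total(T, mu_star)
        n = nid / (1.0 + b * nid)                                # Eq. (31)
        mu_new = mu - b * n * T / (1.0 - b * n) + 2.0 * a * n    # Eq. (33)
        if abs(mu_new - mu_star) < tol:
            return mu_new, n
        mu_star = 0.5 * (mu_star + mu_new)                       # damped update
    return mu_star, n
```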
Additional thermodynamical variables like energy density \(\varepsilon(T,\mu)\) and entropy density \(s(T,\mu)\) can now be calculated as,
\[\varepsilon(T,\mu)=\frac{\sum_{i}\epsilon_{i}^{id}(T,\mu^{*})}{1+b\sum_{i}n_{ i}^{id}(T,\mu^{*})}-an^{2}(T,\mu). \tag{34}\]
\[s(T,\mu)=\frac{s^{id}(T,\mu^{*})}{1+bn^{id}(T,\mu^{*})}, \tag{35}\]
As formulated initially, the VDWHRG model includes interactions confined to all pairings of baryons or anti-baryons [12; 15; 77; 78]. Considering the fact that annihilation processes dominate short-range interactions between baryon-antibaryon pairs, the interaction between them was neglected [12; 79]. Previously, all meson-related interactions, such as meson-meson or meson-(anti)baryon interactions, were neglected, as the inclusion of these interactions suppresses the thermodynamic quantities in the crossover region at vanishing baryochemical potential in comparison with lQCD data [12]. However, by assuming a hard-core radius \(r_{M}\) for mesons, a more realistic formalism that takes into account meson-meson interactions was developed by selecting the VDW parameters that best fit the lQCD data [14]. As a result, the total pressure in the VDWHRG model is expressed as [12; 13; 14; 77; 78]
\[P(T,\mu)=P_{M}(T,\mu)+P_{B}(T,\mu)+P_{\bar{B}}(T,\mu), \tag{36}\]
where the pressure contributions made by mesons and (anti)baryons, respectively, are denoted by \(P_{M}(T,\mu),P_{B(\bar{B})}(T,\mu)\), and are given by,
\[P_{M}(T,\mu)=\sum_{i\in M}P_{i}^{id}(T,\mu^{*M}), \tag{37}\]
\[P_{B}(T,\mu)=\sum_{i\in B}P_{i}^{id}(T,\mu^{*B})-an_{B}^{2}(T,\mu), \tag{38}\]
\[P_{\bar{B}}(T,\mu)=\sum_{i\in\bar{B}}P_{i}^{id}(T,\mu^{*\bar{B}})-an_{\bar{B}}^{2}(T,\mu). \tag{39}\]
Here, mesons, baryons, and anti-baryons are each represented by \(M\), \(B\), and \(\bar{B}\). Due to the excluded volume correction, mesons have a modified chemical potential of \(\mu^{*M}\), whereas baryons and anti-baryons have modified chemical potentials of \(\mu^{*B}\) and \(\mu^{*\bar{B}}\), respectively, as a result of VDW interactions [14]. Taking a simple case of vanishing electric charge and strangeness chemical potentials, where \(\mu_{Q}=\mu_{S}=0\), the modified chemical potential for mesons and (anti)baryons can be determined from Eq. 18 and Eq. 32 as
\[\mu^{*M}=-bP_{M}(T,\mu), \tag{40}\]
\[\mu^{*B(\bar{B})}=\mu_{B(\bar{B})}-bP_{B(\bar{B})}(T,\mu)-abn_{B(\bar{B})}^{2} +2an_{B(\bar{B})}, \tag{41}\]
where \(n_{M}\), \(n_{B}\) and \(n_{\bar{B}}\) are the modified number densities of mesons, baryons, and anti-baryons, respectively, which are given by
\[n_{M}(T,\mu)=\frac{\sum_{i\in M}n_{i}^{id}(T,\mu^{*M})}{1+b\sum_{i\in M}n_{i} ^{id}(T,\mu^{*M})}, \tag{42}\]
\[n_{B(\bar{B})}(T,\mu)=\frac{\sum_{i\in B(\bar{B})}n_{i}^{id}(T,\mu^{*B(\bar{B })})}{1+b\sum_{i\in B(\bar{B})}n_{i}^{id}(T,\mu^{*B(\bar{B})})}. \tag{43}\]
There are different approaches to estimate the VDW parameters. They can be obtained by reproducing the ground state of the nuclear matter [77]. Alternatively, one can obtain the parameters by fitting lattice QCD results for different thermodynamic quantities [14; 15]. The parameters in the model are now set to \(a=0.926\) GeV fm\({}^{3}\) and \(b=(16/3)\pi r^{3}\), where \(r\) is the hardcore radius of each hadron, given as \(r_{M}=0.2\) fm for mesons and \(r_{B,(\bar{B})}=0.62\) fm for (anti)baryons [14]. Using this information, we now proceed to estimate various thermodynamic quantities in a rotating hadron resonance gas with VDW interactions.
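For orientation, the quoted parameter choices and the sector decomposition of Eq. (36) can be organized as in the sketch below; `pressure_sector` is a hypothetical routine standing in for the solution of Eqs. (37)-(39), and the sign of \(\mu_{B}\) for anti-baryons follows the usual convention.

```python
import numpy as np

# Parameters quoted in the text: a = 0.926 GeV fm^3 and b = (16/3) pi r^3,
# with r_M = 0.2 fm for mesons and r_B = 0.62 fm for (anti)baryons.
a_vdw = 0.926                                   # GeV fm^3
b_meson = 16.0 / 3.0 * np.pi * 0.2 ** 3         # ~ 0.13 fm^3
b_baryon = 16.0 / 3.0 * np.pi * 0.62 ** 3       # ~ 4.0 fm^3

def total_pressure(T, mu_B, pressure_sector):
    """Total VDWHRG pressure of Eq. (36): mesons, baryons and anti-baryons are treated
    as independent sectors.  `pressure_sector(tag, T, mu)` is a hypothetical routine
    that solves Eqs. (37)-(39) for one sector."""
    return (pressure_sector("M", T, 0.0)
            + pressure_sector("B", T, +mu_B)
            + pressure_sector("Bbar", T, -mu_B))
```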
## III Results and Discussion
We explore the effect of rotation in the hadron gas by considering an interacting hadron gas model, namely, the VDWHRG model, with attractive and repulsive interactions between the hadrons. The model takes into account the contributions of all hadrons and resonances up to a mass cut-off of 2.25 GeV available in the particle data group [80]. The van der Waals parameters are obtained by fitting thermodynamic quantities like energy density and pressure in the VDWHRG model to the available lattice QCD data [14]. It is noteworthy to mention that the van der Waals parameters should, in principle, change with respect to a change in rotation. However, as it is
non-trivial to have \(a\) and \(b\) as functions of \(\omega\), we neglect the dependency in the current study. We then estimate various thermodynamic quantities at finite rotation by taking the obtained \(a\) and \(b\) values from the fitting.
Fig. 1 shows the variation of \(P/T^{4}\), \(\varepsilon/T^{4}\), \(s/T^{3}\), \((\varepsilon-3P)/T^{4}\), \(c_{v}/T^{3}\), and \(c_{s}^{2}\) with temperature at zero baryochemical potential for certain values of \(\omega\). The red triangles are the lQCD results from the Wuppertal-Budapest collaboration [81], and the shaded region shows the lattice results from Hot QCD collaboration [82], at \(\mu_{B}=0.0\) GeV. All the calculations are done for \(\mu_{B}=0.0\) GeV. Our results at \(\omega=0\) fm\({}^{-1}\), represented by a solid black line, are in good agreement with the lQCD estimations. From the upper-left panel of fig.1, we observe that \(P/T^{4}\) increases with temperature for all \(\omega\) values. At a given temperature, \(P/T^{4}\) is higher for a higher value of \(\omega\). Similar trends can be observed in \(\varepsilon/T^{4}\), \(s/T^{3}\), \((\varepsilon-3P)/T^{4}\) as well as in \(c_{v}/T^{3}\) plots. However, the slopes of the spectra differ for each observable. In the trace anomaly plot, we observe a peak that shifts towards low temperatures with an increase in \(\omega\). This peak signifies the conformal symmetry breaking at which the constituent particles become massless. The behavior of \(c_{s}^{2}\) is also crucial to understand the phase transition region. In VDWHRG, there appears a minimum in \(c_{s}^{2}\), which is in agreement with lQCD, and this minimum can be interpreted as a signature of the transition from hadrons to quark degrees of freedom. By increasing the value of rotational chemical potential, the minima shift towards the lower temperature regime, suggesting that the phase transition temperature decreases in the presence of rotation.
In order to understand the effect of spin and rotation on the basic thermodynamic quantities, we plot the scaled pressure as a function of \(\omega\) for particles of various spins in fig. 2. The temperature is taken to be constant at \(T=155\) MeV. We observe that at \(\omega=0\) fm\({}^{-1}\), the contribution to pressure is dominated by the spin-0 particles, followed by the spin-1, spin-1/2, and spin-3/2 particles. This is due to the fact that at \(\omega=0\) fm\({}^{-1}\), the contribution to pressure comes only from the Boltzmann factor in the distribution function. Thus, the less massive particles contribute dominantly. The spin-0 particles, which consist of pions, kaons, etc., contribute the most, followed by the vector mesons such as \(\rho\) and \(\eta\). Finally, the spin-1/2 baryons, such as protons and neutrons, and spin-3/2 baryons, such as \(\Lambda\), \(\Xi\), contribute to the pressure, respectively. As the rotation increases, the contribution of spin-1 particles dominates due to the \(\omega\) term in the distribution function, and the contribution from the spin-0 particles remains almost the same. The slight decrease in the spin-0 particle trend at high \(\omega\) is due to the van der Waals effect. In addition, the spin-3/2 particles contribute more than the spin-1/2 particles due to the effect of rotation.
Figure 1: (Color Online) Variation of different thermodynamical quantities as functions of temperature for \(\mu_{\rm B}=0\) GeV, and for certain values of the rotational chemical potential. The red triangles are the Wuppertal-Budapest lattice QCD data [81], and the shaded region shows the HotQCD lattice data [82].
In addition to the speed of sound, the entropy density, number density, etc., are observables that show discontinuities at a first-order phase transition. Our study deals with the van der Waals interaction in the hadronic phase. Therefore, a liquid-gas phase transition is expected in the \(T-\mu_{B}\) plane, which is estimated in various works [14; 15; 78] with different van der Waals parameters. Since rotation adds another chemical potential to the system, it is useful to see if the angular velocity alone can lead to a phase transition. Fig. 3 shows the behavior of the entropy density in the \(T-\omega\) phase space. Here, the angular velocity (\(\omega\)) is varied in small steps. The temperature is taken with an interval of 1 MeV for the calculation. One can observe a smooth trend of the scaled entropy density at high temperatures and low rotational chemical potential. However, the smooth curve at comparatively low \(\omega\) starts changing its shape as one approaches high \(\omega\) values. At around \(T\simeq 69\) MeV, a discontinuity appears for \(\omega\simeq 0.65\) GeV. A clear first-order phase transition is observed as one approaches higher \(\omega\) values. This suggests that the rotation has the same effect in achieving a liquid-gas phase transition as the baryon chemical potential [15]. Therefore, a hadron gas can be liquefied either by increasing the baryon density and lowering the temperature or by increasing the angular velocity while decreasing the temperature of the gas. Compared to Ref. [14], where the critical point is around \(T=65\) MeV in the \(T-\mu_{B}\) plane, here the temperature of the critical point is slightly higher, though the VDW parameters are the same. This shows that the phase transition in the presence of a rotational chemical potential appears more quickly than that for the baryochemical potential case. Analogous to how the magnetic field affects the chiral phase transition in raising the critical temperature [57], here, the angular velocity raises the critical temperature for the liquid-gas phase transition as compared to that of the baryochemical potential.
Figure 3: (Color Online) Variation of scaled entropy density at low temperature and higher angular velocity values is shown.
Figure 2: (Color Online) Pressure for different spin particles as a function of \(\omega\) at a constant temperature, \(T=155\) MeV.
Fig. 4 shows the temperature dependence of normalized dimensionless spin density estimated in the VDWHRG model using Eq. 26. Similar to number density, which can be defined as the change in pressure (or free energy) with respect to chemical potential, spin density can also be defined as the change in the pressure as a function of rotational chemical potential. The net spin density in a system is defined as the density of hadrons of positive spin minus the density of hadrons of negative spin. It is observed that much like other thermodynamic densities, such as number density and entropy density, spin density also increases with an increase in temperature. Moreover, at a particular temperature, the value of spin density increases with increased rotational chemical potential.
We also estimate the susceptibilities of various conserved quantities to show their dependence on the rotational chemical potential. Fluctuations of conserved charges like net baryon density, electric charge, and strangeness are essential probes for hadronization and can help us locate the phase boundary. Large fluctuations in these quantities are one of the essential signatures of the critical endpoint. Since the rotation can affect the phase transition and hence the critical point, it is essential to see its effect on the fluctuations of different conserved charges. We have used Eq. 27 to calculate the second-order susceptibilities of different conserved quantities. Fig. 5 shows the temperature dependence of second-order fluctuations of conserved quantities, namely, baryon density, electric charge, and strangeness, respectively, from left to right. The red triangles are the results of lQCD calculations from the Wuppertal-Budapest collaboration [81], whereas the shaded region represents the lQCD results of the HotQCD collaboration [82]. The solid black line, calculated in the VDWHRG model at zero baryochemical potential and zero rotation, agrees with lattice results for baryon and charge susceptibility. However, the results for strangeness susceptibility are slightly suppressed compared to the lattice results. The fluctuations in every conserved quantity increase with an increase in rotational chemical potential. It is observed that for higher \(\omega\), a more prominent peak appears in the case of baryon density fluctuations. However, the trends seem to be saturated in the case of charge susceptibilities, and a monotonic increasing behavior is observed for the strangeness fluctuations within the range of studied \(\omega\). Fig. 6 shows the variation of all three susceptibilities as a function of the rotational chemical potential at a fixed temperature \(T=155\) MeV. It is observed that the baryon number and charge susceptibilities increase sharply with \(\omega\) after a particular value. However, the increase in strangeness susceptibility is a bit slower. This is because the first two susceptibilities are more sensitive to the microscopic structure of the matter [83] and can provide helpful information about the structural changes as the quarks get deconfined at adequately high temperatures.
Figure 5: (Color Online) The baryon number susceptibility (left panel), charge susceptibility (middle panel), and strangeness susceptibility (right panel) as functions of temperature for different values of \(\omega\).
Figure 6: (Color Online) Various susceptibilities as functions of \(\omega\) at a constant temperature, \(T\) = 155 MeV.
## IV Summary
In this work, we estimate the effect of rotation on the thermodynamic properties of an interacting hadron resonance gas. We observe that rotation has a similar effect on the thermodynamic properties as the baryon chemical potential. The rotational chemical potential enhances all observables like pressure, energy density, entropy density, etc. We also observe that the rotation in a system could lead to a first-order liquid-gas phase transition, although the initial angular momentum required for it would be so high that within LHC energy, it may not be possible. In addition, we estimate the spin density associated with the rotational chemical potential and its behavior as a function of temperature. The effect of rotation on fluctuations in conserved quantities is also explored, and one can find that it enhances the second-order fluctuations in all conserved quantities. In view of our study, we must pay attention to the effect of rotation produced in a non-central heavy-ion collision while studying the particle dynamics and the thermodynamics of the system.
Recent studies focusing on vorticity and polarization in the medium formed in ultra-relativistic collisions lead us to an exciting pathway. As the scientific community shifts its attention to rotational dynamics in the evolving QCD medium, myriad unique consequences can be unraveled. Moreover, it will be interesting to see the results of the lattice calculation by taking care of the rotation into the system.
## Acknowledgement
K.K.P. and B.S. acknowledge the financial aid from UGC and CSIR, Government of India, respectively. The authors gratefully acknowledge the DAE-DST, Government of India funding under the mega-science project "Indian participation in the ALICE experiment at CERN" bearing Project No. SR/MF/PS-02/2021-IITI (E-37123).
|
2303.01437
|
Symmetries and invariant solutions of the wave equation for shear
disturbances in soft solids
|
The Lie-group approach was applied to determine symmetries of the third-order
non-linear equation formulated for description of shear elastic disturbances in
soft solids. Invariant solutions to this equation are derived and it turned out
that they could represent outgoing or incoming exponentially decaying or
unbounded disturbances.
|
Alexander I. Kozlov
|
2023-03-02T17:50:45Z
|
http://arxiv.org/abs/2303.01437v1
|
# Symmetries and invariant solutions of the wave equation for shear disturbances in soft solids
###### Abstract
The Lie-group approach was applied to determine symmetries of the third-order nonlinear equation formulated for description of shear elastic disturbances in soft solids. Invariant solutions to that equation are derived and it turned out that they could represent outgoing or incoming exponentially decaying or unbounded disturbances.
## I Introduction
Soft solids (like gels or some biological tissues) differ from Newtonian liquids in particular in their ability to sustain shear stresses and, therefore, to guide transverse elastic waves. The nonlinear wave equation for shear elastic disturbances was considered in recent decades in many works (see [1-3], for example). Nonlinear elastic constants as well as viscosity were taken into account, so that the following partial differential equation of the third order in derivatives was proposed for the one-dimensional case [2-3]:
\[\frac{1}{c^{2}}\frac{\partial^{2}u}{\partial t^{2}}=\frac{\partial^{2}u}{ \partial z^{2}}+\frac{2}{3}\beta\frac{\partial}{\partial z}\!\left(\frac{ \partial u}{\partial z}\right)^{3}\,+\,\tau\frac{\partial}{\partial z}\! \left\{\!\left[1+2\!\left(\frac{\partial u}{\partial z}\right)^{2}\right]\! \frac{\partial^{2}u}{\partial t\partial z}\right\} \tag{1}\]
where \(z\) and \(t\) are the spatial and time coordinates respectively, \(c\) is the small-signal transverse-sound-wave velocity, \(\tau\) is proportional to the coefficients of viscosity and is defined in [2] as
\(\tau=\left(\zeta+4\eta/3\right)/\mu\) (\(\zeta\) and \(\eta\) are the so-called second and first viscosity coefficients [4]), \(\mu\) is the second of the Lamé elastic constants, \(\beta=3\left(\mu+A/2+D\right)/\left(2\mu\right)\) is the nonlinearity coefficient, where \(A\) and \(D\) are the third-order and the fourth-order elastic constants [1-3].
Different solutions of Equation (1) were investigated analytically and numerically [1-3], and an attempt to derive a one-way equation of lower order in derivatives has been made [3]. Nevertheless, it seems useful to investigate symmetries of the partial differential equation (1), because this approach can bring some new exact analytical solutions for problems which were solved only approximately or numerically for years [5].
This short communication is devoted to application of methods of the classical theory of Lie groups for solution of Equation (1).
## II Solution
Before solving Eq. (1), its variables were changed in the following way [3]:
\[w=\sqrt{\frac{2}{3}\beta}\cdot u\hskip 28.452756pt\theta=\frac{t}{\tau}\hskip 28.452756ptx= \frac{z}{c\tau}\hskip 28.452756pt\alpha=\frac{3}{c^{2}\tau^{2}}\]
So we obtain the equation
\[w_{\theta\theta}=w_{xx}+\alpha w_{x}^{2}w_{xx}+w_{\theta xx}+2\frac{\alpha}{ \beta}w_{x}w_{\theta x}w_{xx}+\frac{\alpha}{\beta}w_{x}^{2}w_{\theta xx} \tag{2}\]
where lower indices denote differentiation with respect to appropriate coordinates, as usual.
Applying the standard Lie-group approach, we consider the following operator of a one-parameter group of infinitesimal transformations, which is prolonged to all necessary derivatives [6-7]:
\[X=\xi^{0}\frac{\partial}{\partial\theta}+\xi^{x}\frac{\partial}{\partial x}+\eta^{w}\frac{\partial}{\partial w}+\zeta^{0}\frac{\partial}{\partial w_{\theta}}+\zeta^{x}\frac{\partial}{\partial w_{x}}+\zeta^{00}\frac{\partial}{\partial w_{\theta\theta}}+\zeta^{xx}\frac{\partial}{\partial w_{xx}}+\zeta^{0x}\frac{\partial}{\partial w_{\theta x}}+\zeta^{0xx}\frac{\partial}{\partial w_{\theta xx}}\]
Applying the latter differential operator to Equation (2) we can get the determining equations of the problem:
\[\xi^{0}_{0}=\xi^{0}_{x}=\xi^{0}_{w}=0\hskip 28.452756pt\xi^{x}_{0}=\xi^{x}_{x}=\xi^{x}_{w}=0\hskip 28.452756pt\eta^{w}_{00}=\eta^{w}_{x}=\eta^{w}_{w}=0\]
Solution of these PDEs gives the following operators of infinitesimal transformation of Equation (2):
\[X_{1}=\frac{\partial}{\partial\theta}\hskip 56.905512ptX_{2}=\frac{\partial}{\partial x}\hskip 56.905512ptX_{3}=\left(1+2\theta\right)\frac{\partial}{\partial w}\]
A linear combination of these three operators (where \(Q\) and \(R\) are some arbitrary constants)
\[X=Q\frac{\partial}{\partial\theta}+R\frac{\partial}{\partial x}+\left(M\,+2N \theta\right)\frac{\partial}{\partial w}\]
leads to the following system of characteristic equations (where \(M\) and \(N\) are arbitrary constants):
\[\frac{d\theta}{Q}=\frac{dx}{R}=\frac{dw}{M+2N\theta}\]
Solving the latter system, we obtain two invariants of Eq. (2): the independent variable \(\lambda\) and the function \(\Phi(\lambda)\) depending on it, given by the following expressions:
\[\lambda=Qx-R\theta\hskip 14.226378pt\mbox{and}\hskip 14.226378pt\Phi(\lambda)=Qw-M\theta-N\theta^{2} \tag{3}\]
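As an elementary consistency check (not part of the original derivation), one can verify with a computer algebra system that the combined generator annihilates both invariants; a short SymPy sketch:

```python
import sympy as sp

theta, x, w, Q, R, M, N = sp.symbols('theta x w Q R M N')

lam = Q * x - R * theta                    # invariant lambda
Phi = Q * w - M * theta - N * theta**2     # invariant Phi

def X(F):
    """Action of the combined generator X = Q d/dtheta + R d/dx + (M + 2N theta) d/dw."""
    return Q * sp.diff(F, theta) + R * sp.diff(F, x) + (M + 2 * N * theta) * sp.diff(F, w)

print(sp.simplify(X(lam)), sp.simplify(X(Phi)))   # prints: 0 0
```

Both outputs are identically zero, confirming that \(\lambda\) and \(\Phi\) are indeed invariants of \(X\).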
Finding \(w\) from the second of Equations (3) and substituting the result into (2), we can reduce Equation (2) to the following ordinary differential equation for the function \(\Phi(\lambda)\) (with primes denoting differentiation):
\[\Bigg{(}\frac{R^{2}}{Q}-Q\Bigg{)}\Phi_{\lambda\lambda}^{\
b) for \(R=-Q=-1\) (incoming disturbance)
\[u=\pm C_{2}\sqrt{\frac{3}{\beta}\biggl{(}1-\frac{\beta}{3}\biggr{)}}\exp\left( \pm\frac{\beta}{3c\tau}\cdot\frac{z+ct}{\sqrt{1-\frac{\beta}{3}}\,}\right)+C_{3 }+Mt \tag{7b}\]
If the constant \(M\) is equal to zero, these solutions are decaying or unbounded in time and space for \(\beta/3<1\). Indeed, it has been found experimentally that for some gels [8-9], in the nonlinearity constant
\[\beta=3\bigl{(}\mu+A/2+D\bigr{)}/\bigl{(}2\mu\bigr{)}\]
\(\mu\) is positive, while \(A\) is often negative and sometimes larger than \(\mu\) in magnitude. But in some other substances [9] both constants \(\mu\) and \(A\) can be positive, thus leading to the possibility of displacements that oscillate periodically in time and decay in space, as the expression under the square root in (7) becomes negative, leading to imaginary values of the exponential function. (Unfortunately, the author has not found in the literature numerical values of the fourth-order elastic constant \(D\), believing that its absolute value is less than those of \(\mu\) and \(A\).)
To make the picture complete, the particular solution (incoming and outgoing) of Equation (6) should also be mentioned, though it represents an unbounded displacement:
\[\Phi(\lambda)=\sqrt{\frac{3}{\alpha}\left(R^{2}-Q^{2}\right)}\frac{\lambda}{Q }+C_{4} \tag{8}\]
thus leading to the following trivial solutions to Equation (1):
\[u=\bigl{(}z\pm ct\bigr{)}\sqrt{\pm\frac{\beta}{\alpha}}+C_{4}+Mt \tag{9}\]
\(C_{4}\) is another constant of integration.
## III Conclusion
Classical Lie-group analysis of the nonlinear wave equation for soft tissues was conducted, and invariant solutions to that equation were derived. These solutions turned out to be outgoing and incoming disturbances, which can be unbounded or exponentially decaying in amplitude at large times or distances.
## Publishing data sharing policy
The data that supports the findings of this study are available within the article.
|
2306.04391
|
Carbon Abundance of Globular Cluster M22 (NGC 6656) and the Surface
Carbon Depletion Rates of the Milky Way Globular Clusters
|
It is well known that metal-poor red giant branch (RGB) stars show variations
in some elemental abundances, including carbon, due to the internal mixing
accompanied by their own in situ CN cycle in the hydrogen burning shell. With
our new photometric carbon abundance measurements of RGB stars in M22 and other
globular clusters (GCs) in our previous studies, M5, M3, and M92, we derive the
carbon depletion rates against the $V$ magnitude, $d\mathrm{[C/Fe]}/M_V$, for
individual populations in each GC. We find the metallicity dependence of the
carbon depletion rates, $d\mathrm{[C/Fe]}/M_V$ $\propto$ $-$0.25[Fe/H]. Our
results also suggest that the carbon depletion rates of the second generation
(SG) of stars are larger than those of the first generation (FG) of stars in
our sample GCs, most likely due to different internal temperature profiles with
different initial helium abundances between the FG and SG. Our results can
provide critical constraints both on understanding the mixing efficiency in the
theoretical models, which is largely unknown, and on interpretation of the
observational carbon abundance evolution of the bright halo RGB stars.
|
Jae-Woo Lee
|
2023-06-07T12:43:34Z
|
http://arxiv.org/abs/2306.04391v1
|
Carbon Abundance of Globular Cluster M22 (NGC 6656) and the Surface Carbon Depletion Rates of the Milky Way Globular Clusters
###### Abstract
It is well known that metal-poor red giant branch (RGB) stars show variations in some elemental abundances, including carbon, due to the internal mixing accompanied by their own in situ CN cycle in the hydrogen burning shell. With our new photometric carbon abundance measurements of RGB stars in M22 and other globular clusters (GCs) in our previous studies, M5, M3, and M92, we derive the carbon depletion rates against the \(V\) magnitude, \(d[{\rm C}/{\rm Fe}]/M_{V}\), for individual populations in each GC. We find the metallicity dependence of the carbon depletion rates, \(d[{\rm C}/{\rm Fe}]/M_{V}\propto-0.25[{\rm Fe}/{\rm H}]\). Our results also suggest that the carbon depletion rates of the second generation (SG) of stars are larger than those of the first generation (FG) of stars in our sample GCs, most likely due to different internal temperature profiles with different initial helium abundances between the FG and SG. Our results can provide critical constraints both on understanding the mixing efficiency in the theoretical models, which is largely unknown, and on interpretation of the observational carbon abundance evolution of the bright halo RGB stars.
Stellar populations (1622); Population II stars (1284); Hertzsprung Russell diagram (725); Globular star clusters (656); Chemical abundances (224); Stellar evolution (1599); Red giant branch (1368)
Jae-Woo Lee (ORCID: 0000-0002-4880-7880)
## 1 Introduction
In a generally accepted globular cluster (GC) formation scenario, the second generation (SG) of the stars formed out of interstellar media polluted by the first generation (FG) of the stars (e.g., D'ercole et al., 2008). The SG stars in normal GCs exhibit different elemental abundance patterns than the FG stars do (Cassisi & Maurizio, 2020; Gratton et al., 2019, and references therein). For example, the nitrogen enhancement and carbon depletion in the main sequence (MS) and the lower red giant branch (RGB) stars of the typical GC SGs could be explained as a natural consequence of the CN cycle occurred in the previous generation of stars. Photometrically, variations in the carbon and nitrogen abundances leave distinctive hallmarks through strong absorption band features of NH, CN, CH molecules, which help to lucidly define multiple stellar populations (MSPs) of GCs (Lee, 2017, 2019; Lee & Sneden, 2021; Milone et al., 2017).
However, the interpretation of carbon and nitrogen abundances of the bright RGB stars can be somewhat complicated due to internal mixing episodes accompanied by their own in situ CN cycle in the hydrogen shell burning region, which significantly alters their initial carbon and nitrogen abundances. Many candidates for this non-canonical extra mixing have been suggested and the thermohaline mixing is considered to be the most promising one since it appears to reproduce observational constraints, although the detailed mixing efficiency used in the theoretical models appears to be largely unknown (e.g., Charbonnel & Zahn, 2007; Lee, 2010; Angelou et al., 2011).
A few elements, such as \({}^{7}\)Li, \({}^{12}\)C, \({}^{12}\)C/\({}^{13}\)C, and \({}^{14}\)N, have been frequently used to fine-tune the unknown mixing parameters. It is believed that neither the [N/Fe] nor \({}^{12}\)C/\({}^{13}\)C are proper probes to investigate the internal mixing, since the [N/Fe] of the initially nitrogen enhanced stars (i.e., the SG stars) may not be significantly affected by any extra mixing (Angelou et al., 2011), and the \({}^{12}\)C/\({}^{13}\)C ratio can rapidly attain near-equilibrium value and quickly saturated by only moderate amount of mixing (e.g., Sneden, Pilachowski, & VandenBerg, 1986). The reliable lithium abundance can be obtained with the Li i resonance doublet at 6707.78 A in a rather clean spectral region. However, lithium may not be ideal element to investigate the mixing. Due to its fragility, lithium can be heavily destructed at the RGB bump (RGBB) luminosity level and does not provide any useful information for bright GC RGB stars (e.g., see Figure 1 of Angelou et al., 2015, and references therein). For the same reason, the degree of lithium depletion is known to be extremely sensitive to the stellar models (Lattanzio et al., 2015).
The last element standing, carbon, is abundant enough that it cannot be completely exhausted via the CN cycle in GC RGB stars. Unfortunately, there are no observable atomic transitions in the optical passband, and one should rely on the diatomic molecular absorption bands, such as CH and CN, to derive [C/Fe]. A great deal of effort has been directed by others to determine carbon abundances (e.g., Briley et al., 2004; Smith & Martell, 2003; Sneden, Pilachowski, & VandenBerg, 1986).
During the past decade, we developed a new photometric system combined with the robust and self-consistent theoretical fine model grids with various parameter sets to simultaneously measure the key elements of GC MSPs, [Fe/H], [C/Fe], and [N/Fe], even in the very crowded field, where traditional spectroscopic observations cannot be applied (Lee, 2017, 2019, 2021, 2022, 2023; Lee & Sneden, 2021).
In this Letter, we present a photometric [Fe/H], [C/Fe], and [N/Fe] study for the metal-complex GC M22 (Lee et al., 2009; Lee, 2015, 2016, 2020; Marino et al., 2009, 2011). With our new [C/Fe] measurements of M22 and those of our previous studies for M3, M5, and M92, we will discuss the surface carbon depletion rates of the Milky Way GCs to shed more light on the quantitative details of the internal mixing processes.
## 2 Observations and Data Reduction
The journals of observations for M22 are given in Lee (2015, 2020). In 2019, we obtained additional photometric data for our JWL34 and Stromgren \(y\), \(b\) for M22, with the total integration times of 6200, 260, and 650 s, respectively, in three nights from June 27 to July 8 using the KPNO 0.9 m telescope equipped with a 2 \(\times\) 2k CCD chip, providing a field of view (FOV) of 21\({}^{\prime}\times\) 21\({}^{\prime}\).
The raw data handling was described in detail in our previous works (Lee, 2015, 2017). The photometry of M22 and standard stars were analyzed using DAOPHOTII, DAOGROW, ALLSTAR and ALLFRAME, and COLLECT-CCDAVE-NEWTRIAL packages (Stetson, 1987, 1994). Finally, we derived the astrometric solutions for individual stars using the Gaia Early Data Release 3 (EDR3; Gaia Collaboration, 2020) and the IRAF IMCOORS package.
In order to select M22 member stars, we made use of the proper-motion study from the Gaia EDR3. We derived the mean proper-motion values using iterative sigma-clipping calculations, finding that, in units of milliarcsecond per year, (\(\mu_{\rm RA}\times\cos\delta\), \(\mu_{\rm decl.}\)) = (9.792, \(-\)5.611) with standard deviations along the major axis of the ellipse of 0.756 mas yr\({}^{-1}\) and along the minor axis of 0.679 mas yr\({}^{-1}\). We emphasize that our mean proper motion for M22 is in good agreement with previous value derived by the Gaia Collaboration (e.g., Gaia Collaboration, 2018). We considered stars within 3\(\sigma\) from the mean values to be M22 proper-motion member stars.
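A schematic version of this membership selection is sketched below, assuming NumPy arrays of Gaia proper motions; note that the paper clips along the major and minor axes of the proper-motion ellipse, whereas this simplified sketch uses an axis-aligned approximation, and all names are ours.

```python
import numpy as np

def sigma_clipped_mean(v, nsig=3.0, n_iter=10):
    """Iterative sigma clipping of a 1-D array; returns the clipped mean and std."""
    mask = np.ones(len(v), dtype=bool)
    for _ in range(n_iter):
        m, s = v[mask].mean(), v[mask].std()
        mask = np.abs(v - m) < nsig * s
    return v[mask].mean(), v[mask].std()

def pm_members(pmra_cosdec, pmdec, nsig=3.0):
    """Flag stars whose Gaia proper motions lie within nsig sigma of the clipped
    cluster mean; an axis-aligned simplification of the ellipse used in the text."""
    m1, s1 = sigma_clipped_mean(pmra_cosdec, nsig)
    m2, s2 = sigma_clipped_mean(pmdec, nsig)
    r2 = ((pmra_cosdec - m1) / s1) ** 2 + ((pmdec - m2) / s2) ** 2
    return r2 < nsig ** 2
```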
## 3 Photometric Indices and Color-Magnitude Diagrams
Throughout this work, we will use our own photometric indices (see also Lee, 2019; Lee & Sneden, 2021; Lee, 2022), defined as
\[hk_{\rm JWL} = ({\rm Ca}_{\rm JWL}-b)-(b-y), \tag{1}\] \[cn_{\rm JWL} = JWL39-{\rm Ca}_{\rm JWL},\] (2) \[ch_{\rm JWL} = (JWL43-b)-(b-y),\] (3) \[nh_{\rm JWL} = (JWL34-b)-(b-y). \tag{4}\]
The \(hk_{\rm JWL}\) index is a good photometric measure of metallicity, while the \(nh_{\rm JWL}\), \(cn_{\rm JWL}\), and \(ch_{\rm JWL}\) indices are measures of NH absorption band at \(\lambda\)3360, CN at \(\lambda\)3883, and CH at \(\lambda\)4250 A, respectively (e.g., see Lee et al., 2009; Lee, 2015, 2022, 2023; Lee & Sneden, 2021).
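For concreteness, Eqs. (1)-(4) translate directly into simple magnitude arithmetic; the small helper below (our own naming) returns all four indices at once.

```python
def jwl_indices(Ca, b, y, JWL39, JWL43, JWL34):
    """Photometric indices of Eqs. (1)-(4) from individual magnitudes
    (Ca stands for the Ca_JWL magnitude)."""
    hk = (Ca - b) - (b - y)        # metallicity-sensitive
    cn = JWL39 - Ca                # CN band at 3883 A
    ch = (JWL43 - b) - (b - y)     # CH band at 4250 A
    nh = (JWL34 - b) - (b - y)     # NH band at 3360 A
    return hk, cn, ch, nh
```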
In Figure 1, we show our color-magnitude diagrams (CMDs) of M22 using our color indices. The figure shows that our procedure of selecting cluster's member stars by using the Gaia proper motion works excellently and the most
Figure 1: (Top) CMDs of M22 in our science field. (Bottom) CMDs for M22 proper-motion member stars.
of the foreground and background off-cluster field stars are removed. As we showed in our previous works (Lee et al., 2009; Lee, 2015, 2020), a bimodal \(hk_{\rm JWL}\) distribution of M22 RGB stars can be clearly seen. M22 also has very broad \(cn_{\rm JWL}\), \(ch_{\rm JWL}\), and \(nh_{\rm JWL}\) RGB sequences due to variations in the carbon and nitrogen abundances, as will be discussed below (see also Hesser & Harris, 1979; Lee, 2015).
We select the M22 member RGB stars of our interest with -2.5 mag \(\leq V-\)\(\rm{\it{HB}}\)\(\leq\) 3.0 mag in order to derive their [Fe/H]\({}_{\rm{\it{hk}}}\), [C/Fe]\({}_{\rm{\it{ch}}}\), and [N/Fe]\({}_{\rm{\it{nh}}}\). In Figure 2, we show CMDs of the metal-poor (MP) and the metal-rich (MR) RGB stars in M22. The classification of the two populations will be discussed below (see also Lee et al., 2009; Lee, 2015). At a glance, the differences in the \(cn_{\rm JWL}\), and \(ch_{\rm JWL}\) CMDs between the MP and MR populations are noticeable, suggesting that they have different carbon and nitrogen abundances.
The mean interstellar reddening toward M22 is rather large, \(E(B-V)\) = 0.32 (Harris, 1996, 2010 version), and our M22 CMDs may be vulnerable to differential reddening across our science field. We attempt to correct for this potential differential reddening in the following way. First, we calculated the extinction law for our filter system: we calculated the color excess of the individual color indices using our filter transmission functions, synthetic spectra of RGB stars for [Fe/H] = \(-\)1.70 dex with primordial CNO and helium abundances as described below, and the interstellar extinction law of Mathis (1990). Through these calculations, we obtained \(E(b-y)\) = 0.820\(E(B-V)\), \(E(cn_{\rm JWL})\) = 0.053\(E(B-V)\), \(E(ch_{\rm JWL})\) = \(-\)0.043\(E(B-V)\), \(E(nh_{\rm JWL})\) = 0.539\(E(B-V)\), and \(E(hk_{\rm JWL})\) = \(-\)0.067\(E(B-V)\). In Figure 2, we show differential reddening vectors of the individual color indices for \(\Delta E(B-V)\) = 0.1 mag. Except for \((b-y)\), the reddening vectors do not act to broaden our observed color indices (i.e., the reddening vectors align along the RGB sequences). Therefore, differential reddening across M22 is not thought to significantly affect our results (see also Lee, 2015). To examine this claim, we derived the mean \((b-y)\) fiducial sequences of each population and calculated \(\Delta E(B-V)\) values of individual RGB stars from their \(E(b-y)\). Finally, we calculated the reddening-corrected color indices, and we show them in Figure 2. Except for the \((b-y)\) color, these corrections hardly reduce the RGB widths, suggesting that the broad RGB widths in \(hk_{\rm JWL}\), \(cn_{\rm JWL}\), \(ch_{\rm JWL}\), and \(nh_{\rm JWL}\) are mainly due to variations in the metallicity, carbon, and nitrogen abundances.
## 4 Metallicity, Carbon, and Nitrogen Abundances
Figure 2: CMDs of member RGB stars with \(-\)2.5 mag \(\leq V-\)\(\rm{\it{HB}}\)\(\leq\) 3.0 mag in M22. In the top panels, we show differential reddening vectors of individual color indices for \(\Delta E(B-V)\) = 0.1 mag with red arrows.
We derive the metallicity of individual RGB stars in M22 using our \(hk_{\rm JWL}\) measurements. We retrieved model isochrones for [Fe/H] = \(-\)2.1, \(-\)1.9, \(-\)1.7, and \(-\)1.5 dex, \(Y\) = 0.247(0.248), 0.275, and 0.300 with [\(\alpha\)/Fe] = +0.4 dex, and an age of 12.5 Gyr from a Bag of Stellar Tracks and Isochrones (Pietrinferni et al., 2021). We adopted different CNO abundances, [C/Fe] = (\(-\)0.6, \(\Delta\)[C/Fe] = 0.2, 0.6), [N/Fe] = (\(-\)0.8, \(\Delta\)[N/Fe] = 0.4, 1.6), and [O/Fe] = (0.1, 0.3, 0.5), for each model grid. Note that our presumed CNO abundances do not affect our photometric metallicity (Lee, 2022). We constructed 97 model atmospheres and synthetic spectra for each chemical composition from the lower main sequence to the tip of the RGB using ATLAS12 (Kurucz, 2011) and the latest version of MOOGSCAT (Sobeck et al., 2011; Sneden, 1974). As we discussed in our previous works (Lee & Sneden, 2021; Lee, 2023), the latest version of MOOGSCAT (Sobeck et al., 2011) takes proper care of Rayleigh scattering from neutral hydrogen (RSNH) atoms with a nonlocal thermodynamic equilibrium treatment of the source function, which is important for calculating continuum opacities for our short-wavelength indices, such as JWL34 (i.e., \(nh_{\rm JWL}\)), owing to the \(\lambda^{-4}\) dependence of the RSNH cross section (see Lee & Sneden, 2021, and references therein).
The photometric metallicity of individual RGB stars can be calculated using the following relation (also see Appendices of Lee & Sneden, 2021)
\[{\rm[Fe/H]}_{hk}\approx f_{1}(hk_{\rm JWL},\ M_{V}). \tag{5}\]
We obtained the mean [Fe/H]\({}_{hk}\) = -1.839 \(\pm\) 0.003 dex (\(\sigma\) = 0.129) and we show our results in Figure 3. We emphasize that our photometric metallicity does not show any gradient against \(V\) magnitude and exhibits a bimodal distribution as already well known (Lee et al., 2009; Lee, 2015, 2016; Marino et al., 2009, 2011).
In order to perform populational tagging for metallicity, we applied an expectation-maximization (EM) algorithm for a two-component Gaussian mixture model to our [Fe/H]\({}_{hk}\) distribution. Stars with \(P\)([Fe/H]\({}_{hk}\)\(|x_{i}\)) \(\geq\) 0.5 from the EM estimator correspond to the MP population, where \(x_{i}\) denotes the individual RGB stars, while those with \(P\)([Fe/H]\({}_{hk}\)\(|x_{i}\)) \(<\) 0.5 correspond to the MR population.1 We obtained a populational number ratio of \(n\)(MP):\(n\)(MR) = 68.4:31.6 (\(\pm\)1.4), and our current result is consistent with our previous result, \(n\)(Ca-w):\(n\)(Ca-s) = 70:30 (Lee, 2015). We show the metallicity distributions of the two populations in Figure 3(b). We obtained [Fe/H]\({}_{hk}\) = \(-\)1.914 \(\pm\) 0.002 (\(\sigma\) = 0.070) for the MP population and \(-\)1.676 \(\pm\) 0.002 (\(\sigma\) = 0.050) for the MR population. The difference between the two metallicity groups, \(\Delta\)[Fe/H]\({}_{hk}\) = 0.238 \(\pm\) 0.003 dex, is in excellent agreement with those found in spectroscopic analyses (e.g., Lee, 2016). We also examined the cumulative radial distributions (CRDs), which we show in Figure 3(c). Kolmogorov-Smirnov tests indicate that the MP and MR populations are most likely drawn from different parent CRDs (see also Lee et al., 2009; Lee, 2015).
Footnote 1: Note that the MP population of the current study corresponds to the Ca-w (Lee, 2015) and the G1 (Lee, 2020) populations, while the MR corresponds to the Ca-s and the G2, respectively.
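For concreteness, the populational tagging described above can be sketched as follows; this minimal illustration uses scikit-learn's GaussianMixture (an assumption on our part, since the EM implementation is not specified here) with hypothetical variable names.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def tag_populations(feh, threshold=0.5, seed=0):
    """Two-component Gaussian mixture on [Fe/H]; returns the posterior of the
    metal-poor component and a boolean MP membership flag."""
    X = np.asarray(feh).reshape(-1, 1)
    gmm = GaussianMixture(n_components=2, random_state=seed).fit(X)
    mp = int(np.argmin(gmm.means_.ravel()))      # metal-poor = lower-mean component
    p_mp = gmm.predict_proba(X)[:, mp]
    return p_mp, p_mp >= threshold
```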
With our differential reddening estimates as mentioned above, we calculated the metallicity from the dereddened \(hk_{\rm JWL}\) colors, finding [Fe/H]\({}_{hk}\) = \(-\)1.840 \(\pm\) 0.003 dex (\(\sigma\) = 0.129), \(-\)1.915 \(\pm\) 0.002 (\(\sigma\) = 0.069), and \(-\)1.676 \(\pm\) 0.002 (\(\sigma\) = 0.058) for the mean value, the MP, and the MR populations, with an identical populational number ratio \(n\)(MP):\(n\)(MR) = 68.4:31.6 (\(\pm\)1.4). As we pointed out above, we emphasize again that the differential reddening does not appear to affect our results.
Using the following relations (Lee, 2021; Lee & Sneden, 2021; Lee, 2023),
\[{\rm[C/Fe]}_{ch}\approx f_{2}(ch_{\rm JWL},\ {\rm[Fe/H]}_{hk},\ M_{V}), \tag{6}\] \[{\rm[N/Fe]}_{nh}\approx f_{3}(nh_{\rm JWL},\ {\rm[Fe/H]}_{hk},\ M_{V}), \tag{7}\]
we derive the photometric [C/Fe]\({}_{ch}\) and [N/Fe]\({}_{nh}\) of each population in M22. In Figure 4, we show [C/Fe]\({}_{ch}\) and [N/Fe]\({}_{nh}\) against the \(V\) magnitude with and without the differential reddening correction, suggesting that the differential reddening correction does not significantly affect our photometric [C/Fe]\({}_{ch}\) and [N/Fe]\({}_{nh}\). Our results clearly show that the carbon abundance decreases and nitrogen abundance increases in RGB stars brighter than the RGBB (\(V\approx\) 14.0 mag for M22; Lee, 2015) as can be seen in our previous studies of other Galactic GCs, M5, M3, and M92 (Lee & Sneden, 2021; Lee, 2021, 2023).
In Figures 4(i-j), the distributions of the MP and MR on the [C/Fe]\({}_{ch}\) versus [N/Fe]\({}_{nh}\) plane are different in the sense
Figure 3: (a) The [Fe/H]\({}_{hk}\) distribution of M22 RGB stars. The blue and red colors denote the MP and MR populations. (b) The histogram of the [Fe/H]\({}_{hk}\) along with results returned from our EM estimator. (c) CRDs of each RGB population. (d–f) Same as (a–c) but for the [Fe/H]\({}_{hk}\) from the dereddened \(hk_{\rm JWL}\) colors.
that at a given carbon abundance the nitrogen abundances of the MP RGB stars are lower, confirming the previous results of Marino et al. (2011) and Lee (2015). As Marino et al. (2011) noted, in the absence of any discernible oxygen abundance differences, the difference in the total CNO abundances between the two populations in M22 is likely responsible for this separation.
In Figure 4, we emphasize that the degrees of carbon depletion for the M22 MP and MR RGB stars brighter than the RGBB appear to be quite different. On the other hand, the degree of nitrogen enhancement is not as clear as that of the carbon depletion, at least in part because the nitrogen enhancement depends on the initial nitrogen abundances (e.g., Angelou et al., 2011).
We performed subpopulational tagging for the MP and MR populations. Owing to the carbon depletion and the nitrogen enhancement in RGB stars brighter than the RGBB, neither [C/Fe]\({}_{ch}\) nor [N/Fe]\({}_{nh}\) is a proper probe for this populational tagging. Instead, we perform the tagging using our \(\|ch_{\rm JWL}\) and \(\|nh_{\rm JWL}\) indices, which are defined as
\[\|\,ch_{\rm JWL}\equiv\frac{ch_{\rm JWL}-ch_{\rm JWL,red}}{ch_{\rm JWL,red}-ch_{\rm JWL,blue}}, \tag{8}\] \[\|\,nh_{\rm JWL}\equiv\frac{nh_{\rm JWL}-nh_{\rm JWL,red}}{nh_{\rm JWL,red}-nh_{\rm JWL,blue}}, \tag{9}\]
where the subscripts denote the fiducials of the red and blue sequences of the individual color indices (see also Lee & Sneden, 2021). With these procedures, the curvature of the RGB sequence on the \(ch_{\rm JWL}\) and \(nh_{\rm JWL}\) CMDs can be effectively removed, and we can then perform populational tagging for the bright RGB stars using these normalized color indices (e.g., Lee, 2020). We made histograms of \(\|ch_{\rm JWL}\) and \(\|nh_{\rm JWL}\), which together are roughly equivalent to [C/N], and each histogram shows at least two distinctive peaks. We applied EM algorithms to each population and obtained subpopulational number ratios of \(n\)(FG):\(n\)(SG) = 49.0:51.0 (\(\pm\)2.4) for the MP and 50.0:50.0 (\(\pm\)3.5) for the MR.
With our subpopulations, we derived the \(V\) magnitude dependence of the carbon abundance depletion, \(d\)[C/Fe]/\(dM_{V}\). In order to derive the slopes of this relation, we used an ordinary least-squares fit and a robust fit that minimizes the absolute deviation (MEDFIT: Press et al., 1986). We show our results in Figure 5 and Table 1. As shown, the M22 MP population has steeper gradients than the MR population does, mainly due to the difference in the mean metallicity between the two populations (Charbonnel & Zahn, 2007). Importantly, the SG subpopulations also appear to have slightly steeper gradients than the FG, although the differences are not statistically significant.
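A minimal sketch of the two fits is given below; it assumes NumPy/SciPy and uses a Nelder-Mead minimization of the absolute deviation as a stand-in for MEDFIT, so it is illustrative rather than the exact implementation used here.

```python
import numpy as np
from scipy.optimize import minimize

def carbon_slopes(M_V, c_fe):
    """d[C/Fe]/dM_V from an ordinary least-squares fit and from a robust fit
    minimizing the absolute deviation (a stand-in for MEDFIT)."""
    M_V, c_fe = np.asarray(M_V, float), np.asarray(c_fe, float)
    slope_ls, zp_ls = np.polyfit(M_V, c_fe, 1)
    lad = lambda p: np.abs(c_fe - (p[0] * M_V + p[1])).sum()
    slope_lad, zp_lad = minimize(lad, x0=[slope_ls, zp_ls], method="Nelder-Mead").x
    return (slope_ls, zp_ls), (slope_lad, zp_lad)
```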
Figure 5: Plots of [C/Fe]\({}_{ch}\) vs. \(M_{V}\) of the individual populations of M22, M5, M3, and M92 for RGB stars brighter than their RGBB \(V\) magnitudes. The blue and red solid lines denote the least-squares fitting and the robust fitting by minimizing the absolute deviation, respectively. The numbers in parentheses are the mean metallicity of each GC.
Figure 6: (a) A plot of \(d[{\rm C/Fe}]/dM_{V}\) vs. [Fe/H] returned from the least-squares fitting (blue solid lines in Figure 5). The blue and red colors denote the FG and SG, respectively. The green and gold solid lines show a linear regression with 95% confidence intervals. The yellow shaded box indicates the mean value of Smith & Martell (2003). (b) Same as (a) but for results returned from the robust fitting by minimizing the absolute deviation (red solid lines in Figure 5).
| Cluster | Pop. | [Fe/H]\({}^{a}\) | [Fe/H]\({}_{hk}\) | \(d\)[C/Fe]/\(dM_{V}\)\({}^{b}\) | \(d\)[C/Fe]/\(dM_{V}\)\({}^{c}\) | \(d\)[C/Fe]/\(dM_{V}\)\({}^{d}\) |
| --- | --- | --- | --- | --- | --- | --- |
| M22-MR | FG | | \(-\)1.68 | 0.104 \(\pm\) 0.033 | 0.150 \(\pm\) 0.088 | |
| | SG | | | 0.142 \(\pm\) 0.029 | 0.156 \(\pm\) 0.074 | |
| M22-MP | FG | | \(-\)1.91 | 0.203 \(\pm\) 0.024 | 0.235 \(\pm\) 0.090 | |
| | SG | | | 0.227 \(\pm\) 0.020 | 0.198 \(\pm\) 0.092 | |
| M5 | FG | \(-\)1.29 | \(-\)1.30 | 0.096 \(\pm\) 0.019 | 0.077 \(\pm\) 0.056 | |
| | SG | | | 0.106 \(\pm\) 0.017 | 0.141 \(\pm\) 0.089 | |
| M3 | FG | \(-\)1.50 | \(-\)1.50 | 0.159 \(\pm\) 0.012 | 0.105 \(\pm\) 0.049 | 0.236 \(\pm\) 0.033 |
| | SG | | | 0.206 \(\pm\) 0.016 | 0.187 \(\pm\) 0.055 | |
| M92 | FG | \(-\)2.31 | \(-\)2.31 | 0.391 \(\pm\) 0.022 | 0.315 \(\pm\) 0.066 | 0.227 \(\pm\) 0.045 |
| | SG | | | 0.374 \(\pm\) 0.032 | 0.396 \(\pm\) 0.119 | |

\({}^{a}\)Harris (1996, 2010 version). \({}^{b}\)A least-squares fit. \({}^{c}\)A fit minimizing the absolute deviation. \({}^{d}\)Smith & Martell (2003).

Table 1: Magnitude Dependence of Carbon Abundance.
The \(d\)[C/Fe]/\(dM_{V}\) values of Smith & Martell (2003) are listed in the last column of Table 1, and their slopes for M3 and M92 are slightly different from our values.
In Figure 6, we show plots of \(d[{\rm C}/{\rm Fe}]/dM_{V}\) versus [Fe/H] for the case of a least-squares fit (Case 1; column (5) of Table 1) and a robust fit that minimizes the absolute deviation (Case 2; column (6) of Table 1), respectively. We obtained the following relations
\[\frac{d[{\rm C}/{\rm Fe}]}{dM_{V}} = -0.261(\pm 0.043)[{\rm Fe}/{\rm H}]-0.253(\pm 0.077), \tag{10}\] \[\frac{d[{\rm C}/{\rm Fe}]}{dM_{V}} = -0.241(\pm 0.037)[{\rm Fe}/{\rm H}]-0.224(\pm 0.066), \tag{11}\]
and, not surprisingly, our results clearly show that \(d[{\rm C}/{\rm Fe}]/dM_{V}\) depends on metallicity. Note that Smith & Martell (2003) obtained a mean value of \(d[{\rm C}/{\rm Fe}]/dM_{V}\approx\) 0.22 \(\pm\) 0.03 from three GCs (M92, NGC 6397, and M3) and MP halo giants. On the other hand, Martell, Smith, & Briley (2008) argued that the carbon depletion rate\({}^{2}\) doubles from [Fe/H] = \(-\)1.3 to \(-\)2.3.
Footnote 2: We note that the carbon depletion rate of Martell, Smith, & Briley (2008), \(\Delta[{\rm C}/{\rm Fe}]/\Delta t\), is a different quantity from ours, \(d[{\rm C}/{\rm Fe}]/dM_{V}\).
As we mentioned above, the slopes of \(d[{\rm C}/{\rm Fe}]/dM_{V}\) for the FG and SG appear to be slightly different for M5, M3, and M92, in the sense that the SG tends to have a steeper slope. We argue that different initial helium abundances between the FG and SG are responsible for the difference in the slopes. Owing to the high helium abundances of the SG stars, which can be inferred from their RGBB \(V\) magnitudes (e.g., Lee, 2020, 2023; Lee & Sneden, 2021), the SG RGB stars have higher temperatures at the hydrogen-burning shell, which causes slightly faster carbon destruction through the CN cycle (e.g., Church et al., 2014).
## 6 Summary
We investigated the photometric metallicity, carbon, and nitrogen abundances of the metal-complex globular cluster M22. Our results confirmed previous results that M22 contains at least two MSPs with heterogeneous metallicities (Lee et al., 2009; Lee, 2015, 2016; Marino et al., 2009, 2011).
We obtained [Fe/H]\({}_{hk}\) = \(-\)1.839 \(\pm\) 0.003 dex (\(\sigma\) = 0.129), \(-\)1.914 \(\pm\) 0.002 (\(\sigma\) = 0.070), and \(-\)1.676 \(\pm\) 0.002 (\(\sigma\) = 0.050) for all RGB stars, the MP population, and the MR population, respectively. Applying a differential reddening correction does not affect our photometric metallicity measurements; the changes in the mean values are no larger than 0.001 dex.
Our [C/Fe]\({}_{ch}\) and [N/Fe]\({}_{nh}\) measurements of the individual populations in M22 show that each population exhibits evidence of CN-cycle processing accompanied by deep mixing episodes of different degrees.
With our carbon abundance measurements for M22 and for the other GCs in our previous studies, M5 (Lee, 2021), M3 (Lee & Sneden, 2021), and M92 (Lee, 2023), we investigated the surface carbon depletion rates, \(d[{\rm C}/{\rm Fe}]/dM_{V}\), of Milky Way GCs as a function of metallicity, finding \(d[{\rm C}/{\rm Fe}]/dM_{V}\propto-0.25\)[Fe/H]. We also argued that the carbon depletion rates of the SG are larger than those of the FG, most likely due to different initial helium abundances between the FG and SG, which cause different internal temperature profiles.
The results presented here can provide critical constraints both on the mixing efficiency in theoretical models, which is largely unknown, and on the interpretation of the carbon abundance evolution seen in bright halo RGB stars.
## Acknowledgments
J.-W.L. acknowledges financial support from the Basic Science Research Program (grant No. 2019R1A2C2086290) through the National Research Foundation of Korea (NRF) and from the faculty research fund of Sejong University in 2022. He also thanks the anonymous referee for encouraging comments. SMARTS: 1.0 m (STA), WIYN: 0.9 m (HDI, S2KB), Gaia.
|
2310.16073
|
FloCoDe: Unbiased Dynamic Scene Graph Generation with Temporal
Consistency and Correlation Debiasing
|
Dynamic scene graph generation (SGG) from videos requires not only a
comprehensive understanding of objects across scenes but also a method to
capture the temporal motions and interactions with different objects. Moreover,
the long-tailed distribution of visual relationships is a crucial bottleneck
for most dynamic SGG methods. This is because many of them focus on capturing
spatio-temporal context using complex architectures, leading to the generation
of biased scene graphs. To address these challenges, we propose FloCoDe:
Flow-aware Temporal Consistency and Correlation Debiasing with uncertainty
attenuation for unbiased dynamic scene graphs. FloCoDe employs feature warping
using flow to detect temporally consistent objects across frames. To address
the long-tail issue of visual relationships, we propose correlation debiasing
and a label correlation-based loss to learn unbiased relation representations
for long-tailed classes. Specifically, we propose to incorporate label
correlations using contrastive loss to capture commonly co-occurring relations,
which aids in learning robust representations for long-tailed classes. Further,
we adopt the uncertainty attenuation-based classifier framework to handle noisy
annotations in the SGG data. Extensive experimental evaluation shows a
performance gain as high as 4.1%, demonstrating the superiority of generating
more unbiased scene graphs.
|
Anant Khandelwal
|
2023-10-24T14:59:51Z
|
http://arxiv.org/abs/2310.16073v3
|
# Correlation Debiasing for Unbiased Scene Graph Generation in Videos
###### Abstract
Dynamic scene graph generation (SGG) from videos requires not only a comprehensive understanding of objects across scenes, which are prone to temporal fluctuations, but also a model of the temporal motions and interactions between different objects. Moreover, the long-tailed distribution of visual relationships is a crucial bottleneck for most dynamic SGG methods, since most of them focus on capturing spatio-temporal context using complex architectures, which leads to the generation of biased scene graphs. To address these challenges, we propose _FloCoDe_: **Flow**-aware temporal consistency and **Cor**relation **De**biasing with uncertainty attenuation for unbiased dynamic scene graphs. FloCoDe employs feature warping using flow to detect temporally consistent objects across frames. In addition, it uses correlation debiasing to learn unbiased relation representations for long-tailed classes. Moreover, to attenuate the predictive uncertainties, it uses a mixture of sigmoidal cross-entropy loss and contrastive loss to incorporate label correlations, identify commonly co-occurring relations, and help debias the long-tailed ones. Extensive experimental evaluation shows a performance gain as high as 4.1%, demonstrating the superiority of FloCoDe in generating more unbiased scene graphs.
## 1 Introduction
Scene graph generation for videos (VidSGG) aims to represent a video in the form of a dynamic graph that captures the temporal evolution of the relationships between pairs of objects. VidSGG has direct use in various downstream applications such as visual question answering [1, 44, 52], video captioning [53], and video retrieval [8, 40, 51]. VidSGG is considered more challenging than its image-based counterpart since the relations between identified object pairs are dynamic along the temporal dimension, making this a multi-label problem. VidSGG is still at a relatively nascent stage compared to SGG (scene graph generation) from static images [7, 22, 23, 24, 27, 30, 43, 57, 58]. Several works [6, 11, 17, 26, 28, 36, 46] have proposed to solve VidSGG, mostly with spatio-temporal sequence processing with transformers [3, 4, 12, 18, 32, 42, 49]. Most of these methods simply focus on building complex models that can effectively aggregate the spatio-temporal information in a video, but they lack the ability to address the data imbalance in the relation/predicate classes. Their performance is quite good in terms of the Recall@K metric, which is biased towards frequent classes. However, another metric, mean-Recall@K, has been proposed [5, 43] to quantify the performance in the presence of low-frequency classes, since it gives an overall view of SGG models rather than considering only high-frequency classes. Recent methods [31, 35] have proposed to deal with class imbalance using memory-based debiasing and uncertainty attenuation for classification, but their debiasing is based on learnable attention and transformer weights, which risk becoming biased towards high-frequency classes. One such case is shown in Figure 2 in a qualitative comparison with our method. Moreover, uncertainty attenuation based on Gaussian Mixture Models (GMMs) has several limitations, as discussed in [34, 38, 56]. To overcome these limitations of GMM-based classification, some works have proposed regularization with a mixture of standard deviations [19], and others have proposed loss functions that can incorporate label correlations [38]. Further approaches [25, 54] have also tried to address biased relation predictions. Li et al. [25] proposed to weaken the false correlation between input data and the predicate labels. Xu et al. [54] considered biases in a meta-learning paradigm. These approaches mitigate the long-tail problem to some extent, but the performance is still not satisfactory.
In this work, we focus not only on improving unbiased predicate classification but also on improving object detection. We propose to use flow-warped features in the temporal dimension to compensate for the dynamic fluctuations in a video. Our analysis (Table 4) shows that the major bottleneck in existing dynamic SGG is the incorrect detection of objects across the video frames. Moreover, previous methods [31] use memory correlation to debias the predicate embeddings, which
risks biasing the attention weights towards highly frequent classes. In order to mitigate this, we propose to debias the predicate embeddings during the generation stage itself: since the correlation between the predicate embeddings and the entities is primarily driven by high-frequency relation classes, we make this correlation unbiased so that the learned embeddings are themselves debiased. Furthermore, the uncertainty attenuation in existing methods [31] does not take label correlations into account. Hence, we propose to use mixture logit networks (MLNs) that can distinguish between two different types of predictive uncertainty: aleatoric and epistemic uncertainty. In addition to the uncertainty-aware classification loss, we introduce a supervised contrastive loss to take label correlations into account. The objective of the multi-label contrastive loss is to pull together predicate representations that share at least one class while pushing apart negative samples that do not share any classes. Further, we regularize the uncertainty-aware training loss with the aleatoric and epistemic uncertainties to explicitly penalise the loss function, which can reduce the uncertainty, especially for low-frequency classes where the probability of an incorrect prediction is high. Combining all of this, we name our framework FloCoDe: **Flow**-aware temporal consistency and **Cor**relation **De**biasing with uncertainty attenuation for unbiased dynamic SGG. The major contributions of this paper are: 1) FloCoDe models both (a) the aleatoric and epistemic uncertainty associated with dynamic SGG and (b) label correlations to produce more unbiased scene graphs. 2) It uses a novel correlation-guided debiased learning of predicate embeddings that avoids bias in the learnable weights of the attention and transformer decoder. 3) It uses flow-aware, temporally consistent object detection for accurate classification of nodes in scene graphs. 4) FloCoDe achieves significant gains in mR@K [43] and R@K, highlighting its superiority in generating unbiased scene graphs.
## 2 Related Work
**Image Scene Graph Generation**: Image-based scene graph generation (ImgSGG) is the task of obtaining a structured graph summarization of an image, with objects as nodes and their relationships (formally called predicates) as edges. There exists a large body of work on ImgSGG, commonly benchmarked on Visual Genome (VG) [22]. Some of these works focus on developing efficient ways of aggregating spatial context [24, 27, 30, 57, 58], while some of the latest works address fundamental problems such as preventing biased scene graphs caused by the long-tailed predicate distribution and noisy annotations in the dataset.
**Video Scene Graph Generation (VidSGG)**: With the successful exploration of spatial context within images, video researchers have started to explore the spatial context and temporal correlation between objects detected across frames. As in ImgSGG, long-tailed predicates and noisy annotations still exist in the VidSGG benchmark Action Genome [16]; in addition, there is the challenge of addressing temporal fluctuations across frames. Numerous approaches [28, 36, 46, 48, 59] have employed object-tracking mechanisms to tackle the temporal fluctuations among different frames. However, object-tracking models incur high computational costs and memory consumption; they also accumulate information from irrelevant frames, leading to suboptimal performance. STTran [6] proposes a strong baseline that adopts a spatial encoder and a temporal decoder to implicitly extract spatial-temporal contexts. Other works [26, 50] are also based on extracting temporal correlations: some ensure temporal continuity by extracting the entire co-occurrence pattern, while others propose pre-training paradigms to model the temporal correlations implicitly. Many works [3, 32, 42, 49] rely on the superior sequence-processing ability of transformers for the spatio-temporal context of visual relations. However, despite their success, these mostly provide gains for high-frequency classes and suffer from long-tail bias. The recent work TEMPURA [31] tries to address the long-tail problem using an uncertainty-guided loss function. We go beyond this and explore label correlations to further reduce prediction uncertainty.
## 3 Method
### Preliminary
**Problem Statement**: Given a video \(\mathcal{V}=\{I_{1},I_{2},I_{3},...,I_{T}\}\) consisting of \(T\) frames, the goal of dynamic SGG is to generate scene graphs denoted as \(\mathcal{G}=\{G_{t}\}_{t=1}^{T}\). \(G_{t}=\{V_{t},E_{t}\}\) is the scene graph of frame \(I_{t}\), where \(V_{t}\) is the set of nodes and \(E_{t}\) is the set of relations forming edges between the nodes in \(V_{t}\). Nodes in \(V_{t}\) are connected to each other by predicates in \(E_{t}\), forming multiple \(<\)_subject-predicate-object_\(>\) triplets. The sets of object and predicate classes are referred to as \(\mathcal{Y}_{o}=\{y_{o_{1}},y_{o_{2}},y_{o_{3}},...,y_{o_{\mathcal{C}_{o}}}\}\) and \(\mathcal{Y}_{r}=\{y_{r_{1}},y_{r_{2}},y_{r_{3}},...,y_{r_{\mathcal{C}_{r}}}\}\), respectively.
**Object Detection and Relation Representation**: Using an off-the-shelf object detector (Faster R-CNN [37]), we obtain the set of objects \(O_{t}=\{o_{i}^{t}\}_{i=1}^{N(t)}\), where \(N(t)\) is the number of objects detected in frame \(I_{t}\). Each object in the \(t^{th}\) frame is denoted as \(o_{i}^{t}=\{b_{i}^{t},v_{i}^{t},c_{o_{i}}^{t}\}\), where \(b_{i}^{t}\in\mathbb{R}^{4}\) is the bounding box, \(v_{i}^{t}\in\mathbb{R}^{2048}\) is the RoIAligned [14] proposal feature of \(o_{i}^{t}\), and \(c_{o_{i}}^{t}\) is its predicted class. However, the object class \(c_{o_{i}}^{t}\) fluctuates across the frames and is not coherent even for the same object. Existing works [46] address this by incorporating object-tracking algorithms; in contrast, our strategy compensates for these dynamic fluctuations using flow-warped features and ensures temporal coherence. Additionally, we extract the base features \(f_{t},t\in[1,T]\) and ROIs from Faster R-CNN using ResNet-101 [13]. We warp these base features using
the temporal flow and compute the RoIAligned warped object features \(v_{i}^{t\to t^{\prime}}\) (\(t^{\prime}\) represents the immediately preceding frame that contains the \(i^{th}\) object) using the predicted ROIs.
### Temporal Flow-Aware Object Detection
Object detectors trained on static images are prone to misclassifying the same object in different frames. Existing methods [6, 17, 26, 50] either use \(c_{o_{i}}^{t}\) obtained from object detection in each frame or use object feature to classify objects. However, these methods do not compensate for temporal fluctuations in the videos. Inspired by FPGA[60], which uses flow-guided feature aggregation for object detection in videos, we propose to leverage flow-warped features and temporal processing for consistent object detection across frames. We introduce _Temporal Flow-Aware Object Detection (TFoD)_, which utilises transformer encoder [49] with masked self-attention (_TEnc_) to process the set of temporal object sequences \(\mathcal{O_{V}}\), which is constructed as follows:
\[\mathcal{O_{V}}=\{\mathcal{O}_{t_{1},k_{1}}^{1},\mathcal{O}_{t_{2},k_{2}}^{2},...,\mathcal{O}_{t_{\mathcal{O}_{o}},k_{\mathcal{C}_{o}}}^{\hat{C} _{o}}\},\text{ where}\\ \mathcal{O}_{t_{j},k_{j}}^{j}=\{v_{i}^{t},v_{i}^{t+1},......,v_{i }^{k}\} \tag{1}\]
Each entry of \(\mathcal{O}_{t_{j},k_{j}}^{j}\) corresponds to an object of the same detected class \(c_{o_{j}}\), here \(1\leq t_{j},k_{j}\leq T\) and \(\hat{\mathcal{C}}_{o}\leq\mathcal{C}_{o}\) denoting all the detected classes in a video \(\mathcal{V}\). However, the detected class labels can be noisy since they are based only on frame-level predictions; hence, we use the flow-warped feature \(v_{i}^{t\to t^{\prime}}\) instead of \(v_{i}^{t}\) before feeding it to the transformer encoder. The flow-warped feature is computed as:
\[f_{t\to t^{\prime}}=\mathcal{W}(f_{t},\mathcal{F}(I_{t^{\prime}},I_{t})) \tag{2}\]
where \(\mathcal{W}\) is a bilinear warping function [60, 61] applied at all locations for each channel of the feature maps, and the flow field \(\mathcal{F}(I_{t^{\prime}},I_{t})\) is computed from the pre-trained Flow-Net [9], where \(t^{\prime}\) is the index of the frame immediately previous to the \(t^{th}\) frame containing the same object, as depicted in the object sequences \(\mathcal{O_{V}}\). Using \(f_{t\to t^{\prime}}\) and the predicted ROIs, the warped RoIAligned feature is computed, and then \(\mathcal{O}_{t_{j},k_{j}}^{j}\) is denoted as:
\[\mathcal{O}_{t_{j},k_{j}}^{j}=\{v_{i}^{t},v_{i}^{t+1\to t},......,v_{i}^{k}\} \tag{3}\]
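The warping of Eq. (2) can be sketched in PyTorch as follows; the flow field is assumed to come from an external flow network, and the helper below is an illustration rather than the exact implementation used here.

```python
import torch
import torch.nn.functional as F

def warp_features(feat_t, flow):
    """Bilinear warping of Eq. (2): feat_t is (B, C, H, W), flow is (B, 2, H, W)
    giving per-pixel (dx, dy) displacements from frame t towards frame t'."""
    B, _, H, W = feat_t.shape
    ys, xs = torch.meshgrid(torch.arange(H, device=feat_t.device),
                            torch.arange(W, device=feat_t.device), indexing="ij")
    base = torch.stack((xs, ys), dim=0).float().unsqueeze(0)   # (1, 2, H, W)
    coords = base + flow                                       # displaced sampling locations
    # normalize to [-1, 1] as required by grid_sample (x first, then y)
    gx = 2.0 * coords[:, 0] / max(W - 1, 1) - 1.0
    gy = 2.0 * coords[:, 1] / max(H - 1, 1) - 1.0
    grid = torch.stack((gx, gy), dim=-1)                       # (B, H, W, 2)
    return F.grid_sample(feat_t, grid, mode="bilinear", align_corners=True)
```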
Each of the \(\mathcal{O}_{t_{j},k_{j}}^{j}\) is zero-padded to form the input tensor. _TEnc_ uses masked multi-head self-attention instead of the standard multi-head self-attention of the transformer encoder [49]. The mask is introduced to learn the temporal dependencies in a unidirectional manner, so that an object at frame index \(t\) can only attend to objects in previous frames; attending to future context can be noisy, since future frames are more likely to contain unrelated objects. For any input \(\mathbf{X}\), the single-head masked attention \(\mathbb{A}\) is given as:
\[\mathbb{A}(\mathbf{Q},\mathbf{K},\mathbf{V})=softmax\left(\frac{mask( \mathbf{Q}\mathbf{K}^{T})}{\sqrt{D_{K}}}\right)\mathbf{V} \tag{4}\]
where \(D_{K}\) is the dimension of \(\mathbf{K}\), and \(\mathbf{Q},\mathbf{K},\mathbf{V}\) are the query, key, and value vectors, respectively. Here, \(\mathbf{Q=K=V=X}\), the multi-head attention is \(\mathbb{M}(X)=concat(a_{1},a_{2},...a_{H})W_{H}\), where \(a_{i}=\mathbb{A}(\mathbf{X}W_{Q_{i}},\mathbf{X}W_{Ki},\mathbf{X}W_{Vi})\) where \(W_{Q_{i}}\),\(W_{Ki}\),\(W_{Vi}\) and \(W_{H}\) are the learnable weight matrices. The rest of the components, like residual connection, normalisation, and FFN (feed-forward network), remain the same as in the transformer encoder [49]. The output of n-layered _TEnc_ is given as:
\[X_{out}^{(n)}=TEnc(X_{out}^{(n-1)}),\;X_{out}^{(0)}=\hat{\mathcal{O}}_{V} \tag{5}\]
where \(\hat{\mathcal{O}}_{V}=\mathcal{O}_{V}+P_{o}^{T}\), with \(P_{o}^{T}\) the fixed positional embeddings injecting the temporal position of objects. Inspired by the properties of neural collapse [33], we pre-fix the classifier weights (forming an Equiangular Tight Frame, ETF) for each object class to induce a maximally separable classifier even under class imbalance. The pre-fixed classifier weights \(\mathbf{W}_{ETF}\) are given as:
\[\mathbf{W}_{ETF}=\sqrt{\frac{\mathcal{C}_{o}}{\mathcal{C}_{o}-1}}\mathbf{U} \left(\mathbf{I}_{\mathcal{C}_{o}}-\frac{1}{\mathcal{C}_{o}}\mathbf{1}_{ \mathcal{C}_{o}}\mathbf{1}_{\mathcal{C}_{o}}^{T}\right) \tag{6}\]
where \(\mathbf{W}_{ETF}=[\mathbf{w}_{1},\mathbf{w}_{2},.....\mathbf{w}_{\mathcal{ C}_{o}}]\in R^{d\times\mathcal{C}_{o}}\), \(\mathbf{U}\in R^{d\times\mathcal{C}_{o}}\), allows a rotation and satisfies \(\mathbf{U}^{T}\mathbf{U}=\mathbf{I}_{\mathcal{C}_{o}},\mathbf{I}_{\mathcal{C}_ {o}}\) is the identity matrix, and \(\mathbf{1}_{\mathcal{C}_{o}}\) is an all-ones vector. The object classification loss is then given as:
\[\mathcal{L}_{o}(x_{o_{i}},\mathbf{W}_{ETF})=\frac{1}{2}\left(\mathbf{w}_{c_{o_{i}}}^{T}\hat{x}_{o_{i}}-1\right)^{2} \tag{7}\]
where \(\hat{x}_{o_{i}}=x_{o_{i}}/||x_{o_{i}}||\), \(x_{o_{i}}\in X_{out}^{(n)}\), and \(\mathbf{w}_{c_{o_{i}}}\) is the fixed prototype in \(\mathbf{W}_{ETF}\) for object class \(c_{o_{i}}\), with \(||\mathbf{w}_{c_{o_{i}}}||=1\). Finally, the converged features are aligned with \(\mathbf{W}_{ETF}\), and thus the ETF structure predicted by neural collapse is attained. This pulls the object features of the same class towards a common prototype and pushes away the features of other classes. The theoretical advantage of this loss has been proved in [55].
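The fixed ETF prototypes of Eq. (6) and the classification loss of Eq. (7) can be sketched as follows (PyTorch, assuming \(d\geq\mathcal{C}_{o}\)); the helper names are hypothetical and the loss is written in the squared dot-regression form of [55].

```python
import torch
import torch.nn.functional as F

def etf_prototypes(num_classes, dim, seed=0):
    """Fixed ETF classifier weights of Eq. (6); requires dim >= num_classes."""
    g = torch.Generator().manual_seed(seed)
    U, _ = torch.linalg.qr(torch.randn(dim, num_classes, generator=g))   # orthonormal columns
    centering = torch.eye(num_classes) - torch.ones(num_classes, num_classes) / num_classes
    W = (num_classes / (num_classes - 1)) ** 0.5 * U @ centering
    return F.normalize(W, dim=0)                       # unit-norm prototype per class, (dim, C)

def etf_loss(x, labels, W):
    """Dot-regression classification loss of Eq. (7), averaged over the batch."""
    x_hat = F.normalize(x, dim=-1)                     # (N, dim) normalized object features
    w_c = W[:, labels].t()                             # (N, dim) prototype of each ground-truth class
    return 0.5 * ((w_c * x_hat).sum(-1) - 1.0).pow(2).mean()
```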
### Correlation-Aware Predicate Embedding
The relationship between objects is governed by three types of correlations: a) _spatial correlation between predicates_ b) _temporal correlation between predicates_ c) _predicate-object correlation across the video frames_. We propose to model these correlations using the Vanilla Transformer [49]. Since the relations between objects are highly imbalanced, the relation representation becomes biased towards popular ones, and hence, to produce unbiased relation embeddings, we
propose to update correlation matrices as a weighted average of the current correlation matrix and the previous matrix, where the weight is determined by the decay factor. For each object \(o_{i}\) of predicted class \(c_{o_{i}}\) obtained from object detection (Section 3.2), the input to the transformer encoder is the set of features describing the relation with each object \(o_{j}\) detected in all the frames where \(o_{i}\) is detected. The input is constructed as follows:
\[r_{i,j}^{t}=concat(\mathbf{x}_{c_{o_{i}}},f_{u}(\mathbf{u}_{ij}^{t}+f_{box}( \mathbf{b}_{i}^{t},\mathbf{b}_{j}^{t})),f_{I}(t)) \tag{8}\]
where \(\mathbf{x}_{c_{o_{i}}}\in X_{out}^{(n)}\) is the feature representation of object \(o_{i}\) belonging to class \(c_{o_{i}}\in[1,\mathcal{C}_{o}]\), and \(u_{ij}^{t}\in\mathbb{R}^{256\times 7\times 7}\) is the feature map of the union box computed by RoIAlign [14]. \(f_{u}\) and \(f_{I}\) are FFNs based on non-linear projections, and \(f_{box}\) is the bounding-box-to-feature-map projection of [57]; both are configured to produce d-dimensional relation features. \(f_{I}\) serves as a positional embedding denoting the frame index. A single encoder input consists of both spatial and temporal relation features between object \(o_{i}\) and all other objects \(\{o_{1},o_{2},....,o_{j}\}\); specifically, it is constructed as \(R_{i}=\{r_{i,1}^{t_{1}},r_{i,2}^{t_{2}},....,r_{i,j}^{t_{j}}\}\), where \(t_{j}\) are the frame indices where \(o_{i}\) and \(o_{j}\) are detected simultaneously. The transformer decoder leverages masked self-attention, and its input is the set of object representations \(\{\mathbf{x}_{c_{o_{1}}},\mathbf{x}_{c_{o_{2}}},...,\mathbf{x}_{c_{o_{j}}}\}\) corresponding to the objects \(\{o_{1},o_{2},....,o_{j}\}\) detected in all the frames where \(o_{i}\) is detected. The input to the transformer encoder contains all the predicate (relation) features across the frames, and hence, with multi-head self-attention, it models both the spatial and the temporal correlation between predicates. Similarly, the cross-attention between the encoder and decoder models the predicate-object correlation. The predicate embeddings at the output of the transformer decoder are denoted as \(\hat{r}_{tem}^{k}=\hat{r}_{i,j}^{t}\,\forall k\in[1,N(t)],t\in[1,T]\). At the transformer decoder, we use a sliding window of size 10 for the predicate representation with related objects.
### Debiased Predicate Embedding
The relation embeddings at the output of the transformer decoder are biased by the fact that popular relations drive the learning of the attention weights, causing biased predicate embeddings. Hence, we update the cross-attention matrix with the previous matrix corresponding to each triplet \(\{o_{i},r_{i,j},o_{j}\}\), obtained by the attention between the transformer encoder and decoder across all the layers. Since the input to the transformer encoder consists of concatenated relation and object features, the cross-attention with object queries yields the predicate-object correlations. Some predicates are rare and hence more prone to bias; if we update their correlation using the previously observed correlation, the model will generate debiased embeddings. Let us denote the stored correlation matrix between every object pair for all relations as \(\mathcal{M}_{e-1}\) at the end of the previous epoch \(e-1\), and the attention matrix at the current epoch \(e\) as \(A_{e}\). During training, we update the attention matrix (denoted as \(\hat{A}_{e}\)) using the decay factor \(\eta\) as the training epochs progress, given as:
\[\hat{A}_{e}(o_{i},r_{i,j},o_{j})=\eta*A_{e}(o_{i},r_{i,j},o_{j})+\\ (1-\eta)*\mathcal{M}_{e-1}(o_{i},r_{i,j},o_{j}) \tag{9}\]
where \(\mathcal{M}_{0}=A_{0}\) at the end of the zeroth epoch. We then use the attention value from \(\hat{A}_{e}\) in place of the attention value calculated from \(QK^{T}\in A_{e}\). This prevents the attention weights from becoming biased towards popular predicates. Once the debiased weights are learned during training, we expect them to generate debiased embeddings during the inference phase; hence, the attention matrices are not modified at inference time.
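A minimal sketch of the decayed attention update of Eq. (9) is given below; how the blended scores are re-injected into the cross-attention is an implementation detail not fully specified here, so the function is illustrative.

```python
import torch

def debiased_attention(scores, memory, eta, training=True):
    """Eq. (9): blend current attention scores with the correlation matrix stored
    at the end of the previous epoch; at inference the scores are left unchanged."""
    if not training or memory is None:
        return scores
    return eta * scores + (1.0 - eta) * memory.detach()
```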
### Predicate Classification
Following previous works [31], we design the classifier framework to account for the noisy annotations in the SGG data.
Figure 1: FloCode: Each RGB frame is passed to object detector and object proposals are passed to temporal flow-aware object classification and relation classification (correlation debiasing and MLN for uncertainty attenuation) to generate unbiased scene graphs
Specifically, we model the classification head as a mixture-of-experts model named mixture logit networks (MLN) and a noise pattern estimation method utilizing the outputs of the MLN. The number of mixtures is \(\mathcal{K}\). Different from [31], we propose: 1) an uncertainty-aware mixture of attenuated loss 2) supervised contrastive learning, which incorporates label correlation to improve predicate classification.
**Uncertainty-Aware Mixture of Attenuated loss**: For a sample embedding \(\mathbf{z}_{i}\), the class-specific aleatoric (\(\sigma_{a}\)) and epistemic uncertainty (\(\sigma_{e}\)) are computed as:
\[\sigma_{e}^{2}=\sum_{p=1}^{\mathcal{C}_{r}}\sum_{k=1}^{\mathcal{K }}\pi_{i,p}^{k}||\mu_{i,p}^{k}-\sum_{j=1}^{\mathcal{K}}\pi_{i,p}^{j}\mu_{i,p}^{ j}||_{2}^{2} \tag{10}\] \[\sigma_{a}^{2}=\sum_{p=1}^{\mathcal{C}_{r}}\sum_{k=1}^{\mathcal{K }}\pi_{i,p}^{k}\Sigma_{i,p}^{k} \tag{11}\]
where the mean, variance, and mixture weights for the \(p^{th}\) predicate class are estimated as follows:
\[\mu_{i}^{k}=f_{\mu}^{k}(\mathbf{z}_{i}),\Sigma_{i}^{k}=\sigma(f_{\Sigma}^{k}( \mathbf{z}_{i})),\pi_{i}^{k}=\frac{e^{f_{\pi}^{k}(\mathbf{z}_{i})}}{\sum_{k=1 }^{\mathcal{K}}e^{f_{\pi}^{k}(\mathbf{z}_{i})}} \tag{12}\]
where \(f_{\mu},f_{\Sigma},f_{\pi}\) are the FFN projection functions and \(\sigma\) is the sigmoid non-linearity which ensures \(\Sigma_{i,p}^{k}\geq 0\) for the \(p^{th}\) predicate class. During training, \(\mathbf{z}_{i}=\hat{r}_{tem}^{i}\), the mixture of attenuated loss (\(\mathcal{L}_{MAL}\)) is given as:
\[\mathcal{L}_{MAL}=\frac{1}{N}\sum_{i=1}^{N}\sum_{p=1}^{\mathcal{C}_{r}}\sum_{ k=1}^{\mathcal{K}}\pi_{i,p}^{k}\frac{\mathcal{L}(\mu_{i,p}^{k},y_{r_{p}}^{i})}{ \Sigma_{i,p}^{k}} \tag{13}\]
where \(\mathcal{L}(\mu_{i,p}^{k},y_{r_{p}}^{i})\) is the sigmoidal cross-entropy loss, \(y_{r_{p}}^{i}\) is the ground-truth predicate class mapped to \(\mathbf{z}_{i}\), and \(\mu_{i,p}^{k}\) is the logit of label \(p\) in the \(k^{th}\) mixture. For a corrupted input, the model is more likely to make a false prediction; hence \(\Sigma_{i,p}^{k}\) will increase to reduce the overall loss for such an instance, which in turn prevents over-fitting to corrupted instances and makes the model more robust.
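A compact PyTorch sketch of the MLN head (Eq. 12) and the mixture of attenuated losses (Eq. 13) is shown below; layer sizes and names are hypothetical, and multi-label targets are assumed to be 0/1 vectors of shape (N, C).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MixtureLogitHead(nn.Module):
    """Per-mixture mean, variance and mixture weights of Eq. (12)."""
    def __init__(self, dim, num_classes, num_mixtures=4):
        super().__init__()
        self.K, self.C = num_mixtures, num_classes
        self.f_mu = nn.Linear(dim, num_mixtures * num_classes)
        self.f_sigma = nn.Linear(dim, num_mixtures * num_classes)
        self.f_pi = nn.Linear(dim, num_mixtures * num_classes)

    def forward(self, z):
        n = z.size(0)
        mu = self.f_mu(z).view(n, self.K, self.C)
        sigma = torch.sigmoid(self.f_sigma(z)).view(n, self.K, self.C)   # keeps Sigma >= 0
        pi = torch.softmax(self.f_pi(z).view(n, self.K, self.C), dim=1)  # weights over the K mixtures
        return mu, sigma, pi

def mixture_attenuated_loss(mu, sigma, pi, targets, eps=1e-6):
    """Eq. (13): per-mixture sigmoidal cross-entropy attenuated by 1/Sigma."""
    bce = F.binary_cross_entropy_with_logits(
        mu, targets.unsqueeze(1).expand_as(mu), reduction="none")
    return (pi * bce / (sigma + eps)).sum(dim=(1, 2)).mean()
```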
**Uncertainty-aware Supervised Contrastive Learning**: The MAL loss function classifies labels independently; this makes it difficult to capture correlations between co-occurring semantic labels. To address this limitation, we propose kernel-based multi-label contrastive loss, i.e., \(\mathcal{L}_{\text{KMLC}}\). The objective of this loss function is to pull together the representations of predicates sharing at least one class with the anchor representation \(\hat{r}_{tem}^{n}\) while pushing apart negative samples that do not share any classes. Let us consider the positive set \(\mathcal{A}(n)=\{m\in\{N\setminus n\}:\mathcal{Y}_{r}^{n}\cdot\mathcal{Y}_{r}^ {m}\neq 0\), where \(\cdot\) is a dot product\(\}\) contains samples that have at least one label in common with the anchor \(\hat{r}_{tem}^{n}\), while \(\mathcal{Y}_{r}(n,m)=\{y_{r_{p}}\in\mathcal{Y}_{r}:y_{r_{p}}^{m}=y_{r_{p}}^{n}=1\}\) represents the indices of shared labels between \(\hat{r}_{tem}^{n}\) and \(\hat{r}_{tem}^{m}\). The loss is formulated as:
\[\mathcal{L}_{\text{KMLC}}=\frac{1}{N}\sum_{n=1}^{n=N}\frac{-1}{| \mathcal{A}(n)|}\sum_{m\in\mathcal{A}(n)}J(n,m)\\ \sum_{y_{r_{p}}\in\mathcal{Y}_{r}(n,m)}\left(\text{log}\frac{ \text{exp}(\rho_{y_{r_{p}}}^{n,m}/\tau)}{\sum_{i\in N\setminus n}\text{exp}( \rho_{y_{r_{p}}}^{n,i}/\tau)}\right) \tag{14}\]
where kernel similarity is given as:
\[\rho_{y_{r_{p}}}^{n,i}=\left(\prod_{k=1}^{\mathcal{K}}\left(\frac {(\Sigma_{n,p}^{k})^{2}+(\Sigma_{i,p}^{k})^{2}}{2(\Sigma_{n,p}^{k})(\Sigma_{i,p}^{k})}\right)^{-\frac{1}{2}}\right)\\ \text{exp}\left(-\frac{1}{4}\sum_{k=1}^{\mathcal{K}}\frac{(\mu_{ n,p}^{k}-\mu_{i,p}^{k})^{2}}{(\Sigma_{n,p}^{k})^{2}+(\Sigma_{i,p}^{k})^{2}}\right) \tag{15}\]
**EMA Teacher**: During training, we adopt the EMA weight update [2, 45, 20, 47] for transformers in Section 3.3. Let's say \(\phi_{T},\theta_{T}\) are the weights of transformers for teacher and student, respectively. The weight update is then given as:
\[\phi_{T,e}=\alpha*\phi_{T,e-1}+(1-\alpha)*\theta_{T,e} \tag{16}\]
where \(e\) is the training epoch. The EMA teacher is effectively an ensemble of student models at different training steps, which is a widely used learning strategy in semi-supervised settings [10, 15, 41, 45]. With the combined effect of all the student models, the teacher model is an unbiased estimator of the predicate embeddings, resulting in improved performance.
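The EMA update of Eq. (16) amounts to a few lines; the sketch below assumes the teacher and student share the same architecture.

```python
import torch

@torch.no_grad()
def ema_update(teacher, student, alpha=0.999):
    """Eq. (16): exponential moving average of the student weights into the teacher."""
    for p_t, p_s in zip(teacher.parameters(), student.parameters()):
        p_t.mul_(alpha).add_(p_s, alpha=1.0 - alpha)
```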
### Training and Testing
**Training**: With the flow-based object predictor (Section 3.2) and correlation-aware predicate embeddings, the debiased predicate embeddings (Sections 3.3, 3.4) are generated. The entire framework is trained end-to-end by minimizing the loss:
\[\mathcal{L}=\mathcal{L}_{o}+\mathcal{L}_{\text{MAL}}+\mathcal{L}_{\text{KMLC} }-\lambda_{1}\sigma_{e}+\lambda_{2}\sigma_{a} \tag{17}\]
**Testing**: During testing, we utilize the EMA teacher \(\phi_{T}\) to generate the predicate embeddings \(\hat{r}_{tem}^{i}\). These predicate embeddings are then passed to the MLN, which outputs the predicate confidence scores \(\hat{y}_{r_{p}}^{i}\). The predicate confidence scores from the \(\mathcal{K}\) mixtures are given as:
\[\hat{y}_{r_{p}}^{i}=\sum_{k=1}^{\mathcal{K}}\pi_{i,p}^{k}\frac{\mu_{i,p}^{k}}{ \Sigma_{i,p}^{k}} \tag{18}\]
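At inference, Eq. (18) reduces to a weighted sum over the mixtures, sketched below with the same hypothetical tensor shapes as the MLN head above.

```python
def predicate_confidence(mu, sigma, pi, eps=1e-6):
    """Eq. (18): confidence per predicate class as a Sigma-weighted mixture of logits."""
    return (pi * mu / (sigma + eps)).sum(dim=1)   # (N, C)
```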
## 4 Experiments
### Dataset and Implementation
**Dataset**: Following previous works [6, 31], we also evaluated our method on the most widely used benchmark, Action Genome [16]. Action Genome is the largest benchmark
for video SGG; it is built on top of Charades[39]. It contains 476,229 bounding boxes for 35 object classes (without person) and 1,715,568 instances of 26 predicate classes annotated for 234,253 frames. For all experiments, we use the same training and test split following [6, 26, 31].
**Metrics and Evaluation Setup**: We evaluated the performance of FloCoDe with popular metrics, namely, recall@K (i.e., R@K) and mean-recall@K (i.e., mR@K), for \(K=[10,20,50]\). R@K measures the ratio of correct instances among the top-K predicted instances with the highest confidence, but this is biased towards frequent predicate classes [43], whereas mR@K averages out the R@K over all relationships. Hence, mR@K is a more reliable metric for balanced evaluation across predicates [43].
**Tasks**: Following previous works [6, 16, 22, 46], we also evaluated our method on three different experimental tasks:
1) **Predicate classification** (_PREDCLS_): predict the predicate class of object pairs, given the ground-truth bounding boxes and labels of objects. 2) **Scene graph classification** (_SGCLS_): predict both the predicate labels and the category labels of objects, given the bounding boxes of objects. 3) **Scene graph detection** (_SGDET_): simultaneously detect the objects appearing in a frame and the predicate labels of each object pair in the frame. Following previous works, we also evaluated our method under two setups: a) **With Constraint** and b) **No Constraints**. The latter allows each object pair to have more than one predicate simultaneously, while the former restricts it to only one predicate.
**Implementation details**: Following previous works [6, 16, 22, 31, 46], we adopt Faster R-CNN [37] with a ResNet-101 [13] backbone as the object detector. We train the object detector on the training set of Action Genome [16]; this results in 24.6 mAP at 0.5 IoU with COCO metrics. For a fair comparison, we used this detector across all the baselines. Following previous works [6, 16, 22, 31, 46], per-class non-maximal suppression at 0.4 IoU (intersection over union) is applied to reduce the region proposals provided by the RPN. The parameters of the object detector (excluding the object classifier) are fixed when training the scene graph generation models. For correlation-aware predicate embedding, it is required to match the object pairs across the frames. If there are multiple objects of the same category, we distinguish different pairs by using the IoU between the two objects across frames to match the subject-object pair: we calculate the IoU between the bounding box of the object in the previous frame and the object of the same category in the next frame, and if the IoU is higher than 0.8, we consider them to be the same object. If there are multiple candidates, we choose the one with the highest IoU. We use an AdamW optimizer [29] with a batch size of 1 and an initial learning rate of \(2e^{-5}\). The number of mixture components \(\mathcal{K}\) is set to \(4\) for SGCLS and \(6\) for PREDCLS and SGDET. The self-attention and cross-attention layers in our framework have \(8\) heads with \(d=1536\) and dropout \(=0.1\). We set the regularizer hyper-parameters to \(\lambda_{1}=1,\lambda_{2}=1\). For debiased predicate embedding, we set an initial learning rate of \(1e^{-5}\) and reduce it with a patience of \(3\). For the EMA teacher update, we use \(\alpha=0.999\). All experiments are carried out on a single NVIDIA RTX-3090.
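The IoU-based matching of subject-object pairs across adjacent frames can be sketched as follows (NumPy, hypothetical helper names), using the 0.8 threshold mentioned above.

```python
import numpy as np

def box_iou(a, b):
    """IoU of two boxes in [x1, y1, x2, y2] format."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / (union + 1e-9)

def match_same_object(prev_boxes, next_boxes, iou_thr=0.8):
    """For each same-category box in the previous frame, accept the next-frame box
    with the highest IoU, but only if that IoU exceeds iou_thr."""
    matches = {}
    for i, pb in enumerate(prev_boxes):
        ious = [box_iou(pb, nb) for nb in next_boxes]
        if ious and max(ious) > iou_thr:
            matches[i] = int(np.argmax(ious))
    return matches
```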
### Comparison with state-of-the-art
We compared our method, FloCoDe, with several state-of-the-art methods for dynamic SGG, namely TEMPURA [31], STTran [6], TRACE [46], STTran-TPI [50], APT [26], and ISGG [21]. Additionally, we compared our method with ReLDN [58], which is a static method. Performance comparisons in terms of mR@K and R@K for K = [10, 20, 50] are reported in Tables 1, 2, and 3. These tables contain comparisons under two experimental setups: a) **With Constraint** and b) **No Constraints**. For the _PREDCLS_ and _SGCLS_ tasks, we present results in terms of mR@K and R@K in Tables 2 and 3, respectively. Table 1 compares results for the _SGDET_ task with the same metrics and under both experimental setups. Wherever available, we utilise the source code of the respective SOTA methods to obtain mR@K and R@K values; for methods with no code, we take the values reported in [31]. From the tables, it can be observed that our method consistently outperforms the other methods across all tasks and for both experimental setups. Specifically, in comparison to the best baselines, we observe improvements of 4.1% on _SGDET_-mR@10, 3.4% on _SGCLS_-mR@10, and 1.9% on _PREDCLS_-mR@10 under the **With Constraint** setup. Under the **No Constraints** setup, we observe improvements of 3.9% on _SGDET_-mR@10, 1.4% on _SGCLS_-mR@10, and 1.7% on _PREDCLS_-mR@10. This demonstrates the capability of FloCoDe to generate more unbiased scene graphs for videos by incorporating dynamic fluctuations and long-tailed relations. We further verify this in Figure 3 for **With Constraint** and **No Constraints**: there, we compare our method on HEAD, BODY, and TAIL classes using mR@10 values, splitting the classes into HEAD, BODY, and TAIL with the same definition as in [31]. FloCoDe improves the performance across all the classes, but the improvement for TAIL classes is larger, confirming the unbiased predictions. Per-class performance is shown in Fig. 4 in comparison with _STTran_ and _TRACE_, showing the improvement at the class level. Additionally, our method outperforms existing methods in terms of R@K values, as shown in Table 3, showing overall improvements. This demonstrates that our method has better generalisation, since it performs better on both mR@K (long-tail performance) and R@K (overall). Qualitative visualisations are illustrated in Fig. 2.
| Method | mR@10 | mR@20 | mR@50 | R@10 | R@20 | R@50 | mR@10 | mR@20 | mR@50 | R@10 | R@20 | R@50 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| RelDN | 3.3 | 3.3 | 3.3 | 9.1 | 9.1 | 9.1 | 7.5 | 18.8 | 33.7 | 13.6 | 23.0 | 36.6 |
| HCRD supervised | - | 8.3 | 9.1 | - | 27.9 | 30.4 | - | - | - | - | - | - |
| TRACE | 8.2 | 8.2 | 8.2 | 13.9 | 14.5 | 14.5 | 22.8 | 31.3 | 41.8 | 26.5 | 35.6 | 45.3 |
| ISGG | - | 19.7 | 22.9 | - | 29.2 | 35.3 | - | - | - | - | - | - |
| STTran | 16.6 | 20.8 | 22.2 | 25.2 | 34.1 | 37.0 | 20.9 | 29.7 | 39.2 | 24.6 | 36.2 | 48.8 |
| STTran-TPI | 15.6 | 20.2 | 21.8 | 26.2 | 34.6 | 37.4 | - | - | - | - | - | - |
| APT | - | - | - | 26.3 | 36.1 | 38.3 | - | - | - | 25.7 | 37.9 | 50.1 |
| TEMPURA | 18.5 | 22.6 | 23.7 | 28.1 | 33.4 | 34.9 | 24.7 | 33.9 | 43.7 | 29.8 | 38.1 | 46.4 |
| FloCoDe | **22.6** | **24.2** | **27.9** | **31.5** | **38.4** | **42.4** | **28.6** | **35.4** | **47.2** | **32.6** | **43.9** | **51.6** |

Table 1: Comparative results for the SGDET task on AG [16] in terms of mean-Recall@K and Recall@K; the first six metric columns are under the **With Constraint** setup and the last six under **No Constraints**. Best results are in bold.
| Method | R@10 | R@20 | R@50 | R@10 | R@20 | R@50 | R@10 | R@20 | R@50 | R@10 | R@20 | R@50 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| RelDN | 20.3 | 20.3 | 20.3 | 11.0 | 11.0 | 11.0 | 44.2 | 75.4 | 89.2 | 25.0 | 41.9 | 47.9 |
| TRACE | 27.5 | 27.5 | 27.5 | 14.8 | 14.8 | 14.8 | 72.6 | 91.6 | 96.4 | 37.1 | 46.7 | 50.5 |
| STTran | 68.6 | 71.8 | 71.8 | 46.4 | 47.5 | 47.5 | 77.9 | 94.2 | 99.1 | 54.0 | 63.7 | 66.4 |
| STTran-TPI | 69.7 | 72.6 | 72.6 | 47.2 | 48.3 | 48.3 | - | - | - | - | - | - |
| APT | 69.4 | 73.8 | 73.8 | 47.2 | 48.9 | 48.9 | 78.5 | 95.1 | 99.2 | 55.1 | 65.1 | 68.7 |
| TEMPURA | 68.8 | 71.5 | 71.5 | 47.2 | 48.3 | 48.3 | 80.4 | 94.2 | 99.4 | 56.3 | 64.7 | 67.9 |
| FloCoDe | **70.1** | **74.2** | **74.2** | **48.4** | **51.2** | **51.2** | **82.8** | **97.2** | **99.9** | **57.4** | **66.2** | **68.8** |

Table 3: Comparative results for the PREDCLS and SGCLS tasks on AG [16] in terms of Recall@K; within each setup, the first three columns are PredCLS and the next three are SGCLS, with the first six metric columns under the **With Constraint** setup and the last six under **No Constraints**. Best results are in bold.
Table 2: Comparative results for the PREDCLS and SGCLS tasks on AG [16] in terms of mean-Recall@K; best results are in bold.
Figure 2: **Qualitative Comparison** with TEMPURA[31] for both **With Constraint** and **No Constraints** setup. From left to right: input video frames, ground truth graphs generated by FloCoDe, graphs generated by TEMPURA[31]. Incorrect object and predicate predictions are shown in green and red, respectively.
### Ablation Studies
We have conducted extensive ablation studies on the SGCLS and SGDET tasks. Specifically, we studied the impact of _KMLC_ (uncertainty-aware contrastive learning), _Debiasing_ (correlation-aware debiasing), _TFoD_ (flow-aware temporal consistency), _Regularizer_ (aleatoric and epistemic regularizer), and _EMA Teacher_. When all these components are removed, FloCoDe reduces to the baseline STTran[6], where the object proposals and predicate embeddings are fed to the FFN layers before finally predicting the predicate class using a classification layer. The results for these ablation studies are presented in Table 4.

**Uncertainty Attenuation and Debiasing**: We first discuss the impact of uncertainty-aware contrastive learning and correlation-aware debiasing. In the first case, we remove the loss \(\mathcal{L}_{\text{KMLC}}\) to study the improvement on top of the MLN-based loss; with MLN mixtures, a similar improvement has already been demonstrated in [31]. In the second case, we remove the correlation-aware debiasing during training and train end-to-end without any debiasing. The results for both are presented in rows 1 and 2 of Table 4. Comparing the resulting models with the full FloCoDe shows a significant gap in mR@10, confirming the contribution of each component to unbiased SGG. This also shows that both components address the noise associated with TAIL classes: contrastive learning deals with annotation noise, while debiasing focuses on generating unbiased predicate embeddings.

**Temporally Consistent Object Detection**: The effect of flow-aware detection of temporally consistent objects is shown in row 3 of Table 4. Comparing against the full FloCoDe, we can see that removing _TFoD_ results in a significant drop in performance. This highlights the fact that incorrect object detection is the major bottleneck for any SGG method. The _PREDCLS_ task uses only the ground-truth boxes and labels, and hence its mR@K and R@K values are much higher than those of the other tasks.

**Uncertainty Regularizer and EMA Teacher**: Comparing the ablations of these two components, in rows 4 and 5 respectively, confirms the importance of the regularizers in further reducing the noise associated with the TAIL classes. The EMA teacher produces more balanced predicate embeddings, providing an improvement simply by predicting unbiased embeddings for each class.

**Number of Mixtures** \(\mathcal{K}\): The performance of FloCoDe with varying numbers of mixtures in the MLN is shown in Table 5.
\begin{table}
\begin{tabular}{c c c c c c c c c c c c c} \hline \hline \multirow{2}{*}{\begin{tabular}{c} Uncertainty-aware \\ Contrastive Learning \\ \end{tabular} } & \multirow{2}{*}{
\begin{tabular}{c} Correlation-aware \\ Debiasing \\ \end{tabular} } & \multicolumn{3}{c}{Flow-aware} & \multicolumn{6}{c}{With Constraint} & \multicolumn{3}{c}{No Constraints} \\ \cline{5-16} & & & & \multicolumn{3}{c}{SGCLS} & \multicolumn{3}{c}{SGDET} & \multicolumn{3}{c}{SGCLS} & \multicolumn{3}{c}{SGDET} \\ \cline{5-16} & & & & & mR@10 & mR@20 & mR@10 & mR@20 & mR@10 & mR@20 & mR@10 & mR@20 \\ \hline \hline - & - & - & - & - & - & 27.2 & 28.0 & 16.5 & 20.8 & 40.7 & 50.1 & 20.9 & 29.7 \\ \hline ✓ & - & ✓ & ✓ & ✓ & ✓ & 34.1 & 33.8 & 19.6 & 22.1 & 46.9 & 61.1 & 26.6 & 32.7 \\ - & ✓ & ✓ & ✓ & ✓ & 33.6 & 34.3 & 19.4 & 21.8 & 46.2 & 60.6 & 25.8 & 32.5 \\ ✓ & ✓ & - & ✓ & ✓ & 32.2 & 33.4 & 18.1 & 19.8 & 45.9 & 59.1 & 21.8 & 31.6 \\ ✓ & ✓ & ✓ & - & ✓ & 35.8 & 36.6 & 21.2 & 22.7 & 48.3 & 61.4 & 27.5 & 34.4 \\ ✓ & ✓ & ✓ & ✓ & - & 36.7 & 38.8 & 22.1 & 23.8 & 49.2 & 62.9 & 28.3 & 35.2 \\ ✓ & ✓ & ✓ & ✓ & ✓ & **37.4** & **39.2** & **22.6** & **24.2** & **49.7** & **63.8** & **28.6** & **35.4** \\ \hline \hline \end{tabular}
\end{table}
Table 4: **Ablation Studies**: Importance of _KMLC_, _Debiasing_, _TFoD_, _Regularizer_ & _EMA Teacher_ for SGCLS and SGDET.
Figure 4: Comparative per-class performance for the PREDCLS task in R@10 for the “with constraint” setup
Figure 3: Comparison of mR@10 for the HEAD, BODY and TAIL classes for the “with constraint” (top) and “no constraints” (bottom) setups
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline Task\(\mathcal{K}\) & 1 & 2 & 4 & 6 & 8 \\ \hline PREDCLS & 39.8 & 41.2 & 43.4 & **44.8** & 44.2 \\ SGCLS & 30.2 & 35.5 & **37.4** & 36.2 & 35.8 \\ SGDET & 16.1 & 18.1 & 21.9 & **22.6** & 22.1 \\ \hline \hline \end{tabular}
\end{table}
Table 5: Results (in mR@10) with a varying number of mixtures \(\mathcal{K}\) for the **With Constraint** setup
A number of mixtures between 4 and 6 is optimal.
## 5 Conclusion
We demonstrated correlation debiasing (label correlation and debiased predicate embeddings), and the resulting performance of FloCoDe with these debiasing mechanisms shows that dynamic SGG benefits more from a focus on long-tailed classification than from complex architectures for temporal sequence processing.
|
2304.02199
|
Knowledge Combination to Learn Rotated Detection Without Rotated
Annotation
|
Rotated bounding boxes drastically reduce output ambiguity of elongated
objects, making it superior to axis-aligned bounding boxes. Despite the
effectiveness, rotated detectors are not widely employed. Annotating rotated
bounding boxes is such a laborious process that they are not provided in many
detection datasets where axis-aligned annotations are used instead. In this
paper, we propose a framework that allows the model to predict precise rotated
boxes only requiring cheaper axis-aligned annotation of the target dataset 1.
To achieve this, we leverage the fact that neural networks are capable of
learning richer representation of the target domain than what is utilized by
the task. The under-utilized representation can be exploited to address a more
detailed task. Our framework combines task knowledge of an out-of-domain source
dataset with stronger annotation and domain knowledge of the target dataset
with weaker annotation. A novel assignment process and projection loss are used
to enable the co-training on the source and target datasets. As a result, the
model is able to solve the more detailed task in the target domain, without
additional computation overhead during inference. We extensively evaluate the
method on various target datasets including fresh-produce dataset, HRSC2016 and
SSDD. Results show that the proposed method consistently performs on par with
the fully supervised approach.
|
Tianyu Zhu, Bryce Ferenczi, Pulak Purkait, Tom Drummond, Hamid Rezatofighi, Anton van den Hengel
|
2023-04-05T03:07:36Z
|
http://arxiv.org/abs/2304.02199v2
|
# Knowledge Combination to Learn Rotated Detection Without Rotated Annotation
###### Abstract
Rotated bounding boxes drastically reduce output ambiguity of elongated objects, making it superior to axis-aligned bounding boxes. Despite the effectiveness, rotated detectors are not widely employed. Annotating rotated bounding boxes is such a laborious process that they are not provided in many detection datasets where axis-aligned annotations are used instead. In this paper, we propose a framework that allows the model to predict precise rotated boxes only requiring cheaper axis-aligned annotation of the target dataset 1.
Footnote 1: Code is available at: [https://github.com/alanzry/KCR-Official](https://github.com/alanzry/KCR-Official)
To achieve this, we leverage the fact that neural networks are capable of learning richer representation of the target domain than what is utilized by the task. The under-utilized representation can be exploited to address a more detailed task. Our framework combines task knowledge of an out-of-domain source dataset with stronger annotation and domain knowledge of the target dataset with weaker annotation. A novel assignment process and projection loss are used to enable the co-training on the source and target datasets. As a result, the model is able to solve the more detailed task in the target domain, without additional computation overhead during inference. We extensively evaluate the method on various target datasets including fresh-produce dataset, HRSC2016 and SSDD. Results show that the proposed method consistently performs on par with the fully supervised approach.
**Acknowledgement** This paper is inspired by a computer vision project conducted at Amazon. We would like to express our sincere gratitude to the following individuals for their contributions to this research project: Gil Avraham, Hisham Husain, Chenchen Xu, Ravi Garg, Shatanjay Khandelwal and Philip Schulz, who all work at Amazon. Their support, insights, and feedback were invaluable throughout the research process, and we are truly grateful for their help.
## 1 Introduction
Rotated detectors introduced in recent works [17, 20, 32] have received attention due to their outstanding performance for top-view images [15, 33, 34]. They reduce the output ambiguity of elongated objects for downstream tasks, making them superior to axis-aligned detectors in dense scenes with severe occlusions [18]. However, rotated annotation is more expensive compared to axis-aligned annotation. Furthermore, popular 2D annotation tools such as Sagemaker Groundtruth 2 and VGG app 3 do not support rotated bounding box annotations. As a result, many popular detection datasets only have axis-aligned annotations [3, 4, 11]. These problems reduce the potential scope of application of rotated detectors. In this work, we introduce **K**nowledge **C**ombination to learn **R**otated object detection, a training scheme that only requires cheaper axis-aligned annotation for the target dataset in order to predict rotated boxes.
Footnote 2: [https://aws.amazon.com/sagemaker/data-labeling/](https://aws.amazon.com/sagemaker/data-labeling/)
Footnote 3: [https://www.robots.ox.ac.uk/vgg/software/via/](https://www.robots.ox.ac.uk/vgg/software/via/)
Neural networks encode data into a latent space, which is then decoded to optimize the given task. The latent embedding is an abstract representation of the data, containing much richer information than the output [29]. Early works in deep learning show that the model implicitly learns to detect image features such as edges and corners [10, 12], which
Figure 1: KCR combines the task knowledge of a source dataset with stronger rotated annotation and the domain knowledge of the target dataset with weaker axis-aligned annotation, which enables the model to predict rotated detection on the target domain.
can be used for more detailed tasks if decoded properly. We believe that decoding to a more precise task on the target domain can be learnt by co-optimizing with a strongly labelled source dataset. We design a framework that combines task knowledge of rotated detection from a source dataset with the domain knowledge of a disjoint class of objects in the target dataset that has only axis-aligned annotation, as shown in Figure 1. This approach combines the advantages of both weakly-supervised learning and transfer learning.
We follow a design principle that the framework should maximize the target-domain knowledge learnt by the model while minimizing the negative impact caused by weaker labels. This is achieved by co-training on the source and target datasets with projection losses and a novel assignment process. The design choices are validated through ablation studies. We conduct extensive experiments to demonstrate that our framework is robust to a large domain gap between source and target datasets. Therefore, box orientation can practically be learnt for free with KCR, due to the availability of free public source datasets such as DOTA [32] with rotated annotations. We show the efficacy of this method on a fresh-produce dataset with a high density of objects and severe occlusions. The performance (AP50) gap between the proposed method, learning from weak axis-aligned boxes, and the fully-supervised model, learning from strong rotated annotation, reduces to only \(3.2\%\) for the challenging cucumber dataset. We apply the same framework to the HRSC2016 [16] and SSDD [27] datasets to show that our method consistently performs on par with fully supervised models. The performance gap reduces to \(1.0\%\) for SSDD. We believe our approach can greatly increase the usage and impact of rotated object detectors. The source code will be publicly available for the community to save future annotation cost.
In summary, our main contributions are as follows:
1. We introduce a framework that combines task knowledge of a strongly labelled source dataset and domain knowledge of a weakly labelled target dataset.
2. We apply this method in 2D rotated detection task, enabling the model to predict rotated bounding box with only axis-aligned annotation and verify the generality of the method with several datasets.
3. We demonstrate robustness of the framework to various domain gaps between source and target datasets. Hence, box orientation can be learnt with no additional annotation cost in practical applications.
## 2 Related Work
**Rotated Detection Task.** Rotated object detection requires the model to predict minimum-area rectangles with five degrees of freedom, namely rotated bounding boxes, enclosing objects of interest [32]. In axis-aligned object detection, the output rectangles have four degrees of freedom and are aligned with the image axes [13]. Rotated boxes occupy a much smaller area when localizing diagonally positioned elongated objects, as shown in Figure 1. Rotated detection is strictly superior to axis-aligned detection as there is less background within the box, and the orientation can potentially convey object pose information. However, there are only a handful of datasets with rotated annotations [16, 27, 32] compared to the large number of readily available large-scale axis-aligned datasets [13, 19, 4, 11]. A potential contributor to this phenomenon is the ease of annotating axis-aligned boxes by simple click-and-drag with current labelling tools. Rotated boxes require much more effort to tightly enclose the objects with an extra degree of freedom. Popular annotation tools such as AWS Sagemaker and the VGG app do not support rotated boxes. In order to acquire tighter rotated annotations, users must pay for instance segmentation, which is significantly more expensive and unnecessary for the final task. Therefore, we propose this work to address the shortcomings of the rotated object detection pipeline by making orientation free to learn.
**Weakly-Supervised Learning.** Weakly supervised learning sits between fully supervised learning and unsupervised learning, in the sense that only weak labels are available. The labels are weak either because they are incomplete, inexact or noisy [1, 6, 23, 30, 31]. In computer vision, popular weakly-supervised learning tasks include object detection with only image-level annotation [5, 35] and instance segmentation with only box annotation [21, 2]. Due to the difficulty of pixel-wise prediction, weakly-supervised instance segmentation still falls significantly behind fully-supervised models [28] on novel objects. In this paper, we focus on learning rotated detection, which requires five parameters, with only axis-aligned annotation during training, which provides four parameters. This one-parameter difference makes our approach weakly supervised. To the best of our knowledge, this problem has only been attempted for specific categories of objects [8] and has not been approached generally. For elongated objects, solving this problem is more appropriate than solving weakly-supervised instance segmentation directly.
**Transfer Learning.** Computer vision models are frequently initialised with backbones [7, 14, 22] pretrained on ImageNet or COCO [13, 26], which is a basic form of transfer learning and a common practice. Transfer learning is highly effective when the target dataset has a small sample size, as in medical imaging [24]. Although conventional transfer learning reduces the number of data samples required for a specific task, strong annotation of the target domain is still required for the model to learn the detailed task. In this paper, we utilize a co-training strategy to transfer the ability to make more detailed predictions from the source to the target dataset with weaker annotations.
## 3 KCR
The goal of our work is to learn to predict rotated bounding boxes on a _target_ dataset for which we only have axis-aligned bounding boxes in the training examples. We develop a co-training scheme that utilizes an out-of-domain but strongly labelled _source_ dataset to learn accurate rotations of elongated objects. An example of a potential source and target dataset pair is a satellite imagery dataset, DOTA [32], and a fresh-produce dataset, as shown in Figure 1. In this section, we first briefly describe the workflow of the detector, followed by our training scheme which enables weakly-supervised learning and knowledge transfer.
### Rotated Detection Overview
In this work, we employ the same architectural choices as [33] and the flow is briefly demonstrated in Figure 2. The forward propagation comes in two stages: the oriented RPN followed by the oriented R-CNN, where both stages contain a classification head and a regression head. The RPN takes an image and generates \(N\) oriented region proposals \(\left\{R_{i}\right\}_{i=1}^{N}\) which each take the form \(R_{i}=\left(x_{i},y_{i},w_{i},h_{i},\alpha,\beta,p_{i}\right)\), where \(\left(x_{i},y_{i}\right)\) denotes the center, \(w_{i}\) and \(h_{i}\) are the width and height of the tightest axis-aligned external box, \(\alpha\) and \(\beta\) are the offsets relative to the midpoints of the top and right sides of the external rectangle, and \(p_{i}\) is an object score. The proposed rotated region is then cropped by rotated RoI alignment [33] in feature space. The proposal is then fed to the Oriented R-CNN, which is a CNN followed by another classification head and a bounding box regression head rectifying the spatial location. We denote the output of the second stage by \(\left\{R_{i}^{*}\right\}_{i=1}^{N}\) with \(R_{i}^{*}=\left(x_{i}^{*},y_{i}^{*},w_{i}^{*},h_{i}^{*},\theta_{i}^{*},p_{i}^{*},c_{i}^{*}\right)\), where \(\left(x_{i}^{*},y_{i}^{*}\right)\) denotes the center, \(w_{i}^{*}\) and \(h_{i}^{*}\) are the width and height and \(\theta_{i}^{*}\) the rotation angle of the final predicted rotated box. \(p_{i}^{*}\) is the second-stage object score and \(c_{i}^{*}\) represents the classification score.
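To make the midpoint-offset representation concrete, the sketch below decodes a first-stage proposal \((x,y,w,h,\alpha,\beta)\) into four vertices, assuming the oriented region is centrally symmetric about the centre of its external box; the decoding in [33] additionally rectifies the resulting parallelogram to a rectangle, a step omitted here. The function name is illustrative and not taken from the released code.

```python
import numpy as np

def decode_midpoint_offsets(x, y, w, h, alpha, beta):
    """Sketch: recover four vertices of an oriented region from the
    midpoint-offset representation (x, y, w, h, alpha, beta), where
    (x, y, w, h) is the tightest axis-aligned external box and alpha,
    beta offset the midpoints of its top and right sides."""
    center = np.array([x, y])
    top_mid = np.array([x + alpha, y - h / 2.0])    # midpoint of the top side
    right_mid = np.array([x + w / 2.0, y + beta])   # midpoint of the right side
    # For a centrally symmetric quadrilateral, the vertex shared by the top
    # and right sides is top_mid + right_mid - center; the others follow by
    # reflection through the side midpoints and the centre.
    v2 = top_mid + right_mid - center
    v1 = 2.0 * top_mid - v2
    v3 = 2.0 * right_mid - v2
    v4 = 2.0 * center - v2
    return np.stack([v1, v2, v3, v4])

# With zero offsets the decoding reduces to the axis-aligned box corners.
print(decode_midpoint_offsets(50.0, 50.0, 40.0, 20.0, 0.0, 0.0))
```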
### Learning Rotated Region Proposal
We develop a co-training scheme that utilizes both source and target datasets. We denote by \(\left\{B_{j}^{s}\right\}_{j=1}^{m}\) the \(m\) rotated bounding boxes of a source image, where \(B_{j}^{s}=\left(x_{j}^{s},y_{j}^{s},w_{j}^{s},h_{j}^{s},\theta_{j}^{s}\right)\) is a rotated box. For the target dataset, we have \(n\) axis-aligned boxes \(\left\{B_{j}^{t}\right\}_{j=1}^{n}\) on a target image, where \(B_{j}^{t}=\left(x_{j}^{t},y_{j}^{t},w_{j}^{t},h_{j}^{t}\right)\) is an axis-aligned box.
In the first stage, a set of \(N\) oriented regions are proposed where each proposal \(R_{i}=\left(x_{i},y_{i},w_{i},h_{i},\alpha,\beta,p_{i}\right)\) is assigned a ground truth label \(B_{\sigma(i)}\) based on intersection over union (IoU) matching, where the assignment \(\sigma(i)\) and matching score \(\tau(i)\) are
\[\sigma(i) =\operatorname*{arg\,max}_{j\in\{1,\dots,m\}}iou((x_{i},y_{i},w_ {i},h_{i}),P(B_{j})), \tag{1}\] \[\tau(i) =\operatorname*{max}_{j\in\{1,\dots,m\}}iou((x_{i},y_{i},w_{i},h _{i}),P(B_{j})). \tag{2}\]
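A minimal sketch of this matching step is given below: after converting each centre-format external box \((x_i,y_i,w_i,h_i)\) to corner format, we compute the IoU against every projected ground-truth box and keep the argmax and its value, as in Eqs. (1)-(2). Helper names are ours.

```python
import numpy as np

def aabb_iou(box, gt_boxes):
    """IoU of one axis-aligned box against an array of axis-aligned boxes,
    all in (x_min, y_min, x_max, y_max) format."""
    x1 = np.maximum(box[0], gt_boxes[:, 0])
    y1 = np.maximum(box[1], gt_boxes[:, 1])
    x2 = np.minimum(box[2], gt_boxes[:, 2])
    y2 = np.minimum(box[3], gt_boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area_p = (box[2] - box[0]) * (box[3] - box[1])
    area_g = (gt_boxes[:, 2] - gt_boxes[:, 0]) * (gt_boxes[:, 3] - gt_boxes[:, 1])
    return inter / (area_p + area_g - inter)

def assign_first_stage(external_boxes, projected_gt):
    """sigma(i) and tau(i) of Eqs. (1)-(2) for every proposal's external box."""
    sigma, tau = [], []
    gt = np.asarray(projected_gt, dtype=float)
    for box in external_boxes:
        ious = aabb_iou(np.asarray(box, dtype=float), gt)
        sigma.append(int(ious.argmax()))   # index of the best-matching ground truth
        tau.append(float(ious.max()))      # its matching score
    return sigma, tau
```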
Since in the first stage \(\left(x_{i},y_{i},w_{i},h_{i}\right)\) represents the tightest external axis-aligned box rather than the rotated region itself, we need a transformation function \(P\) to project the rotated ground truth of the source dataset to an axis-aligned box. It is important to notice that \(P(B_{j})\) is equal to or larger than the canonical axis-aligned box of the object. Therefore, for the axis-aligned target dataset, we need to enlarge \(B_{j}^{t}\). Here we
Figure 2: Overall framework of KCR, which learns the knowledge of rotated detection from source dataset combined with domain knowledge of axis-aligned target dataset to infer rotated bounding boxes of target objects. The rotated detector [33] takes in an image encoded by a CNN and generates _first-stage_ proposals \(\left(x,y,w,h,\alpha,\beta,p\right)\) and _second-stage_ refinements \(\left(x^{*},y^{*},w^{*},h^{*},\theta^{*},p^{*}\right)\). We use a function \(P\) to either project or enlarge the box representation during the assignment processes and loss functions.
formally define the transformation function
\[P(B)=\begin{cases}(x_{min},y_{min},x_{max},y_{max})\\ \quad\text{ for }B=(x,y,w,h,\theta),\\ (x_{min}-\gamma w,y_{min}-\gamma h,x_{max}+\gamma w,\\ y_{max}+\gamma h)\text{ for }B=(x,y,w,h),\end{cases} \tag{3}\]
where \(\gamma\) is simply an enlargement factor that we can tune. The loss for this RPN is defined as follows
\[L_{\mathcal{S}} =\frac{1}{N}\sum_{i=1}^{N}-\mathbf{1}_{\tau(i)\geq 0.5}\log(p_{i})+l_{1}(R_{i},B_{\sigma(i)}^{s}) \tag{4}\] \[L_{\mathcal{T}} =\frac{1}{N}\sum_{i=1}^{N}-\mathbf{1}_{\tau(i)\geq 0.5}\log(p_{i})+l_{1}((x_{i},y_{i},w_{i},h_{i}),P(B_{\sigma(i)}^{t})),\] \[L =L_{\mathcal{S}}+L_{\mathcal{T}}, \tag{5}\]
where the loss for both target and source examples is composed of a binary cross-entropy (BCE) loss and an \(l_{1}\) regression loss on the spatial outputs. The label for the BCE is determined by whether the proposal has an overlap larger than 0.5 with any projected ground truth. For the source dataset, we can compute the ground truth \((\alpha^{s},\beta^{s})\) from \(\theta^{s}\), which serves as the regression target for the rotation representation. However, for the target dataset, we only compute the regression loss for \((x_{i},y_{i},w_{i},h_{i})\): the model must learn the rotation knowledge from the source dataset. To simplify the mathematical notation, we omit the classification loss as it is less relevant to our contribution than the object score.
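The following sketch illustrates Eq. (3) and the asymmetric regression supervision of Eqs. (4)-(5): a rotated source box is projected to its external axis-aligned box, an axis-aligned target box is enlarged by the factor \(\gamma\) exactly as printed in Eq. (3), and the \(l_{1}\) regression covers \((\alpha,\beta)\) only for source examples. Names are illustrative, not from the released code.

```python
import numpy as np

def project(box, gamma):
    """Transformation P of Eq. (3), returning (x_min, y_min, x_max, y_max).

    A 5-tuple (x, y, w, h, theta) is a rotated source box: return its
    tightest external axis-aligned box.  A 4-tuple (x, y, w, h) is an
    axis-aligned target box: enlarge it by gamma, as printed in Eq. (3)."""
    if len(box) == 5:
        x, y, w, h, theta = box
        half_w = 0.5 * (abs(w * np.cos(theta)) + abs(h * np.sin(theta)))
        half_h = 0.5 * (abs(w * np.sin(theta)) + abs(h * np.cos(theta)))
        return (x - half_w, y - half_h, x + half_w, y + half_h)
    x, y, w, h = box
    return (x - w / 2 - gamma * w, y - h / 2 - gamma * h,
            x + w / 2 + gamma * w, y + h / 2 + gamma * h)

def rpn_l1_regression(pred, gt, is_source):
    """l1 term of Eqs. (4)-(5) for one matched proposal.

    pred = (x, y, w, h, alpha, beta); the rotation offsets (alpha, beta)
    are supervised only when the ground truth comes from the rotated
    source set (for which (alpha, beta) are derived from theta)."""
    n = 6 if is_source else 4   # target examples provide no (alpha, beta) targets
    return float(np.abs(np.asarray(pred[:n], float) - np.asarray(gt[:n], float)).sum())
```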
### Learning Rotated R-CNN
In the second stage, the Oriented R-CNN takes in a subset of the proposals generated in the first stage and makes the final prediction \(R_{i}^{*}=(x_{i}^{*},y_{i}^{*},w_{i}^{*},h_{i}^{*},\theta_{i}^{*},p_{i}^{*},c_{i}^{*})\). The bounding box regression is less important for two reasons. Firstly, the cropped region proposal is already an approximate detection, and the goal of the regression at this stage is only to fine-tune it. Secondly, we choose to use class-agnostic rotated bounding box regression here, as the model is able to learn general bounding box regression from the source dataset. The classification of the second stage is also straightforward to train because a canonical axis-aligned detector is fundamentally identical in that respect. The most important and challenging aspect is to produce an accurate object score \(p_{i}^{*}\), which is a direct result of the ground truth assignment process. We first formulate the assignment process for the source dataset as
\[\sigma_{s}^{*}(i) =\operatorname*{arg\,max}_{j\in\{1,\dots,m\}}iou((x_{i},y_{i},w_{ i},h_{i},\theta_{i}),B_{j}^{s}), \tag{6}\] \[\tau_{s}^{*}(i) =\max_{j\in\{1,\dots,m\}}iou((x_{i},y_{i},w_{i},h_{i},\theta_{i} ),B_{j}^{s}). \tag{7}\]
Note that the assignment process is based on the proposal instead of the final refined prediction. The difference from the first stage is that here we use accurate rotated ground truth \(B_{j}^{s}\) against \(R_{i}\) instead of the external enclosing axis-aligned box. We compute \(\theta\) of the first stage with \((\alpha,\beta)\) inline
\(w_{j}*h_{j}<a_{threshold}\), it is likely an occluded box. Following these heuristics, we define a binary reliability switch and loss for target dataset as
\[g_{i}=\begin{cases}1\text{ if }(r_{\sigma^{*}(i)}>3\text{ or }w_{\sigma^{*}(i)}*h_{\sigma^{*}(i)}<a_{threshold})\\ 0\text{ otherwise}\end{cases} \tag{13}\]
\[L^{*}_{\mathcal{T}}=\frac{1}{N}\sum_{i=1}^{N}-g_{i}\mathbf{1}_{\tau^{*}_{i}(i) \geq 0.5}log(p^{*}_{i}). \tag{14}\]
and we choose to mask unreliable examples. Although heuristic selection is less general than projection assignment, it can be beneficial for confined industrial applications where the aspect ratio of the target object is known, such as detecting a bottle on a conveyor belt.
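A small sketch of Eqs. (13)-(14) as printed: a matched target ground-truth box contributes to the second-stage object-score loss only according to the switch \(g_i\), computed from its aspect ratio and area. The area threshold is application-specific and not reported in the excerpt above; the function names are ours.

```python
import math

def reliability_switch(w, h, area_threshold):
    """g_i of Eq. (13): 1 if the matched axis-aligned target box is kept
    for second-stage object-score supervision, 0 otherwise."""
    aspect_ratio = max(w, h) / min(w, h)
    return 1 if (aspect_ratio > 3 or w * h < area_threshold) else 0

def masked_objectness_loss(object_scores, matching_scores, switches):
    """Masked BCE-style term of Eq. (14), averaged over all N proposals."""
    total = sum(-g * math.log(p)
                for p, tau, g in zip(object_scores, matching_scores, switches)
                if tau >= 0.5)
    return total / max(len(object_scores), 1)
```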
## 4 Experiments and Discussion
In this section, we first outline our implementation details and experimental setup including the datasets. Then we show the effectiveness of our method through ablation studies followed by the comparison with different target and source datasets. Finally, we include some qualitative results.
### Experimental setup
**Source Datasets.** Source datasets, in this work, are datasets strongly labelled with box orientations. The model has access to the source datasets to learn the task of rotated box detection. We choose a variety of source datasets including DOTA [32], COCO [13] and the fresh-produce dataset 1, where DOTA [32] is a popular satellite imagery dataset with rotated annotations and COCO [13] is a popular general object detection dataset. In COCO, rotated bounding box ground truth can be generated by finding the minimum enclosing rectangle of each instance segmentation mask. The fresh-produce dataset 1 is a challenging dataset with high object density and heavy occlusion. The numbers of images and instances are shown in Table 1. There are three subsets of elongated objects: banana, cucumber and carrot.
**Target Datasets.** The training subset of each target dataset only contains axis-aligned bounding box ground truth, which is what the model learns from. In contrast, the validation and test subsets contain rotated ground truth to evaluate the performance of the model. We select HRSC2016 [16], SSDD [27], and the cucumber and carrot datasets 1 as our target datasets to cover a variety of domain permutations, including satellite imagery, single-channel imagery, natural images and various object densities.
**Implementation Details.** We pretrain our detector, Oriented R-CNN [33], using DOTA [32]. The ship class and the images containing ships are removed from DOTA, both for pretraining and for its role as a source dataset, due to class overlap with some target datasets. The main statistic we use in the paper to evaluate our method is AP50, the average precision at an IoU threshold of \(0.5\); average precision (AP) is the area under the precision-recall curve. We utilise the mmrotate framework [36] and [33] for training. We use a batch of \(2\) images from the target dataset and a batch of \(2\) images from the source dataset for a combined mini-batch of \(4\). The forward propagation and losses for the two batches are computed independently; the losses are then added and backpropagated through the network. We train the models for up to \(50\) epochs of the target dataset. During inference, we set the non-maximum-suppression threshold to \(0.5\) instead of the \(0.1\) generally used for aerial datasets, as the higher-density fresh-produce dataset contains far more overlapping objects. We conduct the experiments using one 2080ti GPU with 11GB of memory. The training time depends on the dataset size, ranging from 30 minutes to 2 hours. Testing completes within 5 minutes for each dataset at 15 FPS, which is the same as the original detector.
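The co-training step described above can be sketched as follows: each optimization step draws two source images and two target images, computes the two losses independently, sums them as in Eq. (5), and backpropagates once. This is a schematic PyTorch-style loop with a hypothetical `detector.loss` interface, not the actual mmrotate [36] training code.

```python
def cotrain_one_epoch(detector, source_loader, target_loader, optimizer):
    """Sketch of one co-training epoch: paired mini-batches of 2 source and
    2 target images, independent losses, summed and backpropagated once."""
    for src_batch, tgt_batch in zip(source_loader, target_loader):
        optimizer.zero_grad()
        # Source images carry rotated boxes: full supervision, including
        # the (alpha, beta) rotation offsets and rotated second-stage targets.
        loss_s = detector.loss(src_batch, has_rotated_gt=True)   # hypothetical interface
        # Target images carry only axis-aligned boxes: rotation components
        # are masked out of the regression and assignment.
        loss_t = detector.loss(tgt_batch, has_rotated_gt=False)  # hypothetical interface
        (loss_s + loss_t).backward()
        optimizer.step()
```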
### Ablation Studies
To tackle the weakly-supervised learning of rotated bounding boxes given only axis-aligned ground truth, we build our approach progressively. In this section, we show how the approach evolves, using cucumber as our axis-aligned training target dataset and DOTA [32] as our source dataset. We choose this particular pair because their domain gap is large in terms of object appearance, density and occlusion severity. It is more convincing if the model is able to learn rotation with such a large domain gap between source and target.
We first establish a baseline by training a rotated detector using our axis-aligned cucumber training dataset and testing it on the rotated cucumber test set. This baseline is the first row of Table 2, with an AP50 of 0.491. Then we introduce the co-training strategy with the source dataset without any modification of the training scheme; that is, we simply treat both source and target datasets equally and completely follow the training scheme of [33]. This gives us an AP50 of 0.542, which is an incremental improvement over the baseline.
RPN projection 5 improves the performance to 0.581 (Table 2). Instead of learning wrong rotation supervision from the axis-aligned dataset, the model simply does not learn rotation from it. After that, we try the single-class strategy: we use only one class label for all classes in the source dataset. This effectively improves the performance to 0.633 because the model focuses on learning the object score and rotation angle instead of the classification problem of the source dataset.
\begin{table}
\begin{tabular}{l c c} \hline \hline Class & Number of images & Number of instances \\ \hline banana & 158 & 7391 \\ cucumber & 48 & 2036 \\ carrot & 47 & 4647 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Fresh-produce datasets.
If the target dataset is multi-class, we can still apply the single-class strategy to the source dataset because the source classification is not important.
The projection assignment and heuristic selection methods are implemented separately. The projection assignment strategy described in Section 3.3 improves the performance to 0.664 since it reduces the false negative rate; it cannot improve performance further due to the false positive assignment problem. The heuristic selection strategy improves the model performance to 0.666. Finally, we pretrain this model on other axis-aligned objects and improve the performance to 0.683, which is 0.192 higher than the baseline. We take the last row of Table 2 as our final approach and apply it to permutations of target and source datasets in Section 4.2.
We also provide an ablation of the enlargement factor \(\gamma\geq 1\). As we can see from Table 3, \(\gamma=1.00\) is the best, which is what we have used throughout all the experiments.
### Main results
In this section, we apply the KCR framework to various combinations of source and target datasets to investigate the generality of the method under different domain gaps. We also compare KCR with a popular foreground segmentation algorithm [25].
**Fresh-produce dataset**. In Table 4 we can see that results progressively improve as information closer to the desired output is introduced. We initially evaluate our method with no knowledge combination, that is, for cucumber and carrot we use the axis-aligned bounding boxes and allow the network to learn incorrect \(\alpha,\beta\) and \(\theta\) values.
In the next iteration, we use DOTA [32] aerial satellite images for knowledge combination. The domain gap between these images and the cucumbers and carrots is large because of the sparsity of objects in DOTA. Nonetheless, introducing rotated bounding box information, even from a dissimilar dataset, results in a large performance improvement for both cucumber and carrot, as seen in the second row of Table 4. The next progression combines knowledge with bananas annotated with rotated bounding boxes. This further improves the results and provides an insight into how task similarity strengthens knowledge combination. The banana dataset offers the opportunity for the model to learn rotated detection with severe occlusion. For reference, the last row in Table 4 is the result when training the model directly on ground-truth rotated bounding boxes, i.e., fully supervised training. We note that the performance gap between our approach and fully supervised training is small.
**HRSC2016**. HRSC2016 [16] is a rotated satellite imagery dataset that focuses on ships. The objects in HRSC2016 typically have a larger aspect ratio and occupy a larger area of the image in comparison to DOTA [32] targets. To prevent class overlap, images with ships have been removed from DOTA when it is used as a source dataset. We establish our baseline by using the original training regime and only the axis-aligned ground truth of HRSC2016, yielding an AP50 of 0.175, which is 72.8% lower than the fully supervised result, as shown in Table 5. We investigate the performance of KCR with COCO [13], containing only natural images, as the source dataset because the domain gap between COCO and HRSC2016 is visually large. We generate rotated bounding box ground truth from the instance segmentation masks. The result significantly improves to 0.579 when we use the KCR framework. Although rotated bounding boxes are used in the source dataset, objects in COCO are typically axis-aligned and hence produce a weak rotation training signal. We therefore rotate images by \(0-180\) degrees, followed by horizontal and diagonal flip augmentation, to increase the number of rotated examples in the source dataset. As a result, AP50 rises to 0.783 with strongly augmented COCO. Thus we show that KCR enables the transfer learning of the extra parameter under a large domain gap. Finally, we use the DOTA dataset as the source and achieve an AP50 of 0.791, which is only 11.2% lower than the fully supervised model. The remaining performance gap is potentially due to the lack of equally elongated objects with large aspect ratios in the source dataset.
**SSDD**. SSDD [27] is a SAR dataset which also focuses on the detection of ships. Images in SSDD are single channel and vary in frequency depending on the sensor used for acquisition. These images are commonly low resolution and contain high-frequency noise, which results in a visually large domain gap from any source dataset. We follow the same knowledge combination strategy as before. Using KCR, the trained rotated-box model performs almost equally well with three different source datasets (DOTA, COCO and augmented COCO). The final AP50 of 0.888 is 45.6% better than the baseline and only 1% lower than a fully supervised model.
**Rotated Boxes with GrabCut.** GrabCut [25] is a computer vision algorithm that predicts a foreground mask initialised with a region of interest. This foreground segmentation can then be used to predict a rotated box with a variety of heuristic algorithms. This predicted rotation can be used at inference
Figure 4: Distribution of Aspect ratios on 4 datasets.
Figure 3: Visualization of KCR performance against the original Oriented R-CNN trained with weak axis-aligned annotation. The images are from the test sets of HRSC2016 [16] and SSDD [27]. We use COCO as our source dataset, which has a large domain gap from the target. The model trained with KCR learns to predict accurate rotated bounding boxes, which are much more precise than those of the original model.
time as a post-processing step, transforming the axis-aligned bounding box into a rotated bounding box. At test time, this improves the baseline by a minimal 6.5%, but reduces inference throughput to 0.3 FPS. Alternatively, this algorithm can be run offline on the axis-aligned ground truth to produce noisy rotated bounding box ground truth for training. This method is significantly more effective, improving test-time performance on rotated ground truth to 0.629 AP50 for HRSC and 0.585 for SSDD. However, KCR outperforms training on noisy ground truth by 16.2% on HRSC and 30.3% on SSDD.
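Our reading of this baseline can be sketched with standard OpenCV calls: run GrabCut inside the axis-aligned box and take the minimum-area rectangle of the recovered foreground pixels as a pseudo rotated box. The exact heuristic used in the paper may differ.

```python
import cv2
import numpy as np

def pseudo_rotated_box(image, aabb, iterations=5):
    """Turn an axis-aligned box (x, y, w, h) on an HxWx3 uint8 image into a
    rotated box ((cx, cy), (w, h), angle) via GrabCut + minimum-area rectangle."""
    mask = np.zeros(image.shape[:2], np.uint8)
    bgd_model = np.zeros((1, 65), np.float64)
    fgd_model = np.zeros((1, 65), np.float64)
    cv2.grabCut(image, mask, tuple(aabb), bgd_model, fgd_model,
                iterations, cv2.GC_INIT_WITH_RECT)
    # Keep definite and probable foreground pixels.
    fg = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 1, 0).astype(np.uint8)
    ys, xs = np.nonzero(fg)
    if len(xs) < 3:                      # segmentation failed: fall back to the AABB
        x, y, w, h = aabb
        return ((x + w / 2.0, y + h / 2.0), (float(w), float(h)), 0.0)
    points = np.stack([xs, ys], axis=1).astype(np.float32)
    return cv2.minAreaRect(points)
```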
**Analysis on aspect ratios.** We show histograms of aspect ratio in Figure 4. The biggest difference between an axis-aligned and a rotated box occurs when the instance has a high aspect ratio **and** is rotated. Most COCO objects are neither long **nor** rotated, while DOTA has a fair distribution of long and rotated objects. The performance gap between original COCO and DOTA as the source dataset is bigger for HRSC than for SSDD because HRSC has a more tail-heavy aspect ratio distribution.
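The statistic behind Figure 4 is straightforward to reproduce: for each rotated ground-truth box, take the longer side over the shorter side and histogram the result per dataset. A toy sketch with made-up boxes:

```python
import numpy as np

def aspect_ratios(boxes):
    """boxes: rotated boxes (x, y, w, h, theta); return long/short side ratios."""
    boxes = np.asarray(boxes, dtype=float)
    w, h = boxes[:, 2], boxes[:, 3]
    return np.maximum(w, h) / np.minimum(w, h)

# Hypothetical annotations, for illustration only.
for name, b in {"DOTA": [(0, 0, 90, 30, 0.2), (0, 0, 45, 40, 1.1)],
                "HRSC2016": [(0, 0, 300, 60, 0.4)]}.items():
    hist, _ = np.histogram(aspect_ratios(b), bins=[1, 2, 3, 5, 10, 100])
    print(name, hist)
```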
### Qualitative Results
We visualize the performance of KCR on HRSC2016 [16] and SSDD [27] in Figure 3. The model has successfully learnt to predict accurate rotated bounding boxes with weak axis-aligned annotation. The source dataset we use to gain rotation knowledge is COCO [13], which has a large domain gap from the target dataset. The rotated predictions from our framework are much tighter than the boxes of a model which was trained on, and predicts, axis-aligned boxes.
The fresh-produce dataset is more challenging due to its higher density of objects with severe occlusion. We visualize the performance of KCR with cucumber and carrot as target datasets and the banana dataset as the source dataset supplying knowledge of rotation. The depicted images are from a separate unlabeled set to ensure generality. As shown in Figure 5, the model is clearly capable of predicting tight rotated bounding boxes in a challenging scenario, which is the core contribution of this paper. The model is able to complete the task with high precision and recall in scenes with frequent and extreme occlusions, various lighting conditions and different object sizes. For the two images in the left column, the model detects almost every object.
## 5 Conclusion
Rotated detection improves the performance of downstream tasks by reducing the overall area of the enclosing box, improving the foreground-to-background ratio. This is particularly important for scenes with a high density of targets and complex occlusions. However, most existing datasets only provide axis-aligned annotation, owing to the lack of tooling for annotating rotated boxes. In this paper, we address this problem by proposing KCR, a novel knowledge combination training scheme that only requires axis-aligned annotation for the target object class to train the model. At inference time, the model predicts accurate rotated bounding boxes on par with the fully-supervised approach. This approach enables the detector to predict an extra but crucial parameter.
Figure 5: Visualization of KCR performance when trained with an axis-aligned-only dataset. The images are chosen from an unlabeled set to ensure generality. The model is clearly capable of predicting tight rotated bounding boxes even with high object density and occlusion.
We believe this work can greatly extend the use cases of rotated object detection by reducing annotation costs.
|
2308.13341
|
Relativistic constraints on 3N contact interactions
|
In this paper we analyze the relativistic corrections to the leading order
three-nucleon (3N) contact interactions. These boost corrections are derived
first from the nonrelativistic reduction of covariant Lagrangians and later
from the Poincar\'e algebra constraints on nonrelativistic theories. We show
that in order to describe the 3N potential in reference frames other than the
center-of-mass frame, the inclusion of five additional terms with fixed
coefficients is required. These terms will be relevant in systems with mass
number A>3. How they will affect EFT calculations of binding energies and
scattering observables in these systems should be investigated.
|
Alessia Nasoni, Elena Filandri, Luca Girlanda
|
2023-08-25T12:21:01Z
|
http://arxiv.org/abs/2308.13341v1
|
# Relativistic constraints on 3N contact interactions
###### Abstract
In this paper we analyze the relativistic corrections to the leading order three-nucleon (3N) contact interactions. These boost corrections are derived first from the nonrelativistic reduction of covariant Lagrangians and later from the Poincare algebra constraints on nonrelativistic theories. We show that in order to describe the 3N potential in reference frames other than the center-of-mass frame, the inclusion of five additional terms with fixed coefficients is required. These terms will be relevant in systems with mass number \(\mathbf{A>3}\). How they will affect EFT calculations of binding energies and scattering observables in these systems should be investigated.
**Keywords: Effective Lagrangians, Three-body interaction, Contact interaction, Relativistic covariance**
## 1 Introduction
Nowadays, effective field theories (EFTs) are recognized as the standard framework for dealing with the nuclear interaction [1, 2, 3, 4, 5, 6, 7]. The starting points are the identification of the most general effective Lagrangian preserving all the low energy symmetries of the fundamental theory and a power counting to organize the infinite tower of permitted interactions. This leads to the emergence of a predictive setting in which the interactions are expressed at each order of the low-energy expansion in terms of a finite number of low-energy constants (LECs), which can be treated as fitting parameters and extracted from phenomenology. Of particular interest among these fitting parameters are the LECs related to contact interactions between nucleons. They are strongly constrained by discrete symmetries and also by Poincare symmetry, although the typical setting of nuclear physics is a nonrelativistic quantum-mechanical context.
Relativistic effects in nuclear interaction vertices can be determined by the nonrelativistic reduction of a relativistic quantum-field theoretical Lagrangian and evaluated order by order in the low-energy expansion, since they scale with the soft nucleon momenta [8, 9]. An alternative approach derives from the Poincare algebra constraints in a purely quantum mechanical setting [10, 11]. Indeed, at sufficiently low energy scales, when the effects of creation and annihilation of particles can be ignored, the system can be considered as constituted by a fixed number of particles, and interactions can be described as direct (i.e., they explicitly depend on the physical variables associated to the constituents of the system) rather than mediated by fields.
The analysis of relativistic corrections on two-nucleon contact forces up to order \(1/m^{2}\), \(m\) being
the nucleon mass, has already been discussed in Refs. [12, 13] both via the nonrelativistic reduction of covariant Lagrangians and from the point of view of the constraints imposed by the Poincare algebra. Furthermore, the analysis of these constraints up to \(1/m^{4}\) in Refs. [14, 15] led to the identification of two free LECs that parameterize a nucleon-nucleon (NN) interaction dependent on the overall momentum of the pair.
In this work we extend the above results to the three-body contact forces.
The first contribution to the contact 3N force is represented by a single operator \(O_{0}\), accompanied by the LEC \(E_{0}\), whose matrix elements are constant in momentum space and take the form of the identity operator in spin-flavor space. The subleading terms involve two powers of momenta and were classified in Ref. [16] as consisting of 13 independent operators in the 3N center of mass frame. In a general frame the 3N contact potential reads
\[V_{3N} = E_{0}-\sum_{i\neq j\neq k}\left\{E_{1}{\bf k}_{i}^{2}+E_{2}{\bf k }_{i}^{2}\bbox{\tau}_{i}\cdot\bbox{\tau}_{j}\right. \tag{1}\] \[+ E_{3}{\bf k}_{i}^{2}\bbox{\sigma}_{i}\cdot\bbox{\sigma}_{j}+E_{4 }{\bf k}_{i}^{2}\bbox{\sigma}_{i}\cdot\bbox{\sigma}_{j}\bbox{\tau}_{i}\cdot \bbox{\tau}_{j}\] \[+ E_{5}\left(3{\bf k}_{i}\cdot\bbox{\sigma}_{i}{\bf k}_{i}\cdot \bbox{\sigma}_{j}-{\bf k}_{i}^{2}\bbox{\sigma}_{i}\cdot\bbox{\sigma}_{j}\right)\] \[+ E_{6}\left(3{\bf k}_{i}\cdot\bbox{\sigma}_{i}{\bf k}_{i}\cdot \bbox{\sigma}_{j}-{\bf k}_{i}^{2}\bbox{\sigma}_{i}\cdot\bbox{\sigma}_{j} \right)\bbox{\tau}_{i}\cdot\bbox{\tau}_{j}\] \[+ \frac{i}{4}E_{7}{\bf k}_{i}\times({\bf Q}_{i}-{\bf Q}_{j})\cdot( \bbox{\sigma}_{i}+\bbox{\sigma}_{j})\] \[+ \frac{i}{4}E_{8}{\bf k}_{i}\times({\bf Q}_{i}-{\bf Q}_{j})\cdot( \bbox{\sigma}_{i}+\bbox{\sigma}_{j})\,\bbox{\tau}_{j}\cdot\bbox{\tau}_{k}\] \[+ E_{9}{\bf k}_{i}\cdot\bbox{\sigma}_{i}{\bf k}_{j}\cdot\bbox{ \sigma}_{j}+E_{10}{\bf k}_{i}\cdot\bbox{\sigma}_{i}{\bf k}_{j}\cdot\bbox{ \sigma}_{j}\tau_{i}\cdot\bbox{\tau}_{j}\] \[+ E_{11}{\bf k}_{i}\cdot\bbox{\sigma}_{j}{\bf k}_{j}\cdot\bbox{ \sigma}_{i}\] \[+ E_{12}{\bf k}_{i}\cdot\bbox{\sigma}_{j}{\bf k}_{j}\cdot\bbox{ \sigma}_{i}\bbox{\tau}_{i}\cdot\bbox{\tau}_{j}\] \[+ E_{13}{\bf k}_{i}\cdot\bbox{\sigma}_{j}{\bf k}_{j}\cdot\bbox{ \sigma}_{i}\bbox{\tau}_{i}\cdot\bbox{\tau}_{k}\] \[+ E_{1}^{*}{\bbox{P}}^{2}+iE_{2}^{*}{\bbox{P}}\times{\bbox{k}}_{i }\cdot\bbox{\sigma}_{i}\] \[+ i\left(E_{3}^{*}{\bbox{P}}\cdot{\bbox{k}}_{i}\,\bbox{\sigma}_{i }\cdot\bbox{\sigma}_{j}+E_{4}^{*}{\bbox{P}}\cdot\bbox{\sigma}_{i}\,{\bbox{k} }_{j}\cdot\bbox{\sigma}_{j}\right.\] \[+ \left.E_{5}^{*}{\bbox{P}}\cdot\bbox{\sigma}_{i}\,{\bbox{k}}_{j} \cdot\bbox{\sigma}_{k}\right)\bbox{\tau}_{i}\times\bbox{\tau}_{j}\cdot\bbox{ \tau}_{k}\right\}\] \[\equiv E_{0}+\sum_{i=1}^{13}E_{i}O_{i}+\sum_{i=1}^{5}E_{i}^{*}O_{i}^{*},\]
where \({\bbox{k}}_{i}={\bbox{p}}_{i}-{\bbox{p}}_{i}^{\prime}\) and \({\bbox{Q}}_{i}={\bbox{p}}_{i}+{\bbox{p}}_{i}^{\prime}\) are related to the initial and final momenta of the \(i\)-th nucleon, respectively \({\bbox{p}}_{i}\) and \({\bbox{p}}^{\prime}_{i}\), and we indicate with \(O_{i=1-5}^{*}\) the operators depending on the overall momentum \({\bbox{P}}=\sum_{i}{\bbox{p}}_{i}=({\bbox{Q}}_{1}+{\bbox{Q}}_{2}+{\bbox{Q}} _{3})/2\).
The study of this three-body contact force proves particularly interesting in the context of many unsolved problems in nuclear physics [17, 18]. As shown in Refs. [14, 19], some terms of the 3N contact potential are related via a unitary transformation to the \({\bbox{P}}\)-dependent two-nucleon potential and seem to be crucial for solving the \(p-d\)\(A_{y}\) puzzle [20]. Similarly, the study of the three-body force and its relativistic corrections could have an impact on the study of systems with \(A>3\), where large and unexplained discrepancies between theory and experiment still exist [21, 22, 23, 24].
The paper is structured as follows. In Section 2 the relativistic corrections are calculated by non-relativistic reduction of covariant Lagrangians. In Section 3 we show how the same corrective terms can be derived from Poincare algebra constraints. Finally, in Section 4 we present the conclusions of this work.
## 2 Boost corrections from a covariant \(3n\) contact Lagrangian
The objective of this section is to determine the \({\bbox{P}}\)-dependent relativistic correction to the leading order \(3N\) contact potential in any given frame by applying the non-relativistic reduction to the covariant Lagrangian.
We begin by establishing a complete non-minimal set of relativistically invariant 3N contact operators \(\tilde{O}_{i}\) that contribute to the non-relativistic expansion starting from order \(Q^{0}\), while satisfying the requirements of hermiticity and CPT invariance, following general principles outlined in Refs. [15, 25, 26, 27]. Formally, the operators \(\tilde{O}_{i}\) are structured as the composition of fermion bilinears [13, 28]
\[(\bar{\psi}\overleftrightarrow{\partial}_{\mu_{1}}\cdots\overleftrightarrow{\partial}_{\mu_{i}}\Gamma_{A}\psi)\partial_{\rho_{1}}\cdots\partial_{\rho_{m}}(\bar{\psi}\overleftrightarrow{\partial}_{\nu_{1}}\cdots\overleftrightarrow{\partial}_{\nu_{j}}\Gamma_{B}\psi) \tag{2}\] \[\times\partial_{\sigma_{1}}\cdots\partial_{\sigma_{n}}(\bar{\psi}\overleftrightarrow{\partial}_{\lambda_{1}}\cdots\overleftrightarrow{\partial}_{\lambda_{k}}\Gamma_{C}\psi).\]
Here, \(\psi\) represents the relativistic nucleon field, which is a doublet in isospin space. The symbol \(\overleftrightarrow{\partial}\) denotes the derivative operator \(\overrightarrow{\partial}-\overleftarrow{\partial}\). The symbols \(\Gamma_{A,B,C}\) denote generic elements of the Clifford algebra, expanded in the basis 1, \(\gamma_{5}\), \(\gamma_{\mu}\), \(\gamma_{\mu}\gamma_{5}\)
\(\sigma^{\mu\nu}\), as well as the metric tensor or the Levi-Civita tensor \(\epsilon^{\mu\nu\rho\sigma}\) (with the convention \(\epsilon^{0123}=-1\)).
The Lorentz indices on the partial derivatives must be contracted among themselves and/or with those in the \(\Gamma_{A,B,C}\) in order to preserve Lorentz invariance.
Regarding the isospin degrees of freedom, the allowed isospin-invariant flavor structures are \(1\bigotimes 1\bigotimes 1\), \(\mathbf{\tau_{i}}\cdot\mathbf{\tau_{j}}\), and \(\mathbf{\tau_{1}}\times\mathbf{\tau_{2}}\cdot\mathbf{\tau_{3}}\).
Table 1 displays the transformation properties under parity, charge conjugation, and Hermitian conjugation of the fermion bilinears built with the aforementioned elements of the Clifford and flavour algebra.
If charge conjugation and parity symmetries are satisfied, time reversal symmetry is automatically fulfilled, according to the CPT theorem.
We now outline power counting criteria needed to establish which operators contribute to the non-relativistic expansion starting from order \(Q^{0}\)[13, 15].
Derivatives \(\partial\) acting on a whole bilinear are of order \(Q\), while derivatives \(\overleftrightarrow{\partial}_{\mu}\) acting inside a bilinear are of order \(Q^{0}\) due to the presence of the heavy fermion mass scale. Therefore, we can restrict ourselves to retain only operators containing the latter kind of derivatives. Nevertheless, a generic operator contributing at order \(Q^{0}\) may in principle include arbitrary powers of space-time derivatives of the fields. However, it is possible to restrict ourselves to a finite number of structures, as detailed in what follows. Whenever \(\overleftrightarrow{\partial}_{\mu}\) is contracted with an element of the Clifford algebra inside the same bilinear, the fields' equations of motion can be used to remove it [29, 30]. The same is true when \(\overleftrightarrow{\partial}_{\mu}\) is contracted with another \(\overleftrightarrow{\partial}^{\mu}\) inside the same bilinear, since \(\overleftrightarrow{\partial}_{\mu}\overleftrightarrow{\partial}^{\mu}=-4m^{2}-\partial^{2}\). As a result, by the equations of motion, no two Lorentz indices can be contracted among themselves inside the same bilinear, except for the Levi-Civita tensors and for the suppressed \(\partial^{2}\).
As for derivatives contracted pairwise (\(\overleftrightarrow{\partial}_{A}\cdot\overleftrightarrow{\partial}_{B}\)) between two different bilinears, we observe that these structures too generate redundant contributions. For instance,
\[(\bar{\psi}\psi)_{1}(\bar{\psi}\overleftrightarrow{\partial}^{ \mu}\psi)_{2}(\bar{\psi}\overleftrightarrow{\partial}_{\mu}\psi)_{3}\\ =2m^{2}(\bar{\psi}\psi)_{1}(\bar{\psi}\psi)_{2}(\bar{\psi}\psi)_ {3}\\ +\frac{1}{2}(\bar{\psi}\psi)_{1}(\bar{\psi}\psi)_{2}\partial^{2}( \bar{\psi}\psi)_{3}, \tag{3}\]
so \((\bar{\psi}\psi)_{1}(\bar{\psi}\overleftrightarrow{\partial}^{\mu}\psi)_{2}(\bar{\psi}\overleftrightarrow{\partial}_{\mu}\psi)_{3}-2m^{2}(\bar{\psi}\psi)_{1}(\bar{\psi}\psi)_{2}(\bar{\psi}\psi)_{3}=O(Q^{2})\) can be neglected in the non-relativistic expansion, as it starts from \(Q^{2}\). (We see from Eq. (A1) that the contribution of \(\tilde{O}_{1}\) starts from order \(Q^{0}\).)
The Dirac matrix \(\gamma_{5}\) can be thought of as of order \(O(Q)\) since it mixes the large and small components of the Dirac spinor. This also applies to the spatial components of \(\gamma_{\mu}\) and to the temporal component of \(\gamma_{\mu}\gamma_{5}\), as well as to \(\sigma_{0i}\). For instance, operators associated to the structure \(\gamma\bigotimes\gamma\bigotimes\sigma\), such as \((\bar{\psi}\gamma_{\alpha}\psi)_{1}(\bar{\psi}\gamma_{\beta}\psi)_{2}(\bar{ \psi}\sigma^{\alpha\beta}\psi)_{3}\), do not contribute at order \(Q^{0}\) due to this mixing.
The antisymmetry properties of \(\epsilon_{\mu\nu\rho\sigma}\) and \(\sigma^{\mu\nu}\) restrict the maximum number of their possible contractions with a derivative \(\overleftrightarrow{\partial}\) operating within a bilinear to one. Any additional contraction with a derivative would lead to a contribution at a higher order than \(O(Q^{0})\).
On the basis of these properties we derive a complete (but non-minimal) set of 92 different relativistic operators, displayed in Table 2, that contribute to the non-relativistic expansion starting from order \(Q^{0}\). Obviously, none of the 92 operators contains derivatives \(\partial\) acting on an entire bilinear, as these are of order \(Q\).
The relativistic nucleon field \(\psi\) can be expanded in a non-relativistic manner up to second order in \(Q\) by utilizing the non-relativistic field,
\[\psi(x)=\begin{pmatrix}(1+\frac{\nabla^{2}}{8m^{2}})\mathbb{1}_{2\times 2}\\ -\frac{i}{2m}\mathbf{\sigma}\cdot\nabla\end{pmatrix}N(x)+O(Q^{3}). \tag{4}\]
We obtain 92 resulting non-relativistic operators \(\tilde{O}_{i}\) as combinations of the 146 subleading \(3N\) contact operators \(o_{1,\dots,146}\) compatible with the
\begin{table}
\begin{tabular}{|c c c c c c c c c c c|} \hline & 1 & \(\gamma_{5}\) & \(\gamma_{\mu}\) & \(\gamma_{\mu}\gamma_{5}\) & \(\sigma_{\mu\nu}\) & \(g_{\mu\nu}\) & \(\epsilon_{\mu\nu\rho\sigma}\) & \(\overleftrightarrow{\partial}_{\mu}\) & \(\partial_{\mu}\) & \(\tau^{a}\) \\ \hline \(\mathcal{P}\) & + & \(-\) & + & \(-\) & + & + & \(-\) & + & + & + \\ \(\mathcal{C}\) & + & + & \(-\) & + & \(-\) & + & + & \(-\) & + & (\(-1\))\({}^{a+1}\) \\ h.c. & + & \(-\) & + & + & + & + & \(+\) & \(-\) & + & + \\ \hline \end{tabular}
\end{table}
Table 1: Transformation properties of the different elements of the Clifford algebra, metric tensor, Levi-Civita tensor and derivative operators under parity (\(\mathcal{P}\)), charge conjugation (\(\mathcal{C}\)) and Hermitian conjugation (h.c.)
symmetries of QCD, classified in Ref. [16]. We list the resulting expressions in Appendix A. After applying Fierz relations, these non-relativistic expansions can be rewritten in the minimal basis consisting of the operators \(O_{1,...,13}\), \(O_{1,...,5}^{*}\) which appear in Eq. (1); they are shown in Appendix B. In these expressions the operators \(O_{i}^{*}\) entering the effective Lagrangian always appear in one single combination,
\[O_{\bf P}^{(0)}=6O_{0}+\frac{1}{4m^{2}}\left(\frac{2}{3}O_{1}^{*}+O_{2}^{*} \right), \tag{5}\]
which starts at \(O(Q^{0})\) and contains \({\bf P}\)-dependent drift corrections. Formally, the non-relativistic expansions of the relativistic operators \(\tilde{O}_{i}\), up to \(O(Q^{2})\), take the form
\[\tilde{O}_{i}=\alpha_{i}O_{\bf P}^{(0)}+\sum_{k=1}^{13}\beta_{i}^{k}O_{k},\qquad i=1,...,92, \tag{6}\]
where \(\alpha_{i}\), \(\beta_{i}^{k}\) are coefficients which can be read from the explicit expressions.
Thus, starting from the relativistic \(3N\) contact Lagrangian written in terms of 92 (redundant)
\begin{table}
\begin{tabular}{|l|l|l|c|} \hline \(\Gamma_{A}\,\bigotimes\Gamma_{B}\,\bigotimes\Gamma_{C}\) & \(\tilde{O}_{i}\) & Operators & Flavours \\ \hline \(1\bigotimes 1\bigotimes 1\) & \(\tilde{O}_{1-2}\) & \((\bar{\psi}\psi)_{1}(\bar{\psi}\psi)_{2}(\bar{\psi}\psi)_{3}\) & \(\mathbb{1},\tau_{1}\cdot\tau_{2}\) \\ \hline \(1\bigotimes 1\bigotimes\gamma\) & \(\tilde{O}_{3-6}\) & \(\frac{i}{2m}(\bar{\psi}\psi)_{2}(\bar{\psi}\,\,\overline{\partial}^{\mu}\psi) _{2}(\bar{\psi}\gamma_{\mu}\psi)_{3}\) & \(\mathbb{1},\tau_{1}\cdot\tau_{2},\tau_{2}\cdot\tau_{3},\tau_{1}\cdot\tau_{3}\) \\ \hline \(1\bigotimes\gamma\bigotimes\gamma\) & \(\tilde{O}_{7-9}\) & \((\bar{\psi}\psi)_{1}(\bar{\psi}\gamma_{\mu}\psi)_{2}(\bar{\psi}\gamma^{\mu} \psi)_{3}\) & \(\mathbb{1},\tau_{1}\cdot\tau_{2},\tau_{2}\cdot\tau_{3}\) \\ \(1\bigotimes\gamma\bigotimes\gamma\) & \(\tilde{O}_{10-13}\) & \(\frac{1}{4m^{2}}(\bar{\psi}\,\,\overline{\partial}_{\mu}\,\psi)_{1}(\bar{ \psi}\gamma_{\nu}\psi)_{2}(\bar{\psi}\,\,\overline{\partial}^{\nu}\,\gamma^{ \mu}\psi)_{3}\) & \(\mathbb{1},\tau_{1}\cdot\tau_{2},\tau_{2}\cdot\tau_{3},\tau_{1}\cdot\tau_{3}\) \\ \(1\bigotimes\gamma\bigotimes\gamma\) & \(\tilde{O}_{14-16}\) & \(\frac{1}{4m^{2}}(\bar{\psi}\,\,\overline{\partial}_{\mu}\,\,\overline{ \partial}_{\nu}\,\psi)_{2}(\bar{\psi}\,\,\overline{\partial}^{\nu}\,\gamma^{ \mu}\psi)_{3}\) & \(\mathbb{1},\tau_{1}\cdot\tau_{2},\tau_{2}\cdot\tau_{3}\) \\ \(1\bigotimes\gamma\bigotimes\gamma\) & \(\tilde{O}_{17-19}\) & \(\frac{1}{4m^{2}}(\bar{\psi}\,\,\overline{\partial}_{\mu}\,\,\overline{ \partial}_{\nu}\,\psi)_{1}(\bar{\psi}\gamma^{\mu}\psi)_{2}(\bar{\psi}\gamma^{ \nu}\psi)_{3}\) & \(\mathbb{1},\tau_{1}\cdot\tau_{2},\tau_{2}\cdot\tau_{3}\) \\ \hline \(1\bigotimes\gamma\gamma\bigotimes\gamma\bigotimes\gamma\bigotimes\gamma\bigotimes\) & \(\tilde{O}_{20-22}\) & \((\bar{\psi}\psi)_{1}(\bar{\psi}\gamma_{\mu}\gamma\overline{\psi})_{2}(\bar{ \psi}\gamma^{\mu}\gamma\overline{\gamma}\psi)_{3}\) & \(\mathbb{1},\tau_{1}\cdot\tau_{2},\tau_{2}\cdot\tau_{3}\) \\ \hline \(1\bigotimes\gamma\gamma\bigotimes\sigma\) & \(\tilde{O}_{23-26}\) & \(\frac{i}{2m}\epsilon^{\mu\nu\alpha\beta}(\bar{\psi}\,\,\overline{\partial}_{ \mu}\,\,\psi)_{1}(\bar{\psi}\gamma_{\nu}\gamma\psi)_{2}(\bar{\psi}\sigma_{ \alpha\beta}\psi)_{3}\) & \(\mathbb{1},\tau_{1}\cdot\tau_{2},\tau_{2}\cdot\tau_{3},\tau_{1}\cdot\tau_{3}\) \\ \(1\bigotimes\gamma\gamma\bigotimes\sigma\) & \(\tilde{O}_{27-30}\) & \(\frac{i}{2m}\epsilon^{\mu\nu\alpha\beta}(\bar{\psi}\psi)_{1}(\bar{\psi}\,\, \overline{\partial}_{\mu}\,\gamma_{\nu}\gamma\overline{\gamma}\psi)_{2}(\bar{ \psi}\sigma_{\alpha\beta}\psi)_{3}\) & \(\mathbb{1},\tau_{1}\cdot\tau_{2},\tau_{2}\cdot\tau_{3},\tau_{1}\cdot\tau_{3}\) \\ \(1\bigotimes\gamma\gamma\bigotimes\sigma\) & \(\tilde{O}_{31-34}\) & \(\frac{i}{2m}\epsilon^{\mu\nu\alpha\beta}(\bar{\psi}\psi)_{1}(\bar{\psi}\gamma_{ \mu}\gamma\overline{\gamma}\psi)_{2}(\bar{\psi}\,\,\overline{\partial}^{\nu}\, \sigma_{\alpha\beta}\psi)_{3}\) & \(\mathbb{1},\tau_{1}\cdot\tau_{2},\tau_{2}\cdot\tau_{3},\tau_{1}\cdot\tau_{3}\) \\ \hline \(1\bigotimes\sigma\bigotimes\sigma\) & \(\tilde{O}_{35-37}\) & \((\bar{\psi}\psi)_{1}(\bar{\psi}\,\,\overline{\partial}_{\nu}\,\gamma^{\mu}\psi) _{2}(\bar{\psi}\gamma^{\nu}\psi)_{3}\) & \(\mathbb{1},\tau_{1}\cdot\tau_{2},\tau_{2}\cdot\tau_{3}\) \\ \hline \(\gamma\bigotimes\gamma\bigotimes\gamma\bigotimes\gamma\) & \(\tilde{O}_{38-41}\) & \(\frac{i}{2m}(\bar{\psi}\,\,\overline{\partial}_{\mu}\,\gamma^{\nu}\psi)_{1}(\bar{ 
\psi}\,\,\overline{\partial}_{\nu}\,\gamma^{\mu}\psi)_{2}(\bar{\psi}\,\, \overline{\partial}_{\nu}\,\gamma^{\mu}\psi)_{3}\) & \(\mathbb{1},\tau_{1}\cdot\tau_{2},\tau_{2}\cdot\tau_{3},\tau_{1}\cdot\tau_{3}\) \\ \(\gamma\bigotimes\gamma\bigotimes\gamma\) & \(\tilde{O}_{42-45}\) & \(\frac{i}{8m^{3}}(\bar{\psi}\,\,\overline{\partial}_{\mu}\,\,\overline{\partial}_{ \nu}\,\gamma^{\alpha}\psi)_{1}(\bar{\psi}\,\,\overline{\partial}_{\alpha}\, \gamma^{\mu}\psi)_{2}(\bar{\psi}\,\,\overline{\partial}_{\nu}\,\gamma^{\alpha} \psi)_{3}\) & \(\mathbb{1},\tau_{1}\cdot\tau_{2},\tau_{2}\cdot\tau_{3},\tau_{1}\cdot\tau_{3}\) \\ \(\gamma\bigotimes\gamma\bigotimes\gamma\) & \(\tilde{O}_{46-49}\) & \(\frac{i}{8m^{3}}(\bar{\psi}\,\,\overline{\partial}_{\mu}\,\,\overline{ \partial}_{\nu}\,\gamma^{\alpha}\psi)_{1}(\bar{\psi}\,\,\overline{\partial}_{ \alpha}\,\gamma^{\mu}\psi)_{2}(\bar{\psi}\,\,\overline{\partial}_{\nu}\,\gamma^{ \mu}\psi)_{3}\) & \(\mathbb{1},\tau_{1}\cdot\tau_{2},\tau_{2}\cdot\tau_{3},\tau_{1}\cdot\tau_{3}\) \\ \hline \(\gamma\bigotimes\gamma\bigotimes\gamma\bigotimes\gamma\bigotimes\gamma\) & \(\tilde{O}_{46-49}\) & \(\frac{i}{8m^{3}}(\bar{\psi}\,\,\overline{\partial}_{\mu}\,\,\overline{ \partial}_{\nu}\,\gamma^{\alpha}\psi)_{1}(\bar{\psi}\,\,\overline{\partial}_{ \alpha}\,\gamma^{\mu}\psi)_{2}(\bar{\psi}\,\,\overline{\partial}_{\nu}\, \gamma^{\alpha}\psi)_{3}\) & \(\mathbb{1},\tau_{1}\cdot\tau_{2},\tau_{2}\cdot\tau_{3},\tau_{1}\cdot\tau_{3}\) \\ \hline \(\gamma\bigotimes\gamma\bigotimes\gamma\bigotimes\gamma\bigotimes\gamma\) & \(\tilde{O}_{47-77}\) & \(\frac{1}{4m^{2}}\epsilon^{
LECs, \(\tilde{E}_{i}\), the potential can be written as
\[\tilde{V}=\sum_{i=1}^{92}\tilde{E}_{i}\tilde{O}_{i}=\sum_{i=1}^{92}\tilde{E}_{i} \big{(}\alpha_{i}O_{\mathbf{P}}^{(0)}+\sum_{k=1}^{13}\!\beta_{i}^{k}O_{k}\big{)}, \tag{7}\]
and comparing with Eq. (1), while considering Eq. (5), we obtain the following identification
\[\sum_{i=1}^{92}\tilde{E}_{i}\alpha_{i}=\frac{1}{6}E_{0}, \tag{8}\]
and specific constraints on the LECs \(E_{i=1,...,5}^{*}\),
\[E_{1}^{*}=\frac{1}{36m^{2}}E_{0}, \tag{9}\] \[E_{2}^{*}=\frac{1}{24m^{2}}E_{0},\] (10) \[E_{3}^{*}= E_{4}^{*}=E_{5}^{*}=0. \tag{11}\]
Finally, inserting the values of Eqs. (9)-(11) into \(\sum_{i=1}^{5}E_{i}^{*}O_{i}^{*}\), we can identify
\[\delta V(\mathbf{P})\equiv\frac{E_{0}}{24m^{2}}\left(\frac{2}{3}O_{1}^{*}+O_{2}^{ *}\right), \tag{12}\]
as the \(\mathbf{P}\)-dependent component of the \(3N\) contact potential of Eq. (1) in an arbitrary frame, i.e. the boost correction, up to order \(Q^{2}\), to the leading order \(3N\) contact potential in the rest frame of the system,
\[\begin{split} V_{3N}&=E_{0}+\sum_{i=1}^{13}E_{i}\, O_{i}+\sum_{i=1}^{5}E_{i}^{*}O_{i}^{*}=\\ &=E_{0}+\sum_{i=1}^{13}E_{i}O_{i}+\delta V(\mathbf{P}).\end{split} \tag{13}\]
## 3 The 3N contact interaction boost corrections from Poincare algebra
As an alternative to the procedure discussed in the previous Section, we review the calculation of the boost correction to the leading order 3N contact interaction up to order \(Q^{2}\) from the Poincare algebra constraints.
Relativistic many-body descriptions of systems consisting of a fixed number of interacting particles can be achieved using relativistic Hamiltonians. These Hamiltonians are defined as the sum of relativistic one-body kinetic energies, two- and many-body interactions and, importantly, their corresponding boost corrections. For a generic system of interacting particles with momenta \(\mathbf{p}_{\nu}\) and masses \(m_{\nu}\), such Hamiltonians may be expressed as follows [31],
\[\begin{split} H_{R}=&\sum_{\nu}\sqrt{m_{\nu}^{2}+p _{\nu}^{2}}+\sum_{\nu<\mu}\bigl{[}v_{\nu\mu}+\delta v_{\nu\mu}(\mathbf{P}_{\nu\mu} )\bigr{]}\\ &+\sum_{\nu<\mu<\rho}\bigl{[}V_{\nu\mu\rho}+\delta V_{\nu\mu\rho }(\mathbf{P}_{\nu\mu\rho})\bigr{]}+...,\end{split} \tag{14}\]
where \(\mathbf{P}_{\nu\mu}=\mathbf{p}_{\nu}+\mathbf{p}_{\mu}\) is the total momentum of particles \(\nu\) and \(\mu\), and \(\mathbf{P}_{\nu\mu\rho}=\mathbf{p}_{\nu}+\mathbf{p}_{\mu}+\mathbf{p}_{\rho}\) is the total momentum of particles \(\nu\), \(\mu\) and \(\rho\). The term \(v_{\nu\mu}\) corresponds to the two-body potential in the rest frame of the sub-system constituted by particles of indices \(\nu\),\(\mu\). Analogously, \(V_{\nu\mu\rho}\) is the three-body potential in the rest frame of particles \(\nu\),\(\mu\),\(\rho\). Terms \(\delta v_{\nu\mu}(\mathbf{P}_{\nu\mu})\) and \(\delta V_{\nu\mu\rho}(\mathbf{P}_{\nu\mu\rho})\) are referred to as "boost interactions". Clearly, these quantities vanish in the rest frame of their corresponding sub-system (i.e., \(\delta v_{\nu\mu}(0)=0\) if \(\mathbf{P}_{\nu\mu}=0\), and \(\delta V_{\nu\mu\rho}(0)=0\) if \(\mathbf{P}_{\nu\mu\rho}=0\)). However, it is essential to take them into account to attain accurate descriptions in reference frames where \(\mathbf{P}\neq 0\).
Both \(v_{\mu\nu}\) and \(V_{\mu\nu\rho}\) are determined by the fields and by the internal structure of the interacting particles. Realistic models of \(v_{\mu\nu}\) and \(V_{\mu\nu\rho}\) are obtained by choosing a theoretical framework and fitting its parameters to experimental data; as a consequence, they may contain some form of model-dependent relativistic effects.
Starting from \(v_{\mu\nu}\) (respectively, \(V_{\mu\nu\rho}\)) it is possible to obtain \(\delta v_{\mu\nu}(\mathbf{P}_{\mathbf{\mu\nu}})\) (respectively, \(\delta V_{\mu\nu\rho}(\mathbf{P}_{\mathbf{\mu\nu\rho}})\)) without any further model dependence, through relations fixed by the general principle of relativistic covariance.
For our present purposes, we are considering a system of three particles, each with spin \(s\) and mass \(m\). The dynamical variables for the \(\nu\)th particle (\(\nu=1,2,3\)) are spin \(\mathbf{\sigma_{\nu}}\), isospin \(\mathbf{\tau_{\nu}}\), momentum \(\mathbf{p_{\nu}}\), and position \(\mathbf{r_{\nu}}\). The momenta and positions are canonically conjugate operators, as are the center-of-mass variables \(\mathbf{R}=\frac{\mathbf{r}_{1}+\mathbf{r}_{2}+\mathbf{r}_{3}}{3}\) and \(\mathbf{P}=\mathbf{p}_{1}+\mathbf{p}_{2}+\mathbf{p}_{3}\). The spin and isospin operators satisfy the well-known angular momentum commutation relations: \([\sigma_{\nu}^{i},\sigma_{\mu}^{j}]=i\delta_{\nu\mu}\epsilon_{ijk}\sigma_{\nu}^{k}\) and
\([\tau^{i}_{\nu},\tau^{j}_{\mu}]=i\delta_{\nu\mu}\epsilon_{ijk}\tau^{k}_{\nu}\).
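As a quick consistency check of the algebra above, the following minimal Python sketch (ours, not part of the original derivation) verifies these commutation relations in the spin-1/2 representation, assuming the normalization \(\mathbf{s}_{\nu}=\boldsymbol{\sigma}^{\mathrm{Pauli}}_{\nu}/2\); the isospin operators obey the same relations.

```python
import numpy as np

# Pauli matrices; spin operators in the assumed normalization S_i = sigma_i^Pauli / 2
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]
S = [s / 2 for s in sigma]

# Levi-Civita tensor
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k], eps[j, i, k] = 1.0, -1.0

# Check [S_i, S_j] = i * eps_ijk * S_k for all components
for i in range(3):
    for j in range(3):
        comm = S[i] @ S[j] - S[j] @ S[i]
        rhs = 1j * sum(eps[i, j, k] * S[k] for k in range(3))
        assert np.allclose(comm, rhs)
print("spin-1/2 representation satisfies the angular momentum algebra")
```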
In the instant form of relativistic dynamics [32], interactions affect the Hamiltonian \(H\) and, necessarily, the boost generators \(\mathbf{K}\). We write
\[\begin{split}\mathbf{P}&=\mathbf{P}_{0},\quad H=H_{0}+V,\\ \mathbf{J}&=\mathbf{J}_{0},\quad\mathbf{K}=\mathbf{K}_{0}+\mathbf{W}, \end{split} \tag{15}\]
where \(V\), \(\mathbf{W}\) are the interaction terms, and the subscripts \(0\) indicate the corresponding operators in the absence of interactions,
\[\begin{split}\mathbf{P}_{0}&=\sum_{\nu=1}^{3}\mathbf{p}_{ \mathbf{\nu}},\\ \mathbf{J}_{0}&=\sum_{\nu=1}^{3}\mathbf{r}_{\mathbf{\nu}}\times \mathbf{p}_{\mathbf{\nu}}+\mathbf{s}_{\mathbf{\nu}}\equiv\sum_{\nu=1}^{3}\mathbf{j}_{\nu},\\ H_{0}&=\sum_{\nu=1}^{3}\omega_{\nu},\\ \mathbf{K}_{0}&=\sum_{\nu=1}^{3}\frac{\mathbf{r}_{\mathbf{\nu}} \omega_{\nu}+\omega_{\nu}\mathbf{r}_{\mathbf{\nu}}}{2c^{2}}-\frac{\mathbf{s}_{\mathbf{\nu}} \times\mathbf{p}_{\mathbf{\nu}}}{m_{\nu}c^{2}+\omega_{\nu}}-t\mathbf{p}_{\mathbf{\nu}}\\ &\equiv\sum_{\nu}\mathbf{k}_{\mathbf{\nu}},\end{split} \tag{16}\]
where \(\omega_{\nu}=\sqrt{m^{2}c^{4}+c^{2}p_{\nu}^{2}}\) is the single-particle energy of the \(\nu\)th particle.
The generators in Eq. (15) must satisfy the commutation relations of the Poincare group:
\[\begin{split}[P_{i},P_{j}]&=0,\qquad[J_{i},P_{j}] =i\epsilon_{ijk}P_{k},\\ [P_{i},H]&=0,\qquad[J_{i},J_{j}]=i\epsilon_{ijk}J_{ k},\\ [J_{i},H]&=0,\qquad[J_{i},K_{j}]=i\epsilon_{ijk}K_{ k},\\ &\qquad[H,K_{i}]=-iP_{i},\\ &\qquad[K_{i},K_{j}]=-i\epsilon_{ijk}J_{k}/c^{2},\\ &\qquad[P_{i},K_{j}]=-i\delta_{ij}H/c^{2},\end{split} \tag{17}\]
where \(i,j,k\in\{1,2,3\}\), \(\epsilon_{ijk}\) is the Levi-Civita tensor, \(\delta_{ij}\) is the Kronecker delta tensor, and summation convention on repeated indices is in force. Units are such that \(\hbar=1\) and \(c\) is the speed of light in vacuum.
As is well known, the relations of Eq. (17) are satisfied by the free generators in Eq. (16). The problem of describing an interacting system of relativistic particles consists of determining functions \(V\) and \(\mathbf{W}\) such that the commutation relations (17) are still satisfied [10].
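To make this statement concrete, the relations of Eq. (17) can be checked at the classical level, where commutators go over into \(i\) times Poisson brackets. The following sympy sketch (a minimal check of ours, restricted to a single free spinless particle, i.e. the generators of Eq. (16) with the spin term omitted) verifies \(\{H_{0},K_{0,i}\}=-P_{0,i}\), \(\{P_{0,i},K_{0,j}\}=-\delta_{ij}H_{0}/c^{2}\) and \(\{K_{0,i},K_{0,j}\}=-\epsilon_{ijk}J_{0,k}/c^{2}\).

```python
import sympy as sp

m, c, t = sp.symbols('m c t', positive=True)
r = sp.symbols('r1 r2 r3', real=True)   # position components
p = sp.symbols('p1 p2 p3', real=True)   # momentum components

def pb(A, B):
    """Classical Poisson bracket {A, B} in the canonical variables (r, p)."""
    return sum(sp.diff(A, r[i]) * sp.diff(B, p[i])
               - sp.diff(A, p[i]) * sp.diff(B, r[i]) for i in range(3))

omega = sp.sqrt(m**2 * c**4 + c**2 * (p[0]**2 + p[1]**2 + p[2]**2))
H0 = omega
P0 = list(p)
K0 = [r[i] * omega / c**2 - t * p[i] for i in range(3)]      # boost generator, spin term omitted
J0 = [r[1] * p[2] - r[2] * p[1],                             # orbital angular momentum
      r[2] * p[0] - r[0] * p[2],
      r[0] * p[1] - r[1] * p[0]]

for i in range(3):
    # {H, K_i} = -P_i   (classical analogue of [H, K_i] = -i P_i)
    assert sp.simplify(pb(H0, K0[i]) + P0[i]) == 0
    for j in range(3):
        # {P_i, K_j} = -delta_ij H / c^2
        assert sp.simplify(pb(P0[i], K0[j])
                           + sp.KroneckerDelta(i, j) * H0 / c**2) == 0
        # {K_i, K_j} = -eps_ijk J_k / c^2
        assert sp.simplify(pb(K0[i], K0[j])
                           + sum(sp.LeviCivita(i, j, k) * J0[k] for k in range(3)) / c**2) == 0
print("free-particle Poincare relations verified at the Poisson-bracket level")
```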
We assume that the interacting system under consideration is such that \(H\) and \(\mathbf{K}\) can be expanded in powers of \(\frac{1}{m^{2}}\) as
\[\begin{split} H&=Mc^{2}+H^{(0)}+H^{(1)}+...,\\ \mathbf{K}&=\mathbf{K}^{(0)}+\mathbf{K}^{(1)}+...,\end{split} \tag{18}\]
where \(M=\sum_{\nu}m_{\nu}\) and the superscripts refer to the order in powers of \(\frac{1}{m^{2}}\). Under this assumption, interactions can be introduced at each order \(n\) as additional terms \(V^{(n)}\), \(\mathbf{W}^{(n)}\) added to the corresponding non-interacting components \(H_{0}^{(n)}\) and \(\mathbf{K}_{0}^{(n)}\),
\[\begin{split} H^{(n)}&=H_{0}^{(n)}+V^{(n)},\\ \mathbf{K}^{(n)}&=\mathbf{K}_{0}^{(n)}+\mathbf{W}^{(n)},\end{split} \tag{19}\]
with both \(V^{(n)}\) and \(\mathbf{W}^{(n)}\) depending on the dynamical variables of the system, and not explicitly on time [10].
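For the free part, the ordering of Eq. (18) can be made explicit by expanding the single-particle energy \(\omega=\sqrt{m^{2}c^{4}+c^{2}p^{2}}\): the rest energy \(mc^{2}\) is followed by \(H_{0}^{(0)}=p^{2}/2m\) and \(H_{0}^{(1)}=-p^{4}/8m^{3}c^{2}\). A small sympy sketch (ours) reproducing this expansion:

```python
import sympy as sp

m, c, p = sp.symbols('m c p', positive=True)
omega = sp.sqrt(m**2 * c**4 + c**2 * p**2)

# Expand the single-particle energy for p << m*c
expansion = sp.expand(sp.series(omega, p, 0, 7).removeO())
print(expansion)   # m*c**2 + p**2/(2*m) - p**4/(8*c**2*m**3) + p**6/(16*c**4*m**5)

rest   = m * c**2
order0 = p**2 / (2 * m)                 # H_0^(0): nonrelativistic kinetic energy
order1 = -p**4 / (8 * m**3 * c**2)      # H_0^(1): first relativistic correction
order2 = p**6 / (16 * m**5 * c**4)
assert sp.simplify(expansion - (rest + order0 + order1 + order2)) == 0
```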
The commutation relations in Eq. (17) can consequently be expanded in powers of \(\frac{1}{m^{2}}\) and, in principle, solved at each order by means of a "direct integration". It is conjectured that the solutions obtained in this way are the most general ones for systems of the kind considered here for which the expansion exists.
We assume, following [10], that the representation of the relativistic system is chosen so that in the nonrelativistic limit \(\mathbf{W}^{(0)}=0\) and \(\mathbf{K}=M\mathbf{R}-t\mathbf{P}\), with \(\mathbf{J}\) and \(\mathbf{P}\) given in Eqs. (15) and (16).
At the first order of the expansions (18), the constraints on \(H^{(1)}=H_{0}^{(1)}+V^{(1)}\) and \(\mathbf{K}^{(1)}=\mathbf{K}_{0}^{(1)}+\mathbf{W}^{(1)}\) inherited from Poincare algebra are
\[\begin{split}[P_{i},H^{(1)}]&=0,\\ [P_{i},K_{j}^{(1)}]&=-i\delta_{ij}\frac{H^{(0)}}{c^{ 2}},\\ [J_{i},H^{(1)}]&=0,\\ [J_{i},K_{j}^{(1)}]&=i\epsilon_{ijk}K_{k}^{(1)},\\ [H^{(0)},K_{i}^{(1)}]+[H^{(1)},K_{i}^{(0)}]&=0,\\ [K_{i}^{(0)},K_{j}^{(1)}]-[K_{j}^{(0)},K_{i}^{(1)}]&=-i \epsilon_{ijk}\frac{J_{k}}{c^{2}}.\end{split} \tag{20}\]
The solution of the above relations allows one to identify the relativistic correction \(V^{(1)}\) to a phenomenological potential \(V^{(0)}\).
In particular, we focus on the \(\mathbf{P}\)-dependent component of this correction, i.e. the boost correction \(\delta V(\mathbf{P})\), which provides the relationship between descriptions of the system in different reference frames.
An expression for \(\delta V\) up to order \(\frac{1}{m^{2}}\) beyond the non-relativistic limit has been derived by Friar [33].
It is convenient to write it in terms of the normalized canonical Jacobi momenta \(\mathbf{\pi}_{a,b}\) and positions \(\mathbf{\rho}_{a,b}\), which are related to the physical variables \(\mathbf{p}_{1,2,3}\), \(\mathbf{r}_{1,2,3}\) through the change of coordinates
\[\begin{cases}\mathbf{\pi}_{\mathbf{a}}&=\frac{\mathbf{p}_{1}-\mathbf{p}_{2}}{2},\\ \mathbf{\rho}_{\mathbf{a}}&=\mathbf{r}_{1}-\mathbf{r}_{2};\end{cases}\quad\begin{cases}\mathbf{\pi}_ {\mathbf{b}}&=\frac{2}{3}\bigg{[}\mathbf{p}_{\mathbf{3}}-\frac{\mathbf{p}_{\mathbf{1}}+\mathbf{p}_{2}} {2}\bigg{]},\\ \mathbf{\rho}_{\mathbf{b}}&=\mathbf{r}_{\mathbf{3}}-\frac{\mathbf{r}_{1}+\mathbf{r}_{2}}{2};\end{cases} \tag{21}\]
\[\begin{cases}\mathbf{R}&=\frac{\mathbf{r}_{1}+\mathbf{r}_{2}+\mathbf{r}_{3}}{3},\\ \mathbf{P}&=\mathbf{p}_{1}+\mathbf{p}_{2}+\mathbf{p}_{3}.\end{cases}\]
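As a sanity check (ours, not part of the original text), one can verify that the pairs \((\mathbf{\rho_{a}},\mathbf{\pi_{a}})\), \((\mathbf{\rho_{b}},\mathbf{\pi_{b}})\) and \((\mathbf{R},\mathbf{P})\) defined in Eq. (21) are indeed canonically conjugate, i.e. they have unit Poisson brackets within each pair and vanishing cross brackets. It suffices to check a single Cartesian component:

```python
import sympy as sp

r1, r2, r3, p1, p2, p3 = sp.symbols('r1 r2 r3 p1 p2 p3', real=True)
rr, pp = (r1, r2, r3), (p1, p2, p3)

def pb(A, B):
    """Poisson bracket in the single-particle variables (r_nu, p_nu)."""
    return sum(sp.diff(A, rr[i]) * sp.diff(B, pp[i])
               - sp.diff(A, pp[i]) * sp.diff(B, rr[i]) for i in range(3))

# One Cartesian component of the Jacobi coordinates of Eq. (21)
pi_a, rho_a = (p1 - p2) / 2, r1 - r2
pi_b, rho_b = sp.Rational(2, 3) * (p3 - (p1 + p2) / 2), r3 - (r1 + r2) / 2
P, R = p1 + p2 + p3, (r1 + r2 + r3) / 3

coords    = {'rho_a': rho_a, 'rho_b': rho_b, 'R': R}
momenta   = {'pi_a': pi_a, 'pi_b': pi_b, 'P': P}
conjugate = {('rho_a', 'pi_a'), ('rho_b', 'pi_b'), ('R', 'P')}

for qname, q in coords.items():
    for kname, k in momenta.items():
        expected = 1 if (qname, kname) in conjugate else 0
        assert sp.simplify(pb(q, k) - expected) == 0
print("the Jacobi coordinates of Eq. (21) are canonically conjugate")
```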
Thus, for a system of three particles, \(\delta V\) reads
\[\delta V(\mathbf{P})=-\frac{P^{2}V^{(0)}}{2(3m)^{2}}-i[\chi_{v},H_{0}]-i[\chi_{0},V^{(0)}], \tag{22}\]
where
\[\chi_{v}(\mathbf{P})= -\frac{1}{6m}\int_{0}^{\mathbf{P}}\mathbf{w}\cdot\mathbf{d}\mathbf{P}+H.c., \tag{23}\]
\[\chi_{0}(\mathbf{P})= -\frac{1}{4(3m)^{2}}\Bigg{[}\bigg{(}\mathbf{\rho}_{\mathbf{a}}\cdot\mathbf{P }\mathbf{\pi}_{\mathbf{a}}\cdot\mathbf{P}\] \[+\mathbf{\rho}_{\mathbf{b}}\cdot\mathbf{P}\mathbf{\pi}_{\mathbf{b}}\cdot\mathbf{P}\bigg{)} +H.c.\Bigg{]}\] \[+\frac{1}{12m^{2}}\Bigg{[}\bigg{(}\mathbf{\rho}_{\mathbf{a}}\cdot\mathbf{P} \mathbf{\pi}_{\mathbf{a}}\cdot\mathbf{\pi}_{\mathbf{b}}-\frac{1}{2}\mathbf{\rho}_{\mathbf{b}}\cdot\mathbf{ P}\mathbf{\pi_{\mathbf{b}}}^{2}\] \[+\frac{2}{3}\mathbf{\rho}_{\mathbf{b}}\cdot\mathbf{P}\mathbf{\pi_{\mathbf{a}}}^{2} \bigg{)}+H.c.\Bigg{]}\] \[-\frac{1}{6m^{2}}\bigg{[}(\mathbf{s}_{1}-\mathbf{s}_{2})\wedge\mathbf{P} \cdot\mathbf{\pi}_{\mathbf{a}}\] \[+(\mathbf{s}_{3}-\frac{\mathbf{s}_{1}+\mathbf{s}_{2}}{2})\wedge\mathbf{P}\cdot\bm {\pi}_{\mathbf{b}}\bigg{]}. \tag{24}\]
We identify \(V^{(0)}\) in Eq. (22) with the leading order 3N contact interaction parameterized by the LEC \(E_{0}\) in Eq. (1).
The vector \(\mathbf{w}\) in Eq. (23) is a translationally invariant function of \(\mathbf{P}\) and \(\mathbf{\rho}_{\mathbf{a},\mathbf{b}}\), \(\mathbf{\pi}_{\mathbf{a},\mathbf{b}}\). It satisfies \(\nabla_{\mathbf{P}}\times\mathbf{w}=0\), making the integral in (23) independent of the path. As a minimal choice, we set \(\mathbf{w}=0\), so that \(\chi_{v}(\mathbf{P})=0\), as in Refs. [11, 12, 34, 35]; this corresponds to assuming the existence of an appropriate unitary transformation absorbing \(\mathbf{w}\) [14].
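The role of the condition \(\nabla_{\mathbf{P}}\times\mathbf{w}=0\) can be illustrated with a toy example (ours, using a hypothetical curl-free field \(\mathbf{w}=\nabla_{\mathbf{P}}f\)): the line integral in Eq. (23) then depends only on the endpoints, as the following sympy sketch shows by comparing two different paths from \(0\) to \(\mathbf{P}\).

```python
import sympy as sp

P1, P2, P3, t = sp.symbols('P1 P2 P3 t', real=True)

# Hypothetical curl-free field: w = grad_P f for a toy scalar f(P)
f = (P1 + 2*P2 - P3)**2
w = [sp.diff(f, v) for v in (P1, P2, P3)]

# curl_P w = 0 by construction
curl = [sp.diff(w[2], P2) - sp.diff(w[1], P3),
        sp.diff(w[0], P3) - sp.diff(w[2], P1),
        sp.diff(w[1], P1) - sp.diff(w[0], P2)]
assert all(sp.simplify(comp) == 0 for comp in curl)

def line_integral(path):
    """Integral of w . dP along path(t), t in [0, 1]."""
    subs = {P1: path[0], P2: path[1], P3: path[2]}
    integrand = sum(w[i].subs(subs) * sp.diff(path[i], t) for i in range(3))
    return sp.integrate(sp.expand(integrand), (t, 0, 1))

straight = [t * P1, t * P2, t * P3]            # straight path from 0 to P
bent     = [t**2 * P1, t * P2, t**3 * P3]      # a different path with the same endpoints

I1, I2 = line_integral(straight), line_integral(bent)
assert sp.simplify(I1 - I2) == 0               # same value on both paths
assert sp.simplify(I1 - f) == 0                # equals f(P) - f(0) for w = grad_P f
print("integral of a curl-free w is path independent")
```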
We evaluate \(\delta V\) between 3N states \(\Psi\) and \(\Psi^{\prime}\) as \(\langle\Psi^{\prime}|\,\delta V(\mathbf{P})\,|\Psi\rangle\). The details of the calculation can be found in Appendix C.
The result, expressed in terms of variables \(\mathbf{k_{i}}=\mathbf{p_{i}}-\mathbf{p_{i}^{\prime}}\) and \(\mathbf{Q_{i}}=\mathbf{p_{i}}+\mathbf{p_{i}^{\prime}}\), with \(i=1,2,3\), is
\[\begin{split}\delta V=&-\frac{E_{0}}{6m^{2}}\Bigg{[} \mathbf{P^{2}}+\frac{i}{4}\mathbf{P}\times(\mathbf{k_{1}}-\mathbf{k_{2}})\cdot(\mathbf{\sigma_{1}} -\mathbf{\sigma_{2}})\\ &+\frac{i}{3}\mathbf{P}\times\bigg{(}\mathbf{k_{3}}-\frac{\mathbf{k_{1}}+\bm {k_{2}}}{2}\bigg{)}\cdot\bigg{(}\mathbf{\sigma_{3}}-\frac{\mathbf{\sigma_{1}}+\mathbf{ \sigma_{2}}}{2}\bigg{)}\Bigg{]}.\end{split} \tag{25}\]
When written in the basis of the 146 3N subleading contact operators \(o_{i}\), \(i=1,\ldots,146\), Eq. (25) reads
\[\delta V=-\frac{E_{0}}{48m^{2}}(-o_{127}-2o_{1}+2o_{75}-2o_{79}). \tag{26}\]
However, when we use the basis of the 18 independent 3N subleading contact operators \(O_{1-13}\), \(O_{1-5}^{*}\), the same result can be written as
\[\delta V=\frac{E_{0}}{24m^{2}}\left(\frac{2}{3}O_{1}^{*}+O_{2}^{*}\right). \tag{27}\]
Since it is completely characterized by the low energy constant \(E_{0}\), we recognize \(\delta V(\mathbf{P})\) as the boost correction of interest, in perfect agreement with Eq. (12).
## 4 Conclusions
In this paper we derived the relativistic corrections to the leading order 3N contact potential in an arbitrary frame using two different approaches. The first approach, developed within the framework of field theory, involves identifying the operators that contribute to the covariant Lagrangian at zeroth order in the low-energy expansion and then performing the relativistic reduction;
the result is represented by Eqs. (5) and (12) or, equivalently, by the constraints on the LECs \(E_{i}^{*}\) parametrizing the \(\mathbf{P}\)-dependent interactions, Eqs. (9)-(11). The second approach, formulated within the context of relativistic quantum mechanics, is based directly on the fundamental principles of covariance and on the constraints that arise from them through the Poincare algebra; the result is Eq. (27). In both cases it is evident that the \(\mathbf{P}\)-dependent boost correction is entirely determined by the interaction at leading order. The expression for the boost correction is given by
\[\delta V(\mathbf{P})=\frac{E_{0}}{24m^{2}}\left(\frac{2}{3}O_{1}^{*}+O_{2}^{*} \right), \tag{28}\]
indicating overall agreement between the two approaches.
The consistency between the two approaches supports the validity of the minimal ansatz \(\mathbf{w}=0\) made in Eq. (22). Hence, it is possible to assume the existence of an appropriate unitary transformation absorbing \(\mathbf{w}\) [14]. A similar assumption has been made in the 2N case in Refs. [31] and [13]; nevertheless, the validity of its extension to systems composed of \(N>2\) nucleons is not straightforward [11].
Furthermore, the results obtained in Eqs. (104)-(105) provide a non-trivial check of the Fierz relations. In fact, if there had been an error in the Fierz identities, as in Ref. [16] where only 14 nonrelativistic operators were found, the relativistic corrections would have had different expressions for each operator, thus precluding the unambiguous determination of the boost correction.
It is conjectured that the solutions obtained are the most general ones for systems of the kind considered here for which the expansion exists.
Any relativistic description of a system of interacting particles of finite mass and spin, whether exact or approximate, should fall within this framework.
## Appendix A Non-relativistic expansions
Here we give the non-relativistic expansions of the operators \(\tilde{O}_{i}\) defined in Table 2 in terms of the 146 subleading 3N contact operators \(o_{i}\) and of the six leading order operators \(O_{i}^{(0)}\) listed in Ref. [16].
\[\tilde{O}_{1} =O_{1}^{(0)}-\frac{1}{4m^{2}}\left(3o_{75}-\frac{3}{2}o_{127} \right),\] (A1) \[\tilde{O}_{2} =O_{3}^{(0)}-\frac{1}{4m^{2}}\left(o_{76}+o_{77}+o_{78}-o_{128}- \frac{1}{2}o_{129}\right),\] (A2) \[\tilde{O}_{3} =1+\frac{1}{4m^{2}}\left(o_{1}-o_{75}+o_{79}+\frac{1}{2}o_{127} \right),\] (A3) \[\tilde{O}_{4} =O_{3}^{(0)}+\frac{1}{4m^{2}}\left(o_{3}+\frac{1}{2}o_{34}-\frac {1}{2}o_{35}-o_{76}-o_{77}+o_{78}+o_{81}+\frac{1}{2}o_{128}\right),\] (A4) \[\tilde{O}_{5} =O_{3}^{(0)}+\frac{1}{4m^{2}}\left(o_{2}-o_{78}+o_{80}+\frac{1}{ 2}o_{129}\right),\] (A5) \[\tilde{O}_{6} =O_{3}^{(0)}+\frac{1}{4m^{2}}\left(o_{3}-\frac{1}{2}o_{34}+\frac {1}{2}o_{35}-o_{78}+o_{82}+\frac{1}{2}o_{128}\right),\] (A6) \[\tilde{O}_{7} =O_{1}^{(0)}+\frac{1}{4m^{2}}\left(o_{1}-2o_{33}+o_{39}-o_{42}+o _{75}+2o_{79}+\frac{1}{2}o_{127}\right),\] (A7) \[\tilde{O}_{8} =O_{3}^{(0)}+\frac{1}{4m^{2}}\left(o_{3}-\frac{1}{2}o_{34}-\frac {3}{2}o_{35}+o_{41}-o_{44}+o_{78}+o_{81}+o_{82}+\frac{1}{2}o_{128}\right),\] (A8) \[\tilde{O}_{9} =O_{3}^{(0)}+\frac{1}{4m^{2}}\left(o_{2}-o_{34}-o_{35}+o_{40}-o_{ 43}+o_{76}+o_{77}-o_{78}+2o_{80}+\frac{1}{2}o_{129}\right),\] (A9) \[\tilde{O}_{10} =-O_{1}^{(0)}-\frac{1}{4m^{2}}\left(2o_{1}+o_{75}+2o_{79}-\frac{ 1}{2}o_{127}\right),\] (A10) \[\tilde{O}_{11} =-O_{3}^{(0)}-\frac{1}{4m^{2}}\left(2o_{3}+o_{78}+o_{81}+o_{82}- \frac{1}{2}o_{129}\right),\] (A11) \[\tilde{O}_{12} =-O_{3}^{(0)}-\frac{1}{4m^{2}}\left(o_{2}+o_{3}-\frac{1}{2}o_{35} +o_{76}+o_{77}-o_{78}+o_{80}+o_{82}-\frac{1}{2}o_{128}\right),\] (A12) \[\tilde{O}_{13} =-O_{3}^{(0)}-\frac{1}{4m^{2}}\left(o_{2}+o_{3}+\frac{1}{2}o_{34} -\frac{1}{2}o_{35}+o_{78}+o_{80}+o_{81}-\frac{1}{2}o_{128}\right),\] (A13) \[\tilde{O}_{14} =-O_{1}^{(0)}-\frac{1}{4m^{2}}\left(2o_{1}+o_{75}+2o_{79}-\frac{ 1}{2}o_{127}\right),\] (A14) \[\tilde{O}_{15} =-O_{3}^{(0)}-\frac{1}{4m^{2}}\left(2o_{3}+o_{78}+o_{81}+o_{82}- \frac{1}{2}o_{129}\right),\] (A15) \[\tilde{O}_{16} =-O_{3}^{(0)}-\frac{1}{4m^{2}}\left(2o_{2}+o_{76}+o_{77}-o_{78}+2 o_{80}-o_{128}+\frac{1}{2}o_{129}\right),\] (A16) \[\tilde{O}_{17} =-O_{1}^{(0)}-\frac{1}{4m^{2}}\left(2o_{1}+o_{75}+2o_{79}-\frac{ 1}{2}o_{127}\right),\] (A17) \[\tilde{O}_{18} =-O_{3}^{(0)}-\frac{1}{4m^{2}}\left(o_{2}+o_{3}+\frac{1}{2}o_{34} -\frac{1}{2}o_{35}+o_{78}+o_{80}+o_{81}-\frac{1}{2}o_{128}\right),\] (A18) \[\tilde{O}_{19} =-O_{3}^{(0)}-\frac{1}{4m^{2}}\left(2o_{3}-o_{34}+o_{35}+o_{76}+o_ {77}-o_{78}+2o_{82}-\frac{1}{2}o_{129}\right),\] (A19) \[\tilde{O}_{20} =-O_{2}^{(0)}-\frac{1}{4m^{2}}\left(o_{4}-\frac{1}{2}o_{36}-\frac {1}{2}o_{39}-\frac{1}{2}o_{45}-\frac{1}{2}o_{49}-o_{79}-o_{83}-o_{115}+o_{130} +\frac{1}{2}o_{134}-o_{137}\right),\] (A20) \[\tilde{O}_{21} =-O_{5}^{(0)}-\frac{1}{4m^{2}}\left(o_{6}-\frac{1}{2}o_{38}-\frac {1}{2}o_{41}-\frac{1}{2}o_{48}-\frac{1}{2}o_{50}-\frac{1}{2}o_{81}-\frac{1}{2} o_{82}-\frac{1}{2}o_{84}-\frac{1}{2}o_{86}+\right.\] (A21) \[\left.-\frac{1}{2}o_{116}-\frac{1}{2}o_{117}+\frac{1}{2}o_{132} +\frac{1}{2}o_{133}+\frac{1}{2}o_{135}-\frac{1}{2}o_{139}-\frac{1}{2}o_{140} \right),\] \[\tilde{O}_{22} =-O_{4}^{(0)}-\frac{1}{4m^{2}}\left(o_{5}-\frac{1}{2}o_{37}-\frac {1}{2}o_{40}-\frac{1}{2}o_{47}-\frac{1}{2}o_{51}-o_{80}-o_{85}-o_{118}+o_{131} +\frac{1}{2}o_{136}-o_{138}\right),\] (A22) \[\tilde{O}_{23} =2O_{2}^{(0)}+\frac{1}{4m^{2}}\left(2o_{13}-2o_{17}+2o_{21}+o_{36} -o_{39}-o_{42}+o_{45}-o_{49}+o_{53}+2o_{83}-2o_{115}+o_{130}\right),\] (A23) \[\tilde{O}_{24} =2O_{5}^{(0)}+\frac{1}{4m^{2}}\bigg{(}2o_{14}-2o_{202}+2o_{24}+o _{38}-o_{41}-o_{44}+o_{48}-o_{50}+o_{54}-o_{81}+o_{82}-o_{84}+\] 
(A24) \[\left.+2o_{85}+o_{86}-o_{116}-o_{117}+o_{133}+o_{139}-o_{140} \right),\]
\[\tilde{O}_{25} =2O_{4}^{(0)}+\frac{1}{4m^{2}}\left(2o_{15}-2o_{19}+2o_{23}+o_{37}-o_{ 40}-o_{43}+o_{47}-o_{51}+o_{55}+2o_{86}-2o_{118}+o_{131}\right),\] (A25) \[\tilde{O}_{26} =2O_{5}^{(0)}+\frac{1}{4m^{2}}\bigg{(}2o_{16}-2o_{18}+2o_{22}+o_{ 38}-o_{41}-o_{44}+o_{46}-o_{52}+o_{56}+o_{81}-o_{82}+3o_{84}+\] (A26) \[\quad-o_{86}-o_{116}-o_{117}+o_{132}-o_{139}+o_{140}\bigg{)},\] \[\tilde{O}_{27} =2O_{2}^{(0)}-\frac{1}{4m^{2}}\left(2o_{7}-2o_{10}-o_{36}+o_{39}- o_{45}+o_{49}-2o_{75}+2o_{115}-o_{134}-2o_{137}\right),\] (A27) \[\tilde{O}_{28} =2O_{5}^{(0)}-\frac{1}{4m^{2}}\bigg{(}2o_{9}-2o_{12}-o_{38}+o_{4 1}-o_{48}+o_{50}-o_{54}+o_{56}-2o_{77}+o_{81}-o_{82}+o_{84}+\] (A28) \[\quad-o_{86}+o_{116}+o_{117}-o_{135}-o_{139}-o_{140}\bigg{)},\] \[\tilde{O}_{29} =2O_{4}^{(0)}-\frac{1}{4m^{2}}\left(2o_{8}-2o_{11}-o_{37}+o_{40} -o_{47}+o_{51}-2o_{76}+2o_{118}-o_{136}-2o_{138}\right),\] (A29) \[\tilde{O}_{30} =2O_{5}^{(0)}-\frac{1}{4m^{2}}\bigg{(}2o_{9}-2o_{12}-o_{38}+o_{4 1}-o_{46}+o_{52}+o_{54}-o_{56}-2o_{78}-o_{81}+o_{82}-o_{84}+\] (A30) \[\quad+o_{86}+o_{116}+o_{117}-o_{135}-o_{139}-o_{140}\bigg{)},\] \[\tilde{O}_{31} =-2O_{2}^{(0)}-\frac{1}{4m^{2}}\left(2o_{4}+o_{36}-o_{39}+o_{45}- o_{49}-2o_{79}-2o_{83}-2o_{115}+2o_{130}+o_{134}-2o_{137}\right),\] (A31) \[\tilde{O}_{32} =-2O_{5}^{(0)}-\frac{1}{4m^{2}}\bigg{(}2o_{6}+o_{38}-o_{41}+o_{48 }-o_{50}-o_{81}-o_{82}-o_{84}-o_{86}-o_{116}-o_{117}+o_{132}+\] (A32) \[\quad+o_{133}+o_{135}-o_{139}-o_{140}\bigg{)},\] \[\tilde{O}_{33} =-2O_{4}^{(0)}-\frac{1}{4m^{2}}\bigg{(}2o_{5}+o_{37}-o_{40}+o_{47 }-o_{51}-2o_{80}-2o_{85}-2o_{118}+2o_{131}+o_{136}-2o_{138}\bigg{)},\] (A33) \[\tilde{O}_{34} =-2O_{5}^{(0)}-\frac{1}{4m^{2}}\bigg{(}2o_{6}+o_{38}-o_{41}+o_{46 }-o_{52}-o_{81}-o_{82}-o_{84}-o_{86}-o_{116}-o_{117}+\] (A34) \[\quad+o_{132}+o_{133}+o_{135}-o_{139}-o_{140}\bigg{)},\] \[\tilde{O}_{35} =2O_{2}^{(0)}-\frac{1}{4m^{2}}\bigg{(}2o_{7}-2o_{10}+2o_{233}-o_ {36}-o_{39}+2o_{42}-o_{45}-o_{49}+2o_{53}-4o_{75}-2o_{79}+\] (A35) \[\quad-2o_{83}+2o_{115}-o_{134}-2o_{137}\bigg{)},\] \[\tilde{O}_{36} =2O_{5}^{(0)}-\frac{1}{4m^{2}}\bigg{(}2o_{9}-2o_{12}+2o_{35}-o_{3 8}-o_{41}+2o_{44}-o_{48}-o_{50}+o_{54}+o_{56}-2o_{77}-2o_{78}+\] (A36) \[\quad-o_{81}-o_{82}-o_{84}-o_{86}+o_{116}+o_{117}-o_{135}-o_{139} -o_{140}\bigg{)},\] \[\tilde{O}_{37} =2O_{4}^{(0)}-\frac{1}{4m^{2}}\bigg{(}2o_{8}-2o_{11}+2o_{34}-o_{3 7}-o_{40}+2o_{43}-o_{47}-o_{51}+2o_{55}-4o_{76}-2o_{80}+\] (A37) \[\quad-2o_{85}+2o_{118}-o_{136}-2o_{138}\bigg{)},\] \[\tilde{O}_{38} =O_{1}^{(0)}+\frac{1}{4m^{2}}\left(2o_{1}-2o_{33}+o_{39}-o_{42}+3 o_{75}+3o_{79}-\frac{1}{2}o_{127}\right),\] (A38) \[\tilde{O}_{39} =O_{3}^{(0)}+\frac{1}{4m^{2}}\left(o_{2}+o_{3}-\frac{1}{2}o_{34}- \frac{3}{2}o_{35}+o_{40}-o_{43}+o_{76}+o_{77}+o_{78}+2o_{80}+o_{81}-\frac{1}{2} o_{128}\right),\] (A39) \[\tilde{O}_{40} =O_{3}^{(0)}+\frac{1}{4m^{2}}\left(o_{2}+o_{3}-\frac{1}{2}o_{34}- \frac{3}{2}o_{35}+o_{41}-o_{44}+o_{76}+o_{77}+o_{78}+o_{80}+o_{81}+o_{82}-\frac {1}{2}o_{128}\right),\] (A40) \[\tilde{O}_{41} =O_{3}^{(0)}+\frac{1}{4m^{2}}\left(2o_{3}-o_{34}-o_{35}+o_{41}-o_{4 4}+o_{76}+o_{77}+o_{78}+o_{81}+2o_{82}-\frac{1}{2}o_{129}\right),\] (A41) \[\tilde{O}_{42} =-O_{1}^{(0)}-\frac{1}{4m^{2}}\left(3o_{1}+3o_{75}+3o_{79}-\frac{3} {2}o_{127}\right),\] (A42)
\[\tilde{Q}_{43} =-O_{3}^{(0)}-\frac{1}{4m^{2}}\left(o_{2}+2o_{3}+o\tau_{6}+o\tau_{77}+ \sigma_{78}+o_{80}+o_{81}+o_{82}-o_{128}-\frac{1}{2}o_{129}\right),\] (A43) \[\tilde{Q}_{44} =-O_{3}^{(0)}-\frac{1}{4m^{2}}\left(o_{2}+2o_{3}+o\tau_{6}+o\tau_{ 77}+\sigma_{78}+o_{80}+o_{81}+o_{82}-o_{128}-\frac{1}{2}o_{129}\right),\] (A44) \[\tilde{Q}_{45} =-O_{3}^{(0)}-\frac{1}{4m^{2}}\left(o_{2}+2o_{3}+o\tau_{6}+o\tau_{ 77}+\sigma_{78}+o_{80}+o_{81}+o_{82}-o_{128}-\frac{1}{2}o_{129}\right),\] (A45) \[\tilde{Q}_{46} =-O_{1}^{(0)}-\frac{1}{4m^{2}}\left(3o_{1}+3o\tau_{5}+3o\tau_{9} -\frac{3}{2}o_{127}\right),\] (A46) \[\tilde{Q}_{47} =-O_{3}^{(0)}-\frac{1}{4m^{2}}\left(2o_{2}+o_{3}+\frac{1}{2}o_{3 4}-\frac{1}{2}o_{35}+o\tau_{6}+o\tau_{77}+\sigma_{78}+2o_{80}+o_{81}-\frac{3}{ 2}o_{128}\right),\] (A47) \[\tilde{Q}_{48} =-O_{3}^{(0)}-\frac{1}{4m^{2}}\left(3o_{3}-\frac{1}{2}o_{34}+ \frac{1}{2}o_{35}+o\tau_{6}+o\tau_{77}+\sigma_{78}+o_{81}+2o_{82}-\frac{1}{2} o_{128}-o_{129}\right),\] (A48) \[\tilde{Q}_{49} =-O_{3}^{(0)}-\frac{1}{4m^{2}}\left(o_{2}+2o_{3}+o\tau_{6}+o\tau _{77}+\sigma_{78}+o_{80}+o_{81}+o_{82}-o_{128}-\frac{1}{2}o_{129}\right),\] (A49) \[\tilde{Q}_{50} =-O_{2}^{(0)}-\frac{1}{4m^{2}}\bigg{(}o_{4}+o_{21}-\frac{1}{2}o_ {36}-\frac{1}{2}o_{39}+\frac{1}{2}o_{42}-\frac{1}{2}o_{45}-\frac{1}{2}o_{49}- \frac{1}{2}o_{53}-o\tau_{9}-o_{83}+\] (A50) \[\quad+o_{115}+o_{119}+\frac{1}{2}o_{130}-o_{137}\bigg{)},\] \[\tilde{Q}_{51} =-O_{5}^{(0)}-\frac{1}{4m^{2}}\bigg{(}o_{6}+o_{22}-\frac{1}{2}o_{ 38}-\frac{1}{2}o_{41}+\frac{1}{2}o_{44}-\frac{1}{2}o_{48}-\frac{1}{2}o_{50}- \frac{1}{2}o_{56}-\frac{1}{2}o_{81}-\frac{1}{2}o_{82}+\] (A51) \[\quad-\frac{1}{2}o_{84}-\frac{1}{2}o_{86}+\frac{1}{2}o_{116}+ \frac{1}{2}o_{117}+o_{120}+\frac{1}{2}o_{132}-\frac{1}{2}o_{139}-\frac{1}{2}o _{140}\bigg{)},\] \[\tilde{Q}_{52} =-O_{4}^{(0)}-\frac{1}{4m^{2}}\bigg{(}o_{5}+o_{23}-\frac{1}{2}o_ {97}-\frac{1}{2}o_{40}+\frac{1}{2}o_{43}-\frac{1}{2}o_{47}-\frac{1}{2}o_{51}- \frac{1}{2}o_{55}-o_{80}-o_{85}+\] (A52) \[\quad+o_{118}+o_{121}+\frac{1}{2}o_{131}-o_{138}\bigg{)},\] \[\tilde{Q}_{53} =-O_{5}^{(0)}-\frac{1}{4m^{2}}\bigg{(}o_{6}+o_{24}-\frac{1}{2}o_ {38}-\frac{1}{2}o_{41}+\frac{1}{2}o_{44}-\frac{1}{2}o_{46}-\frac{1}{2}o_{52}- \frac{1}{2}o_{54}-\frac{1}{2}o_{81}-\frac{1}{2}o_{82}+\] (A53) \[\quad-\frac{1}{2}o_{84}-\frac{1}{2}o_{86}+\frac{1}{2}o_{116}+ \frac{1}{2}o_{117}+o_{122}+\frac{1}{2}o_{133}-\frac{1}{2}o_{139}-\frac{1}{2}o _{140}\bigg{)},\] \[\tilde{Q}_{54} =2O_{2}^{(0)}+\frac{1}{4m^{2}}\bigg{(}+2o_{13}-2o_{17}+2o_{21}+o_ {36}-o_{39}-o_{42}+o_{45}+o_{49}-5o_{53}+2o_{83}+2o_{91}+\] (A54) \[\quad+2o_{99}+2o_{115}+2o_{119}+o_{130}\bigg{)},\] \[\tilde{Q}_{55} =2O_{5}^{(0)}+\frac{1}{4m^{2}}\bigg{(}2o_{14}-2o_{20}+2o_{24}+o_{ 38}-o_{41}-o_{44}+o_{48}-o_{50}+2o_{51}-o_{54}-2o_{55}+\] (A55) \[\quad-2o_{56}-o_{81}+o_{82}-o_{84}+2o_{85}+o_{86}+2o_{92}+2o_{10 2}+o_{116}+o_{117}+2o_{122}+o_{133}+o_{139}-o_{140}\bigg{)},\] \[\tilde{Q}_{56} =2O_{4}^{(0)}+\frac{1}{4m^{2}}\bigg{(}2o_{15}-2o_{19}+2o_{23}+o_ {37}-o_{40}-o_{43}+o_{47}-o_{51}+2o_{52}-3o_{55}-2o_{56}+\] (A56) \[\quad+2o_{86}+2o_{93}+2o_{101}+2o_{118}+2o_{121}+o_{131}\bigg{)},\] \[\tilde{Q}_{57} =2O_{5}^{(0)}+\frac{1}{4m^{2}}\bigg{(}2o_{16}-2o_{18}+2o_{22}+o_{ 38}-o_{41}-o_{44}+o_{46}+2o_{50}-o_{52}-4o_{54}-o_{56}+\] (A57) \[\quad+o_{81}-o_{82}+3o_{84}-o_{86}+2o_{94}+2o_{100}+o_{116}+o_{117} +2o_{120}+o_{132}-o_{139}+o_{140}\bigg{)},\] \[\tilde{Q}_{58} =-2O_{2}^{(0)}-\frac{1}{4m^{2}}\left(o_{13}-2o_{17}+4o_{21}+o_{36}- o_{39}+o_{45}-o_{49}+2o_{83}+2o_{115}+2o_{119}-o_{134}\right),\] 
(A58) \[\tilde{Q}_{59} =-2O_{5}^{(0)}-\frac{1}{4m^{2}}\bigg{(}2o_{14}-2o_{20}+2o_{22}+o_ {24}+o_{38}-o_{41}+o_{48}-o_{50}+o_{54}-o_{56}-o_{81}+\] (A59) \[\quad+o_{82}-o_{84}+2o_{85}+o_{86}+o_{116}+o_{117}+2o_{120}-o_{135 }+o_{139}-o_{140}\bigg{)},\]
\[\tilde{O}_{60} =-2O_{4}^{(0)}-\frac{1}{4m^{2}}\left(2o_{15}-2o_{19}+4o_{23}+o_{37}-o_{40 }+o_{47}-o_{51}+2o_{86}+2o_{118}+2o_{121}-o_{136}\right),\] (A60) \[\tilde{O}_{61} =-2O_{5}^{(0)}-\frac{1}{4m^{2}}\left(2o_{16}-2o_{18}+2o_{22}+2o_{24 }+o_{38}-o_{41}+o_{46}-o_{52}-o_{54}+o_{56}+o_{81}+\right.\] (A61) \[\left.-o_{82}+3o_{84}-o_{86}+o_{116}+o_{117}+2o_{122}-o_{135}-o_{1 39}+o_{140}\right)\!,\] \[\tilde{O}_{62} =-2O_{2}^{(0)}+\frac{1}{4m^{2}}\bigg{(}2o_{7}-2o_{10}-2o_{21}-o_{ 36}+o_{39}-o_{42}-o_{45}+o_{49}+o_{53}-2o_{75}-2o_{115}+\] (A62) \[\left.-2o_{119}+o_{130}-2o_{137}\right)\!,\] \[\tilde{O}_{63} =-2O_{5}^{(0)}+\frac{1}{4m^{2}}\bigg{(}2o_{9}-2o_{12}-2o_{22}-o_{ 38}+o_{41}-o_{44}-o_{48}+o_{50}-o_{54}+2o_{56}-2o_{77}+\] (A63) \[\left.+o_{81}-o_{82}+o_{84}-o_{86}-o_{116}-o_{117}-2o_{120}+o_{1 33}-o_{139}-o_{140}\right)\!,\] \[\tilde{O}_{64} =-2O_{4}^{(0)}+\frac{1}{4m^{2}}\bigg{(}2o_{8}-2o_{11}-2o_{23}-o_ {37}+o_{40}-o_{43}-o_{47}+o_{51}+o_{55}-2o_{76}-2o_{118}+\] (A64) \[\left.-2o_{121}+o_{131}-2o_{138}\right)\!,\] \[\tilde{O}_{65} =-2O_{5}^{(0)}+\frac{1}{4m^{2}}\bigg{(}2o_{9}-2o_{12}-2o_{24}-o_{ 38}+o_{41}-o_{44}-o_{46}+o_{52}+2o_{54}-o_{56}-2o_{78}+\] (A65) \[\left.-o_{81}+o_{82}-o_{84}+o_{86}-o_{116}-o_{117}-2o_{122}+o_{1 32}-o_{139}-o_{140}\right)\!,\] \[\tilde{O}_{66} =2O_{2}^{(0)}+\frac{1}{4m^{2}}\bigg{(}2o_{4}+2o_{21}+o_{36}-o_{39} +o_{42}+o_{45}-o_{49}-o_{53}-2o_{79}-2o_{83}+2o_{115}+\] (A66) \[\left.+2o_{119}+o_{130}-2o_{137}\right)\!,\] \[\tilde{O}_{67} =2O_{5}^{(0)}+\frac{1}{4m^{2}}\bigg{(}2o_{6}+2o_{22}+o_{38}-o_{41} +o_{44}+o_{48}-o_{50}-o_{56}-o_{81}-o_{82}-o_{84}+\] (A67) \[\left.-o_{86}+o_{116}+o_{117}+2o_{120}+o_{132}-o_{139}-o_{140} \right)\!,\] \[\tilde{O}_{68} =2O_{4}^{(0)}+\frac{1}{4m^{2}}\bigg{(}2o_{5}+2o_{23}+o_{37}-o_{40 }+o_{43}+o_{47}-o_{51}-o_{55}-2o_{80}-2o_{85}+2o_{118}+\] (A68) \[\left.+2o_{121}+o_{131}-2o_{138}\right)\!,\] \[\tilde{O}_{69} =2O_{5}^{(0)}+\frac{1}{4m^{2}}\bigg{(}2o_{6}+2o_{24}+o_{38}-o_{41} +o_{44}+o_{46}-o_{52}-o_{54}-o_{81}-o_{82}-o_{84}-o_{86}+\] (A69) \[\left.+o_{116}+o_{117}+2o_{122}+o_{133}-o_{139}-o_{140}\right)\!,\] \[\tilde{O}_{70} =-2O_{2}^{(0)}-\frac{1}{4m^{2}}\left(2o_{13}-2o_{17}+4o_{21}+o_{3 6}-o_{39}+o_{45}-o_{49}+2o_{83}+2o_{115}+2o_{119}-o_{134}\right),\] (A70) \[\tilde{O}_{71} =-2O_{5}^{(0)}-\frac{1}{4m^{2}}\bigg{(}2o_{14}-2o_{20}+4o_{24}+o_{ 38}-o_{41}+o_{48}-o_{50}-o_{81}+o_{82}-o_{84}+2o_{85}+\] (A71) \[\left.+o_{86}+o_{116}+o_{117}+2o_{122}-o_{132}+o_{133}-o_{135}+o_{ 139}-o_{140}\right)\!,\] \[\tilde{O}_{72} =-2O_{4}^{(0)}-\frac{1}{4m^{2}}\left(2o_{15}-2o_{19}+4o_{23}+o_{ 37}-o_{40}+o_{47}-o_{51}+2o_{86}+2o_{118}+2o_{121}-o_{136}\right),\] (A72) \[\tilde{O}_{73} =-2O_{5}^{(0)}-\frac{1}{4m^{2}}\bigg{(}2o_{16}-2o_{18}+4o_{22}+o_{ 38}-o_{41}+o_{46}-o_{52}+o_{81}-o_{82}+3o_{84}-o_{86}+\] (A73) \[\left.+o_{116}+o_{117}+2o_{120}+o_{132}-o_{133}-o_{135}-o_{139}+o_{ 140}\right)\!,\]
\[\tilde{O}_{74} =-2O_{2}^{(0)}+\frac{1}{4m^{2}}\bigg{(}2o_{7}-2o_{10}-2o_{21}-o_{36}+o_ {39}-o_{42}-o_{45}+o_{49}+o_{53}-2o_{75}-2o_{115}+ \tag{111}\] \[\quad-2o_{119}+o_{130}-2o_{137}\bigg{)},\] \[\tilde{O}_{75} =-2O_{5}^{(0)}+\frac{1}{4m^{2}}\bigg{(}2o_{9}-2o_{12}-2o_{24}-o_ {38}+o_{41}-o_{44}-o_{48}+o_{50}+o_{56}-2o_{77}+o_{81}+\] (112) \[\quad-o_{82}+o_{84}-o_{86}-o_{116}-o_{117}-2o_{122}+o_{132}-o_{139 }-o_{140}\bigg{)},\] \[\tilde{O}_{76} =-2O_{4}^{(0)}+\frac{1}{4m^{2}}\bigg{(}2o_{8}-2o_{11}-2o_{23}-o_ {37}+o_{40}-o_{43}-o_{47}+o_{51}+o_{55}-2o_{76}-2o_{118}+\] (113) \[\quad-2o_{121}+o_{131}-2o_{138}\bigg{)},\] \[\tilde{O}_{77} =-2O_{5}^{(0)}+\frac{1}{4m^{2}}\bigg{(}2o_{9}-2o_{12}-2o_{22}-o_ {38}+o_{41}-o_{44}-o_{46}+o_{52}+o_{54}-2o_{78}-o_{81}+\] (114) \[\quad+o_{82}-o_{84}+o_{86}-o_{116}-o_{117}-2o_{120}+o_{133}-o_{1 39}-o_{140}\bigg{)},\] \[\tilde{O}_{78} =2O_{2}^{(0)}+\frac{1}{4m^{2}}\bigg{(}2o_{4}+2o_{21}+o_{36}-o_{39 }+o_{42}+o_{45}-o_{49}-o_{53}-2o_{79}-2o_{83}+2o_{115}+\] (115) \[\quad+2o_{119}+o_{130}-2o_{137}\bigg{)},\] \[\tilde{O}_{79} =2O_{5}^{(0)}+\frac{1}{4m^{2}}\bigg{(}2o_{6}+2o_{24}+o_{38}-o_{4 1}+o_{44}+o_{48}-o_{50}-o_{54}-o_{81}-o_{82}-o_{84}-o_{86}+\] (116) \[\quad+o_{116}+o_{117}+2o_{122}+o_{133}-o_{139}-o_{140}\bigg{)},\] \[\tilde{O}_{80} =2O_{4}^{(0)}+\frac{1}{4m^{2}}\bigg{(}2o_{5}+2o_{23}+o_{37}-o_{40} +o_{43}+o_{47}-o_{51}-o_{55}-2o_{80}-2o_{85}+2o_{118}+\] (117) \[\quad+2o_{121}+o_{131}-2o_{138}\bigg{)},\] \[\tilde{O}_{81} =2O_{5}^{(0)}+\frac{1}{4m^{2}}\bigg{(}2o_{6}+2o_{22}+o_{38}-o_{4 1}+o_{44}+o_{46}-o_{52}-o_{56}-o_{81}-o_{82}-o_{84}-o_{86}+\] (118) \[\quad+o_{116}+o_{117}+2o_{120}+o_{132}-o_{139}-o_{140}\bigg{)},\] \[\tilde{O}_{82} =2O_{2}^{(0)}-\frac{1}{4m^{2}}\bigg{(}2o_{7}-2o_{10}-2o_{21}+2o_ {33}-o_{36}-o_{39}+o_{42}-o_{45}-o_{49}+3o_{53}-4o_{75}+\] (119) \[\quad-2o_{79}-2o_{83}-2o_{115}-2o_{119}+o_{130}-2o_{137}\bigg{)},\] \[\tilde{O}_{83} =2O_{5}^{(0)}-\frac{1}{4m^{2}}\bigg{(}2o_{9}-2o_{12}-2o_{22}+2o_ {35}-o_{38}-o_{41}+o_{44}-o_{48}-o_{50}+o_{54}+2o_{56}+\] (120) \[\quad-2o_{77}-2o_{78}-o_{81}-o_{82}-o_{84}-o_{86}-o_{116}-o_{117} -2o_{120}+o_{133}-o_{139}-o_{140}\bigg{)},\] \[\tilde{O}_{84} =2O_{4}^{(0)}-\frac{1}{4m^{2}}\bigg{(}2o_{8}-2o_{11}-2o_{23}+2o_ {34}-o_{37}-o_{40}+o_{43}-o_{47}-o_{51}+3o_{55}-4o_{76}+\] (121) \[\quad-2o_{80}-2o_{85}-2o_{118}-2o_{121}+o_{131}-2o_{138}\bigg{)},\] \[\tilde{O}_{85} =2O_{5}^{(0)}-\frac{1}{4m^{2}}\bigg{(}2o_{9}-2o_{12}-2o_{24}+2o_ {35}-o_{38}-o_{41}+o_{44}-o_{46}-o_{52}+2o_{54}+o_{56}+\] (122) \[\quad-2o_{77}-2o_{78}-o_{81}-o_{82}-o_{84}-o_{86}-o_{116}-o_{117} -2o_{122}+o_{132}-o_{139}-o_{140}\bigg{)},\]
\[\tilde{O}_{86} =O_{6}^{(0)}-\frac{1}{4m^{2}}\bigg{(}2o_{29}-2o_{31}+2o_{32}-o_{59}+ \frac{3}{2}o_{60}-\frac{3}{2}o_{61}-\frac{3}{2}o_{63}+\frac{3}{2}o_{64}+3o_{72}- 3o_{73}+\] (A86) \[\quad-o_{144}+\frac{1}{2}o_{145}\bigg{)},\] \[\tilde{O}_{87} =O_{6}^{(0)}-\frac{1}{4m^{2}}\bigg{(}2o_{29}-2o_{31}+2o_{32}+o_{59 }-\frac{1}{2}o_{60}-\frac{3}{2}o_{61}+\frac{1}{2}o_{63}-\frac{1}{2}o_{64}+2o_{ 69}+o_{72}+\] (A87) \[\quad-o_{73}-o_{144}+\frac{1}{2}o_{145}\bigg{)},\] \[\tilde{O}_{88} =O_{6}^{(0)}+\frac{1}{4m^{2}}\bigg{(}3o_{30}+3o_{31}-o_{58}-o_{59 }+\frac{3}{2}o_{60}+\frac{1}{2}o_{61}-\frac{3}{2}o_{63}+\frac{3}{2}o_{64}+4o_{ 70}-4o_{71}+\] (A88) \[\quad+o_{72}-o_{73}+\frac{3}{2}o_{145}\bigg{)},\] \[\tilde{O}_{89} =O_{6}^{(0)}-\frac{1}{4m^{2}}\bigg{(}2o_{29}-o_{30}-2o_{31}+2o_{32 }+o_{58}+o_{59}-\frac{3}{2}o_{60}-\frac{1}{2}o_{61}+\frac{3}{2}o_{63}-\frac{3} {2}o_{64}+\] (A89) \[\quad+o_{69}-2o_{70}+o_{71}-\frac{1}{2}o_{145}+o_{146}\bigg{)},\] \[\tilde{O}_{90} =-2O_{6}^{(0)}-\frac{1}{4m^{2}}\bigg{(}2o_{27}-2o_{28}-2o_{29}+2o _{30}+4o_{31}-4o_{32}+o_{60}+3o_{61}-2o_{62}-3o_{63}+\] (A90) \[\quad+3o_{64}-4o_{69}+4o_{70}+2o_{72}-2o_{73}+o_{144}+2o_{146} \bigg{)},\] \[\tilde{O}_{91} =-2O_{6}^{(0)}-\frac{1}{4m^{2}}\bigg{(}2o_{27}-2o_{28}-2o_{29}+4o _{31}-4o_{32}+o_{60}+3o_{61}-2o_{62}-3o_{63}+3o_{64}+\] (A91) \[\quad-4o_{69}+2o_{70}+o_{144}\bigg{)},\] \[\tilde{O}_{92} =-O_{6}^{(0)}-\frac{1}{4m^{2}}\bigg{(}3o_{30}+3o_{31}-3o_{58}-3o_ {59}+\frac{3}{2}o_{60}-\frac{3}{2}o_{61}-\frac{3}{2}o_{63}+\frac{3}{2}o_{64}+6 o_{70}+\] (A92) \[\quad-6o_{71}+3o_{72}-3o_{73}+\frac{3}{2}o_{145}\bigg{)}.\]
## Appendix B Reduction to the minimal basis
Here we use the Fierz relations among the subleading \(3N\) contact operators derived in Ref. [16] to rewrite the non-relativistic expansions of the 92 relativistic operators in terms of the minimal basis of operators appearing in Eq. (1). Notice that, as a further consequence of the Fierz identities, the six leading order operators \(O_{i=1,\dots,6}^{(0)}\) are all proportional to the operator \(O_{0}\), which is the identity operator in spin and isospin space,
\[O_{0}\equiv O_{1}^{(0)}=-O_{2}^{(0)}=-O_{3}^{(0)}=-\frac{1}{3}O_{4}^{(0)}= \frac{1}{3}O_{5}^{(0)}=-\frac{1}{12}O_{6}^{(0)}.\] (B93)
\[\tilde{O}_{1} =O_{\mathbf{P}}^{(0)}-\frac{1}{4m^{2}}\left(O_{1}+\frac{1}{2}O_{2} +\frac{1}{2}O_{3}+\frac{1}{2}O_{4}+2O_{7}-\frac{3}{4}O_{9}-\frac{3}{4}O_{10}+ \frac{3}{4}O_{11}+\frac{3}{4}O_{12}\right),\] (B94) \[\tilde{O}_{2} =-O_{\mathbf{P}}^{(0)}+\frac{1}{4m^{2}}\bigg{(}O_{1}-\frac{3}{2}O _{2}+\frac{5}{2}O_{3}+\frac{7}{6}O_{4}+2O_{5}+\frac{2}{3}O_{6}+2O_{7}+\frac{ 9}{4}O_{9}+\frac{1}{4}O_{10}+\] (B95) \[\quad+\frac{3}{4}O_{11}+\frac{3}{4}O_{12}-2O_{13}\bigg{)},\] \[\tilde{O}_{3} =O_{\mathbf{P}}^{(0)}\] (B96) \[\tilde{O}_{4} =-O_{\mathbf{P}}^{(0)}-\frac{1}{4m^{2}}\left(2O_{1}+O_{2}+O_{3}-8 O_{7}-4O_{8}-\frac{3}{2}O_{9}-\frac{1}{2}O_{10}+\frac{9}{2}O_{11}+\frac{3}{2}O_{12}+2O_{13} \right),\] (B97) \[\tilde{O}_{5} =-O_{\mathbf{P}}^{(0)}+\frac{1}{4m^{2}}\left(O_{1}-2O_{2}+3O_{3}+ \frac{1}{3}O_{4}+O_{5}+\frac{1}{3}O_{6}+4O_{7}-2O_{13}\right),\] (B98)
\[\tilde{O}_{6} =-O_{\mathbf{P}}^{(0)}+\frac{1}{m^{2}}\big{(}-\frac{5}{8}O_{9}-O_{8}-3 \mathcal{O}_{7}-\frac{1}{12}O_{6}-\frac{1}{4}O_{5}-\frac{1}{12}O_{4}-\frac{1}{ 2}O_{3}+\] (B99) \[\qquad+\frac{1}{4}O_{2}+O_{13}+\frac{5}{8}O_{12}+\frac{11}{8}O_{1 1}-\frac{3}{8}O_{10}+\frac{1}{4}O_{1}\big{)},\] \[\tilde{O}_{7} =O_{\mathbf{P}}^{(0)}+\frac{1}{m^{2}}\big{(}-\frac{7}{16}O_{9}+ \frac{1}{2}O_{7}-\frac{1}{12}O_{6}-\frac{1}{4}O_{5}-\frac{1}{12}O_{4}-\frac{1} {16}O_{12}-\frac{1}{16}O_{11}-\frac{3}{16}O_{10}+\frac{1}{8}O_{1}\big{)},\] (B100) \[\tilde{O}_{8} =-O_{\mathbf{P}}^{(0)}+\frac{1}{m^{2}}\big{(}-\frac{13}{16}O_{9}- \frac{3}{2}O_{7}-\frac{1}{4}O_{6}-\frac{3}{4}O_{5}-\frac{3}{8}O_{4}-\frac{11}{ 8}O_{3}+\frac{9}{8}O_{2}+\frac{5}{4}O_{13}+\frac{1}{16}O_{12}+\] (B101) \[\qquad+\frac{1}{16}O_{11}-\frac{5}{16}O_{10}-\frac{1}{2}O_{1} \big{)},\] \[\tilde{O}_{9} =-O_{\mathbf{P}}^{(0)}+\frac{1}{m^{2}}\big{(}-\frac{1}{16}O_{9}+ \frac{3}{2}O_{7}+\frac{1}{12}O_{6}+\frac{1}{4}O_{5}+\frac{1}{3}O_{4}+\frac{3} {4}O_{3}-\frac{1}{4}O_{2}-\frac{1}{2}O_{13}+\frac{1}{16}O_{12}+\] (B102) \[\qquad+\frac{1}{16}O_{11}+\frac{3}{16}O_{10}+\frac{5}{8}O_{1} \big{)},\] \[\tilde{O}_{10} =-O_{\mathbf{P}}^{(0)}+\frac{1}{m^{2}}\big{(}+\frac{1}{16}O_{9}- \frac{1}{2}O_{7}-\frac{1}{8}O_{4}-\frac{1}{8}O_{3}-\frac{1}{8}O_{2}-\frac{1} {16}O_{12}-\frac{1}{16}O_{11}+\frac{1}{16}O_{10}-\frac{1}{4}O_{1}\big{)},\] (B103) \[\tilde{O}_{11} =O_{\mathbf{P}}^{(0)}+\frac{1}{m^{2}}\big{(}+\frac{13}{16}O_{9}+ \frac{3}{2}O_{7}+\frac{1}{4}O_{6}+\frac{3}{4}O_{5}+\frac{3}{8}O_{4}+\frac{11}{ 8}O_{3}-\frac{7}{8}O_{2}-O_{13}-\frac{1}{16}O_{12}+\] (B104) \[\qquad-\frac{1}{16}O_{11}+\frac{5}{16}O_{10}+\frac{1}{2}O_{1} \big{)},\] \[\tilde{O}_{12} =O_{\mathbf{P}}^{(0)}+\frac{1}{m^{2}}\big{(}+\frac{19}{16}O_{9}+ O_{8}+\frac{5}{2}O_{7}+\frac{1}{6}O_{6}+\frac{1}{2}O_{5}+\frac{7}{24}O_{4}+\frac{3} {8}O_{3}-\frac{5}{8}O_{2}-O_{13}-\frac{7}{16}O_{12}+\] (B105) \[\qquad-\frac{19}{16}O_{11}+\frac{7}{16}O_{10}-\frac{1}{4}O_{1} \big{)},\] \[\tilde{O}_{13} =O_{\mathbf{P}}^{(0)}+\frac{1}{m^{2}}\big{(}+\frac{1}{16}O_{9}- O_{8}-\frac{5}{2}O_{7}+\frac{1}{12}O_{6}+\frac{1}{4}O_{5}+\frac{5}{24}O_{4}+ \frac{1}{8}O_{3}+\frac{3}{8}O_{2}+\frac{1}{2}O_{13}+\] (B106) \[\qquad+\frac{11}{16}O_{12}+\frac{23}{16}O_{11}-\frac{3}{16}O_{10} +\frac{1}{2}O_{1}\big{)},\] \[\tilde{O}_{14} =-O_{\mathbf{P}}^{(0)}+\frac{1}{m^{2}}\big{(}+\frac{1}{16}O_{9}- \frac{1}{2}O_{7}-\frac{1}{8}O_{4}-\frac{1}{8}O_{3}-\frac{1}{8}O_{2}-\frac{1} {16}O_{12}-\frac{1}{16}O_{11}+\frac{1}{16}O_{10}-\frac{1}{4}O_{1}\big{)},\] (B107) \[\tilde{O}_{15} =O_{\mathbf{P}}^{(0)}+\frac{1}{m^{2}}\big{(}+\frac{13}{16}O_{9}+ \frac{3}{2}O_{7}+\frac{1}{4}O_{6}+\frac{3}{4}O_{5}+\frac{3}{8}O_{4}+\frac{11}{ 8}O_{3}-\frac{7}{8}O_{2}-O_{13}-\frac{1}{16}O_{12}+\] (B108) \[\qquad-\frac{1}{16}O_{11}+\frac{5}{16}O_{10}+\frac{1}{2}O_{1} \big{)},\] \[\tilde{O}_{16} =O_{\mathbf{P}}^{(0)}+\frac{1}{m^{2}}\big{(}+\frac{7}{16}O_{9}- \frac{3}{2}O_{7}+\frac{1}{8}O_{4}-\frac{7}{8}O_{3}+\frac{5}{8}O_{2}+\frac{1}{2 }O_{13}+\frac{5}{16}O_{12}+\frac{5}{16}O_{11}-\frac{1}{16}O_{10}-\frac{1}{4}O_ {1}\big{)},\] (B109) \[\tilde{O}_{17} =-O_{\mathbf{P}}^{(0)}+\frac{1}{m^{2}}\big{(}+\frac{1}{16}O_{9}- \frac{1}{2}O_{7}-\frac{1}{8}O_{4}-\frac{1}{8}O_{3}-\frac{1}{8}O_{2}-\frac{1} {16}O_{12}-\frac{1}{16}O_{11}+\frac{1}{16}O_{10}-\frac{1}{4}O_{1}\big{)},\] (B110) \[\tilde{O}_{18} =O_{\mathbf{P}}^{(0)}+\frac{1}{m^{2}}\big{(}+\frac{1}{16}O_{9}- \frac{0}{8}-\frac{5}{2}O_{7}+\frac{1}{12}O_{6}+\frac{1}{4}O_{5}+\frac{5}{24}O_{4 
}+\frac{1}{8}O_{3}+\frac{3}{8}O_{2}+\frac{1}{2}O_{13}+\frac{11}{16}O_{12}+\] (B111) \[\qquad+\frac{23}{16}O_{11}-\frac{3}{16}O_{10}+\frac{1}{2}O_{1} \big{)},\] \[\tilde{O}_{19} =O_{\mathbf{P}}^{(0)}+\frac{1}{m^{2}}\big{(}+\frac{3}{16}O_{9}+ 2O_{8}+\frac{13}{2}O_{7}+\frac{1}{3}O_{6}+O_{5}+\frac{11}{24}O_{4}+\frac{13} {8}O_{3}-\frac{15}{8}O_{2}-\frac{5}{2}O_{13}-\frac{19}{16}O_{12}+\] (B112) \[\qquad-\frac{43}{16}O_{11}+\frac{15}{16}O_{10}-\frac{1}{4}O_{1} \big{)},\] \[\tilde{O}_{20} =O_{\mathbf{P}}^{(0)}+\frac{1}{m^{2}}\big{(}+\frac{9}{16}O_{9}+ \frac{1}{2}O_{7}+\frac{1}{12}O_{6}+\frac{1}{4}O_{5}+\frac{1}{12}O_{4}+\frac{1} {2}O_{3}-\frac{1}{2}O_{2}-\frac{1}{2}O_{13}-\frac{1}{16}O_{12}+\] (B113) \[\qquad-\frac{1}{16}O_{11}+\frac{5}{16}O_{10}+\frac{1}{8}O_{1} \big{)},\] \[\tilde{O}_{21} =-3O_{\mathbf{P}}^{(0)}+\frac{1}{m^{2}}\big{(}+\frac{9}{1
\[\tilde{O}_{25} =-6O_{\mathbf{p}}^{(0)},\] (B118) \[\tilde{O}_{26} =6O_{\mathbf{p}}^{(0)}+\frac{1}{m^{2}}(-\frac{9}{4}O_{9}-4O_{8}-10O _{7}-\frac{1}{6}O_{6}-\frac{3}{2}O_{5}-\frac{1}{6}O_{4}-\frac{3}{2}O_{3}+3O_{2 }+4O_{13}+\frac{9}{4}O_{12}++\] (B119) \[\quad\quad\frac{21}{4}O_{11}-\frac{1}{4}O_{10}+\frac{3}{2}O_{1}),\] \[\tilde{O}_{27} =-2O_{\mathbf{p}}^{(0)}+\frac{1}{m^{2}}(-\frac{3}{4}O_{9}-\frac{ 1}{3}O_{6}-\frac{5}{6}O_{5}-\frac{1}{12}O_{4}-\frac{13}{12}O_{3}+\frac{5}{4}O_ {2}+O_{13}+\frac{1}{4}O_{12}+\frac{1}{4}O_{11}-\frac{1}{4}O_{10}+\frac{1}{4}O _{1}),\] (B120) \[\tilde{O}_{28} =6O_{\mathbf{p}}^{(0)}+\frac{1}{m^{2}}(-3O_{9}-2O_{7}-\frac{1}{3 }O_{6}-\frac{1}{2}O_{5}-\frac{7}{12}O_{4}-\frac{5}{4}O_{3}+\frac{1}{4}O_{2}+ \frac{1}{2}O_{13}-O_{10}-\frac{3}{4}O_{1}),\] (B121) \[\tilde{O}_{29} =-6O_{\mathbf{p}}^{(0)}+\frac{1}{m^{2}}(+2O_{7}-\frac{1}{2}O_{5}+ \frac{1}{4}O_{4}+\frac{1}{4}O_{3}+\frac{1}{4}O_{2}-\frac{1}{2}O_{10}+\frac{3 }{4}O_{1}),\] (B122) \[\tilde{O}_{30} =6O_{\mathbf{p}}^{(0)}+\frac{1}{m^{2}}(-\frac{9}{4}O_{9}-\frac{ 1}{3}O_{6}-\frac{3}{2}O_{5}-\frac{13}{12}O_{4}-\frac{3}{4}O_{3}+\frac{3}{4}O _{2}+\frac{3}{2}O_{13}-\frac{3}{4}O_{12}-\frac{3}{4}O_{11}+\] (B123) \[\quad\quad-\frac{1}{4}O_{10}-\frac{3}{4}O_{1}),\] \[\tilde{O}_{31} =2O_{\mathbf{p}}^{(0)}+\frac{1}{m^{2}}(+\frac{9}{8}O_{9}+O_{7}+ \frac{1}{6}O_{6}+\frac{2}{3}O_{5}+\frac{1}{6}O_{4}+\frac{7}{6}O_{3}-O_{2}-O_{ 13}-\frac{1}{8}O_{12}+\] (B124) \[\quad\quad-\frac{1}{8}O_{11}+\frac{5}{8}O_{8}+\frac{1}{4}O_{1}),\] \[\tilde{O}_{32} =-6O_{\mathbf{p}}^{(0)}+\frac{1}{m^{2}}(+\frac{9}{8}O_{9}-O_{7}+ \frac{1}{2}O_{6}+O_{5}+\frac{3}{4}O_{4}+\frac{1}{4}O_{3}-\frac{1}{4}O_{2}- \frac{1}{2}O_{13}+\frac{3}{8}O_{12}+\] (B125) \[\quad\quad+\frac{3}{8}O_{11}+\frac{1}{8}O_{10}),\] \[\tilde{O}_{33} =6O_{\mathbf{p}}^{(0)}+\frac{1}{m^{2}}(+\frac{3}{8}O_{9}-O_{7}- \frac{1}{2}O_{5}-\frac{1}{2}O_{4}-\frac{1}{2}O_{3}-\frac{1}{2}O_{2}-\frac{3}{8 }O_{12}-\frac{3}{8}O_{11}-\frac{1}{8}O_{10}-\frac{3}{4}O_{1}),\] (B126) \[\tilde{O}_{34} =-6O_{\mathbf{p}}^{(0)}+\frac{1}{m^{2}}(+\frac{9}{8}O_{9}-O_{7}+ \frac{1}{3}O_{6}+\frac{3}{2}O_{5}+\frac{7}{12}O_{4}+\frac{3}{4}O_{3}-\frac{1}{ 4}O_{2}-\frac{1}{2}O_{13}+\frac{3}{8}O_{12}+\] (B127) \[\quad\quad+\frac{3}{8}O_{11}+\frac{1}{8}O_{10}),\] \[\tilde{O}_{35} =-2O_{\mathbf{p}}^{(0)}+\frac{1}{m^{2}}(-\frac{5}{8}O_{9}+O_{7}- \frac{1}{3}O_{6}-O_{5}-\frac{1}{12}O_{4}-\frac{3}{4}O_{3}+\frac{5}{4}O_{2}+O_{ 13}+\frac{1}{8}O_{12}+\] (B128) \[\quad\quad+\frac{1}{8}O_{11}-\frac{8}{8}O_{10}+\frac{1}{2}O_{1}),\] \[\tilde{O}_{36} =6O_{\mathbf{p}}^{(0)}+\frac{1}{m^{2}}(-\frac{33}{8}O_{9}-3O_{7}- \frac{1}{2}O_{6}-\frac{3}{2}O_{5}-\frac{5}{4}O_{4}-\frac{9}{4}O_{3}+\frac{3}{4}O _{2}+\frac{3}{2}O_{13}-\frac{3}{8}O_{12}+\] (B129) \[\quad\quad-\frac{3}{8}O_{11}-\frac{9}{8}O_{10}-\frac{3}{2}O_{1}),\] \[\tilde{O}_{37} =-6O_{\mathbf{p}}^{(0)}+\frac{1}{m^{2}}(+\frac{9}{8}O_{9}+3O_{7}+ \frac{3}{4}O_{4}+\frac{3}{4}O_{3}+\frac{3}{4}O_{2}+\frac{3}{8}O_{12}+\frac{3}{8 }O_{11}-\frac{3}{8}O_{10}+\frac{3}{2}O_{1}),\] (B130) \[\tilde{O}_{38} =O_{\mathbf{p}}^{(0)}+\frac{1}{m^{2}}(-\frac{1}{2}O_{9}+O_{7}- \frac{1}{12}O_{6}-\frac{1}{4}O_{5}+\frac{1}{24}O_{4}+\frac{1}{8}O_{3}+\frac{1}{8 }O_{2}-\frac{1}{4}O_{10}+\frac{3}{8}O_{1}),\] (B131) \[\tilde{O}_{39} =-O_{\mathbf{p}}^{(0)}+\frac{1}{m^{2}}(-\frac{1}{4}O_{9}+O_{8}+3O_{7} -\frac{1}{12}O_{6}-\frac{1}{4}O_{5}+\frac{1}{24}O_{4}-\frac{1}{8}O_{3}-\frac{1}{ 8}O_{2}-\frac{1}{2}O_{13}+\] (B132) \[\quad\quad-\frac{1}{2}O_{12}-\frac{5}{4}O_{11}+\frac{1}{4}O_{10}- \frac{1}{8}O_{1}),\] 
\[\tilde{O}_{40} =-O_{\mathbf{p}}^{(0)}+\frac{1}{m^{2}}(-\frac{11}{8}O_{9}-O_{7}- \frac{1}{3}O_{6}-O_{5}-\frac{7}{12}O_{4}-\frac{5}{4}O_{3}+O_{2}+\frac{5}{4}O _{13}-\frac{1}{8}O_{12}+\] (B133) \[\quad\quad-\frac{1}{8}O_{11}-\frac{3}{8}O_{10}-\frac{1}{2}O_{1}),\] \[\tilde{O}_{41} =-O_{\mathbf{p}}^{(0)}+\frac{1}{m^{2}}(-\frac{17}{8}O_{9}-O_{8}- 5O_{7}-\frac{1}{2}O_{6}-\frac{3}{2}O_{5}-\frac{3}{4}O_{4}-\frac{5}{2}O_{3}+ \frac{9}{4}O_{2}+\frac{11}{4}O_{13}+\] (B134) \[\quad\quad+\frac{5}{8}O_{12}+\frac{11}{8}O_{11}-\frac{7}{8}O_{10} -\frac{1}{2}O_{1}),\] \[\tilde{O}_{42} =-O_{\mathbf{p}}^{(0)}+\frac{1}{m^{2}}(+\frac{11}{8}O_{9}-O_{7}- \frac{1}{4}O_{4}-\frac{1}{4}O_{3}-\frac{1}{4}O_{2}-\frac{1}{8}O_{12}-\frac{1}{8 }O_{11}+\frac{1}{8}O_{10}-\frac{1}{2}O_{1}),\] (B135) \[\tilde{O}_{
\[\tilde{\partial}_{45} =O_{\mathbf{P}}^{(0)}+\frac{1}{m^{2}}(+\frac{11}{8}0_{9}+O_{7}+ \frac{1}{3}0_{6}+O_{5}+\frac{7}{12}O_{4}+\frac{5}{4}O_{3}-\frac{3}{4}O_{2}-O_{13 }+\frac{1}{8}O_{12}+\] (B138) \[\qquad+\frac{1}{8}O_{11}+\frac{3}{8}O_{10}+\frac{1}{2}O_{1}),\] \[\tilde{\partial}_{46} =-O_{\mathbf{P}}^{(0)}+\frac{1}{m^{2}}(+\frac{1}{8}0_{9}-O_{7}- \frac{1}{4}O_{4}-\frac{1}{4}O_{3}-\frac{1}{4}O_{2}-\frac{1}{8}O_{12}-\frac{1}{8 }O_{11}+\frac{1}{8}O_{10}-\frac{1}{2}O_{1}),\] (B139) \[\tilde{\partial}_{47} =O_{\mathbf{P}}^{(0)}+\frac{1}{m^{2}}(+\frac{5}{8}O_{9}-O_{8}-3O _{7}+\frac{1}{6}O_{6}+\frac{1}{2}O_{5}+\frac{5}{12}O_{4}+\frac{1}{2}O_{2}+ \frac{1}{2}O_{13}+\frac{7}{8}O_{12}+\] (B140) \[\qquad+\frac{13}{8}O_{11}-\frac{1}{8}O_{10}+\frac{1}{2}O_{1}),\] \[\tilde{\partial}_{48} =O_{\mathbf{P}}^{(0)}+\frac{1}{m^{2}}(+\frac{17}{8}O_{9}+O_{8}+5 O_{7}+\frac{1}{2}O_{6}+\frac{3}{2}O_{5}+\frac{3}{4}O_{4}+\frac{5}{2}O_{3}-2O_{2}- \frac{5}{2}O_{13}+\] (B141) \[\qquad-\frac{5}{8}O_{12}-\frac{11}{8}O_{11}+\frac{7}{8}O_{10}+ \frac{1}{2}O_{1}),\] \[\tilde{\partial}_{49} =O_{\mathbf{P}}^{(0)}+\frac{1}{m^{2}}(+\frac{11}{8}O_{9}+O_{7}+ \frac{1}{3}O_{6}+O_{5}+\frac{7}{12}O_{4}+\frac{5}{4}O_{3}-\frac{3}{4}O_{2}-O_ {13}+\frac{1}{8}O_{12}++\] (B142) \[\qquad\frac{1}{8}O_{11}+\frac{3}{8}O_{10}+\frac{1}{2}O_{1}),\] \[\tilde{\partial}_{50} =O_{\mathbf{P}}^{(0)}+\frac{1}{m^{2}}(+\frac{14}{4}O_{9}+O_{8}+3O _{7}-\frac{1}{12}O_{6}-\frac{1}{4}O_{5}+\frac{1}{24}O_{4}-\frac{1}{8}O_{3}- \frac{1}{8}O_{2}-\frac{1}{2}O_{13}+\] (B143) \[\qquad-\frac{1}{2}O_{12}-\frac{5}{4}O_{11}+\frac{1}{4}O_{10}- \frac{1}{8}O_{1}),\] \[\tilde{\partial}_{51} =-3O_{\mathbf{P}}^{(0)}+\frac{1}{m^{2}}(-\frac{3}{2}O_{9}-O_{7}- \frac{1}{12}O_{6}-\frac{1}{4}O_{5}-\frac{5}{24}O_{4}-\frac{5}{8}O_{3}+\frac{1} {8}O_{2}+\frac{1}{4}O_{13}-\frac{1}{2}O_{10}-\frac{3}{8}O_{1}),\] (B144) \[\tilde{\partial}_{52} =3O_{\mathbf{P}}^{(0)}+\frac{1}{m^{2}}(+O_{7}-\frac{1}{12}O_{6}- \frac{1}{4}O_{5}+\frac{1}{24}O_{4}+\frac{1}{8}O_{3}+\frac{1}{8}O_{2}-\frac{1}{ 4}O_{10}+\frac{3}{8}O_{1}),\] (B145) \[\tilde{\partial}_{53} =-3O_{\mathbf{P}}^{(0)}+\frac{1}{m^{2}}(-3O_{9}-3O_{8}-9O_{7}- \frac{5}{12}O_{6}-\frac{5}{4}O_{5}-\frac{19}{24}O_{4}-\frac{13}{8}O_{3}+\frac{2 1}{8}O_{2}+\frac{15}{4}O_{13}+\] (B146) \[\qquad+\frac{3}{2}O_{12}+\frac{15}{4}O_{11}-\frac{5}{4}O_{10}+ \frac{3}{8}O_{1}),\] \[\tilde{\partial}_{54} =-2O_{\mathbf{P}}^{(0)}+\frac{1}{m^{2}}(+\frac{27}{8}O_{9}+O_{7}+ \frac{5}{6}O_{8}+\frac{5}{2}O_{5}+\frac{13}{12}O_{4}+\] (B147) \[\qquad+\frac{11}{4}O_{3}-\frac{9}{4}O_{2}-\frac{5}{2}O_{13}+\frac{ 1}{8}O_{12}+\frac{1}{8}O_{11}+\frac{11}{8}O_{10}+\frac{1}{2}O_{1}),\] \[\tilde{\partial}_{55} =6O_{\mathbf{P}}^{(0)}+\frac{1}{m^{2}}(+\frac{15}{8}O_{9}-3O_{7}+ \frac{1}{2}O_{6}+\frac{3}{2}O_{5}-\frac{1}{4}O_{4}+\frac{3}{4}O_{3}-\frac{9}{ 4}O_{2}-\frac{3}{2}O_{13}-\frac{3}{8}O_{12}+\] (B148) \[\qquad-\frac{1}{8}O_{11}-\frac{9}{8}O_{10}-\frac{3}{2}O_{1}),\] \[\tilde{\partial}_{56} =-6O_{\mathbf{P}}^{(0)}+\frac{1}{m^{2}}(+\frac{21}{8}O_{9}-3O_{7}+ \frac{1}{2}O_{6}+\frac{3}{2}O_{5}+\frac{1}{2}O_{4}+\frac{3}{8}O_{12}+\frac{3} {8}O_{11}+\frac{9}{8}O_{10}-\frac{3}{4}O_{1}),\] (B149) \[\tilde{\partial}_{57} =6O_{\mathbf{P}}^{(0)}+\frac{1}{m^{2}}(+\frac{27}{8}O_{9}+3O_{7}+ \frac{1}{2}O_{6}+\frac{3}{2}O_{5}+\frac{1}{2}O_{4}+3O_{3}-3O_{2}-3O_{13}-\frac{ 3}{8}O_{12}+\] (B150) \[\qquad-\frac{3}{8}O_{11}+\frac{15}{8}O_{10}+\frac{15}{4}O_{1}),\] \[\tilde{\partial}_{58} =2O_{\mathbf{P}}^{(0)}+\frac{1}{m^{2}}(-\frac{15}{8}O_{9}-O_{7}- 
\frac{1}{2}O_{6}-\frac{3}{2}O_{5}-\frac{1}{4}O_{4}-\frac{9}{4}O_{3}+\frac{9}{ 4}O_{2}+2O_{13}+\frac{3}{8}O_{12}+\frac{3}{8}O_{11}-\frac{7}{8}O_{10}),\] (B151) \[\tilde{\partial}_{59} =-6O_{\mathbf{P}}^{(0)}+\frac{1}{m^{2}}(-\frac{21}{8}O_{9}+2O_{8}+7O _{7}-\frac{1}{3}O_{6}-2O_{5}-\frac{5}{6}O_{4}-\frac{1}{2}O_{3}-O_{2}-O_{13}+\] (B152) \[\qquad-\frac{15}{8}O_{12}-\frac{27}{8}O_{11}+\frac{7}{8}O_{10}- \frac{3}{4}O_{1}),\] \[\tilde{\partial}_{60} =6O_{\mathbf{P}}^{(0)}+\frac{1}{m^{2}}(-\frac{3}{8}O_{9}+3O_{7}+ \frac{3}{4}O_{4}+\frac{3}{4}O_{3}+\frac{3}{4}O_{2}+\frac{3}{8}O_{12}+\frac{3} {8}O_{11}-\frac{3}{8}O_{10}+\frac{3}{2}O_{1}),\] (B153) \[\tilde{\partial}_{61} =-6O_{\mathbf{P}}^{(0)}+\frac{1}{m^{2}}(-\frac{39}{8}O_{9}-2O_{8}-7O _{7}-\frac{7}{6}O_{6}-\frac{5}{2}O_{5}-\frac{13}{6}O_{4}-\frac{5}{2}O_{3}+ \frac{5}{2}O_{2}+4O_{13}+\] (B154) \[\qquad+\frac{3}{8}O_{12}+\frac{15}{8}O_{
\[\tilde{O}_{64} =6O_{\mathbf{P}}^{(0)}+\frac{1}{m^{2}}(-\frac{3}{8}O_{9}+O_{7}+\frac{ 1}{2}O_{5}+\frac{1}{2}O_{4}+\frac{1}{2}O_{3}+\frac{1}{2}O_{2}+\frac{3}{8}O_{12} +\frac{3}{8}O_{11}+\frac{1}{8}O_{10}+\frac{3}{4}O_{1}),\] (B157) \[\tilde{O}_{65} =-6O_{\mathbf{P}}^{(0)}+\frac{1}{m^{2}}(-\frac{39}{8}O_{9}-6O_{8 }-17O_{7}-O_{6}-\frac{5}{2}O_{5}-\frac{5}{4}O_{3}-\frac{13}{4}O_{3}+\frac{19}{4 }O_{2}+\frac{13}{2}O_{13}+\] (B158) \[\qquad+\frac{27}{8}O_{12}+\frac{63}{8}O_{11}-\frac{19}{8}O_{10}+ \frac{3}{2}O_{1}),\] \[\tilde{O}_{66} =-2O_{\mathbf{P}}^{(0)}+\frac{1}{m^{2}}(-\frac{1}{2}O_{9}-2O_{8 }-6O_{7}+\frac{1}{6}O_{6}+\frac{1}{3}O_{5}-\frac{1}{12}O_{4}+\frac{1}{12}O_{3 }+\frac{1}{4}O_{2}+O_{13}+\] (B159) \[\qquad+O_{12}+\frac{5}{2}O_{11}-\frac{1}{2}O_{10}+\frac{1}{4}O_{ 1}),\] \[\tilde{O}_{67} =6O_{\mathbf{P}}^{(0)}+\frac{1}{m^{2}}(+3O_{9}+2O_{7}+\frac{1}{ 6}O_{6}+O_{5}+\frac{5}{12}O_{4}+\frac{7}{4}O_{3}-\frac{1}{4}O_{2}-\frac{1}{2} O_{13}+O_{10}+\frac{3}{4}O_{1}),\] (B160) \[\tilde{O}_{68} =-6O_{\mathbf{P}}^{(0)}+\frac{1}{m^{2}}(-2O_{7}+\frac{1}{2}O_{5} -\frac{1}{4}O_{4}-\frac{1}{4}O_{3}-\frac{1}{4}O_{2}+\frac{1}{2}O_{10}-\frac{3 }{4}O_{1}),\] (B161) \[\tilde{O}_{69} =6O_{\mathbf{P}}^{(0)}+\frac{1}{m^{2}}(+6O_{9}+6O_{8}+18O_{7}+O_{ 6}+\frac{5}{2}O_{5}+\frac{7}{4}O_{4}+\frac{13}{4}O_{3}-\frac{21}{4}O_{2}-\frac {15}{2}O_{13}+\] (B162) \[\qquad-3O_{12}-\frac{15}{2}O_{11}+\frac{5}{2}O_{10}-\frac{3}{4}O _{1}),\] \[\tilde{O}_{70} =2O_{\mathbf{P}}^{(0)}+\frac{1}{m^{2}}(-\frac{15}{8}O_{9}-O_{7}- \frac{1}{2}O_{6}-\frac{3}{2}O_{5}-\frac{1}{4}O_{4}-\frac{9}{4}O_{3}+\frac{9}{4 }O_{2}+2O_{13}+\frac{3}{8}O_{12}+\] (B163) \[\qquad+\frac{3}{8}O_{11}-\frac{7}{8}O_{10}),\] \[\tilde{O}_{71} =-6O_{\mathbf{P}}^{(0)}+\frac{1}{m^{2}}(-\frac{45}{8}O_{9}-4O_{8 }-9O_{7}-O_{6}-4O_{5}-2O_{4}-\frac{5}{2}O_{3}+4O_{2}+6O_{13}+\frac{9}{8}O_{12}+\] (B164) \[\qquad+\frac{33}{8}O_{11}-\frac{5}{8}O_{8}+\frac{3}{4}O_{1}),\] \[\tilde{O}_{72} =6O_{\mathbf{P}}^{(0)}+\frac{1}{m^{2}}(-\frac{3}{8}O_{9}+3O_{7}+ \frac{3}{4}O_{4}+\frac{3}{4}O_{3}+\frac{3}{4}O_{2}+\frac{3}{8}O_{12}+\frac{3}{ 8}O_{11}-\frac{3}{8}O_{10}+\frac{3}{2}O_{1}),\] (B165) \[\tilde{O}_{73} =-6O_{\mathbf{P}}^{(0)}+\frac{1}{m^{2}}(-\frac{15}{8}O_{9}+4O_{8 }+9O_{7}-\frac{1}{2}O_{6}-\frac{1}{2}O_{5}-O_{4}-\frac{1}{2}O_{3}-\frac{5}{2} O_{2}-3O_{13}+\] (B166) \[\qquad-\frac{21}{8}O_{12}-\frac{45}{8}O_{11}-\frac{7}{8}O_{10}- \frac{9}{4}O_{1}),\] \[\tilde{O}_{74} =2O_{\mathbf{P}}^{(0)}+\frac{1}{m^{2}}(+\frac{1}{8}O_{9}+2O_{8}+5 O_{7}-\frac{1}{6}O_{5}-\frac{1}{6}O_{3}-\frac{1}{2}O_{2}-O_{13}-\frac{9}{8}O_{12}- \frac{21}{8}O_{11}+\] (B167) \[\qquad+\frac{1}{8}O_{10}-\frac{3}{4}O_{1}),\] \[\tilde{O}_{75} =-6O_{\mathbf{P}}^{(0)}+\frac{1}{m^{2}}(-\frac{33}{8}O_{9}-6O_{8 }-15O_{7}-O_{6}-\frac{7}{2}O_{5}-\frac{7}{4}O_{4}-\frac{11}{4}O_{3}+\frac{21}{ 4}O_{2}+\frac{15}{2}O_{13}+\] (B168) \[\qquad+\frac{21}{8}O_{12}+\frac{57}{8}O_{11}-\frac{13}{8}O_{10}+ \frac{3}{2}O_{1}),\] \[\tilde{O}_{76} =6O_{\mathbf{P}}^{(0)}+\frac{1}{m^{2}}(-\frac{3}{8}O_{9}+O_{7}+ \frac{1}{2}O_{5}+\frac{1}{2}O_{4}+\frac{1}{2}O_{3}+\frac{1}{2}O_{2}+\frac{3}{8 }O_{12}+\frac{3}{8}O_{11}+\frac{1}{8}O_{10}+\frac{3}{4}O_{1}),\] (B169) \[\tilde{O}_{77} =-6O_{\mathbf{P}}^{(0)}+\frac{1}{m^{2}}(-\frac{15}{8}O_{9}-O_{7}- \frac{1}{3}O_{6}-\frac{1}{2}O_{5}-\frac{1}{12}O_{4}-\frac{5}{4}O_{3}-\frac{1}{4 }O_{2}-\frac{1}{2}O_{13}+\frac{3}{8}O_{12}+\] (B170) \[\qquad+\frac{3}{8}O_{11}-\frac{7}{8}O_{10}),\] \[\tilde{O}_{78} =-2O_{\mathbf{P}}^{(0)}+\frac{1}{m^{2}}(-\frac{1}{2}O_{9}-2O_{8}-6O 
_{7}+\frac{1}{6}O_{6}+\frac{1}{3}O_{5}-\frac{1}{12}O_{4}+\frac{1}{12}O_{3}+ \frac{1}{4}O_{2}+O_{13}+\] (B171) \[\qquad+O_{12}+\frac{5}{2}O_{11}-\frac{1}{2}O_{10}+\frac{1}{4}O_{1}),\] \[\tilde{O}_{79} =6O_{\mathbf{P}}^{(0)}+\frac{1}{m^{2}}(+6O_{9}+6O_{8}+18O_{7}+ \frac{5}{6}O_{6}+3O_{5}+\frac{19}{12}O_{4}+\frac{15}{4}O_{3}-\frac{21}{4}O_{2}- \frac{15}{2}O_{13}+\] (B172) \[\qquad-3O_{12}-\frac{15}{2}O_{11}+\frac{5}{2}O_{10}-\frac{3}{4}O _{1}),\] \[\tilde{O}_{80} =-6O_{\mathbf{P}}^{(0)}+\frac{1}{m^{2}}(-2O_{7}+\frac{1}{2}O_{5} -\frac{1}{4}O_{4}-\frac{1}{4}O_{3}-\frac{1}{4}O_{2}+\frac{1}{2}O_{10}-\frac{3}{4}O _{1}),\] (B173) \[\tilde{O}_{81} =6O_{\mathbf{P}}^{(0)}+\frac{1}{m^{2}}(+3O_{9}+2O_{7}+\frac{1}{3}O_ {6}+\frac{1}{2}O_{5}+\frac{7}{12}O_{4}+\frac
\[\bar{O}_{85} =6O_{\mathbf{P}}^{(0)}+\frac{1}{m^{2}}(+3O_{9}+6O_{8}+14O_{7}+\frac{5 }{6}O_{6}+\frac{5}{2}O_{5}+\frac{13}{12}O_{4}+\frac{7}{4}O_{3}-\frac{19}{4}O_{2} -\frac{13}{2}O_{13}+\] (B178) \[\qquad-3O_{12}-\frac{15}{2}O_{11}+\frac{3}{2}O_{10}-\frac{9}{4}O _{1}),\] \[\bar{O}_{86} =-12O_{\mathbf{P}}^{(0)}+\frac{1}{m^{2}}(-\frac{9}{4}O_{9}-2O_{8} -6O_{7}+\frac{1}{3}O_{6}-O_{5}-\frac{1}{6}O_{4}-O_{3}+2O_{2}+3O_{13}+\frac{3}{ 4}O_{12}+\] (B179) \[\qquad+\frac{9}{4}O_{11}+\frac{1}{4}O_{10}),\] \[\bar{O}_{87} =-12O_{\mathbf{P}}^{(0)}+\frac{1}{m^{2}}(+\frac{3}{4}O_{9}+\frac {1}{2}O_{6}+\frac{3}{2}O_{5}+\frac{5}{4}O_{4}+\frac{3}{4}O_{3}+\frac{3}{4}O_{ 2}+\frac{3}{4}O_{12}+\frac{3}{4}O_{11}+\frac{3}{4}O_{10}+\frac{3}{4}O_{1}),\] (B180) \[\bar{O}_{88} =-12O_{\mathbf{P}}^{(0)}+\frac{1}{m^{2}}(+6O_{9}+4O_{7}-\frac{1} {3}O_{6}-O_{5}+\frac{1}{6}O_{4}+\frac{1}{2}O_{3}+\frac{3}{2}O_{2}+O_{13}-O_{10 }+\frac{3}{2}O_{1}),\] (B181) \[\bar{O}_{89} =-12O_{\mathbf{P}}^{(0)}+\frac{1}{m^{2}}(+\frac{9}{8}O_{0}+O_{7}+ \frac{1}{6}O_{6}+\frac{1}{2}O_{5}+\frac{2}{3}O_{4}+\frac{1}{2}O_{3}-\frac{1}{2 }O_{13}+\frac{3}{8}O_{12}+\frac{3}{8}O_{11}+\] (B182) \[\qquad+\frac{1}{8}O_{10}+\frac{3}{4}O_{1}),\] \[\bar{O}_{90} =24O_{\mathbf{P}}^{(0)}+\frac{1}{m^{2}}(-\frac{9}{2}O_{9}-4O_{8} -12O_{7}-2O_{5}-O_{4}-2O_{3}+4O_{2}+6O_{13}+\frac{3}{2}O_{12}+\frac{9}{2}O_{1 1}+\frac{1}{2}O_{10}),\] (B183) \[\bar{O}_{91} =24O_{\mathbf{P}}^{(0)}+\frac{1}{m^{2}}(-3O_{9}-2O_{8}-6O_{7}- \frac{1}{6}O_{6}-\frac{5}{2}O_{5}-\frac{17}{12}O_{4}-\frac{7}{4}O_{3}+\frac{5}{ 4}O_{2}+3O_{13}+\] (B184) \[\qquad+\frac{3}{2}O_{11}-\frac{1}{2}O_{10}-\frac{3}{4}O_{1}),\] \[\bar{O}_{92} =12O_{\mathbf{P}}^{(0)}+\frac{1}{m^{2}}(-\frac{33}{4}O_{9}-6O_{7} -\frac{3}{2}O_{4}-\frac{3}{2}O_{3}-\frac{3}{2}O_{2}-\frac{3}{4}O_{12}-\frac{3 }{4}O_{11}+\frac{3}{4}O_{10}-3O_{1}).\] (B185)
## Appendix C Details of the calculation of the boost correction from Poincare Algebra
In this appendix, we present an outline of the calculation process leading to the result of Eq. (27) for the 3N boost correction. Further details may be found in Ref. [36]. Starting from Eq. (22), we calculate the matrix elements of \(\delta V(\mathbf{P})\) between 3N initial and final states \(\Psi\) and \(\Psi^{\prime}\), i.e. \(\bra{\Psi}\delta V(\mathbf{P})\ket{\Psi^{\prime}}\).
First of all, we recall that the leading contact potential \(V^{(0)}\) is the identity in spin and isospin space, so its matrix elements between 3N eigenstates \(\ket{\mathbf{p_{1}},\sigma_{1},\tau_{1};\mathbf{p_{2}},\sigma_{2},\tau_{2};\mathbf{p_{3}},\sigma_{3},\tau_{3}}\) (see Appendix D) are given by
\[\bra{\mathbf{p_{1}},\sigma_{1},\tau_{1};\mathbf{p_{2}},\sigma_{2},\tau_{2}; \mathbf{p_{3}},\sigma_{3},\tau_{3}}V^{(0)} \ket{\mathbf{p_{1}^{\prime}},\sigma_{1}^{\prime},\tau_{1}^{\prime};\mathbf{p_{2} ^{\prime}},\sigma_{2}^{\prime},\tau_{2}^{\prime};\mathbf{p_{3}^{\prime}},\sigma_{3}^ {\prime},\tau_{3}^{\prime}}=\] (C186) \[= E_{0}\delta(\mathbf{P}-\mathbf{P^{\prime}})\prod_{\nu=1}^{3}\delta_{ \sigma_{\nu}\sigma_{\nu}^{\prime}}\delta_{\tau_{\nu}\tau_{\nu}^{\prime}}+\text{ exchange terms};\]
We will use Jacobi coordinates defined in Eq. (21).
We adopt the following notation conventions: momentum and position operators are denoted by \(\mathbf{\pi_{a,b}}\) and \(\mathbf{\rho_{a,b}}\), respectively, and their eigenvalues by \(\mathbf{q_{a,b}}\) and \(\mathbf{\xi_{a,b}}\), respectively (i.e., \(\mathbf{\pi_{a,b}}\ket{\mathbf{q_{a,b}}}=\mathbf{q_{a,b}}\ket{\mathbf{q_{a,b}}}\) and \(\mathbf{\rho_{a,b}}\ket{\mathbf{\xi_{a,b}}}=\mathbf{\xi_{a,b}}\ket{\mathbf{\xi_{a,b}}}\)). We introduce the shorthands \(\mathbf{q}\equiv(\mathbf{q_{a}},\mathbf{q_{b}})\) and \(\mathbf{q^{\prime}}\equiv(\mathbf{q^{\prime}_{a}},\mathbf{q^{\prime}_{b}})\), \(\mathbf{d}\mathbf{q}\equiv\mathbf{d}\mathbf{q_{a}}\mathbf{d}\mathbf{q_{b}}\) and \(\mathbf{d}\mathbf{q^{\prime}}\equiv\mathbf{d}\mathbf{q^{\prime}_{a}}\mathbf{d}\mathbf{q^{\prime}_{b}}\), \(\Psi\equiv\Psi(\mathbf{P},\mathbf{q})\) and \(\Psi^{\prime}\equiv\Psi^{\prime}(\mathbf{P^{\prime}},\mathbf{q^{\prime}})\). The notation \(\big{[}(\mathbf{q^{\prime}},\mathbf{q^{\prime\prime}})\rightarrow(\mathbf{q^{\prime\prime}},\mathbf{q})\big{]}\) indicates that, in the expression preceding it, \(\mathbf{q^{\prime}}\) is replaced by \(\mathbf{q^{\prime\prime}}\) and \(\mathbf{q^{\prime\prime}}\) by \(\mathbf{q}\). We consider the particles to be identical in the initial and final states \(\Psi\) and \(\Psi^{\prime}\), while we treat them as if they were distinguishable in the intermediate states (see Appendix D).
During calculations we can ignore permutation terms, using orthogonality and closure relations given by Eqs. (D210)-(D213).
Therefore, we may rewrite Eq. (C186) using Jacobi coordinates as
\[\bra{\mathbf{P},\mathbf{q_{a}},\mathbf{q_{b}}}V^{(0)}\ket{\mathbf{P^{\prime}},\mathbf{q^{\prime}_{a} },\mathbf{q^{\prime}_{b}}}=E_{0}\delta(\mathbf{P}-\mathbf{P^{\prime}}),\] (C187)
suppressing spin and isospin indices in the \(3N\) state labels and neglecting the spin-isospin Kronecker deltas \(\delta_{\sigma\sigma^{\prime}}\delta_{\tau\tau^{\prime}}\).
The factor \(E_{0}\) can be safely ignored during the calculation process and later reintroduced, as it merely acts as a global factor.
To proceed with the calculation of \(\langle\Psi^{\prime}|\,\delta V(\mathbf{P})\,|\Psi\rangle\), we write \(\chi_{0}\equiv\chi_{0}{}^{(1)}+\chi_{0}{}^{(2)}+\chi_{0}{}^{(3)}\), with
\[\chi_{0}{}^{(1)} =-\frac{1}{4(3m)^{2}}\bigg{[}\mathbf{\rho_{a}}\cdot\mathbf{P\pi_{a}}\cdot\mathbf{P}+\mathbf{\rho_{b}}\cdot\mathbf{P\pi_{b}}\cdot\mathbf{P}\bigg{]}+H.c., \tag{C188}\] \[\chi_{0}{}^{(2)} =\frac{1}{12m^{2}}\bigg{[}\mathbf{\rho_{a}}\cdot\mathbf{P\pi_{a}}\cdot\mathbf{\pi_{b}}-\frac{1}{2}\mathbf{\rho_{b}}\cdot\mathbf{P\pi_{b}}^{2}+\frac{2}{3}\mathbf{\rho_{b}}\cdot\mathbf{P\pi_{a}}^{2}\bigg{]}+H.c., \tag{C189}\] \[\chi_{0}{}^{(3)} =-\frac{1}{6m^{2}}\bigg{[}\mathbf{s_{a}}\wedge\mathbf{P}\cdot\mathbf{\pi_{a}}+\mathbf{s_{b}}\wedge\mathbf{P}\cdot\mathbf{\pi_{b}}\bigg{]}, \tag{C190}\]
where \(\mathbf{s_{a}}\equiv\mathbf{s_{1}}-\mathbf{s_{2}}\) and \(\mathbf{s_{b}}\equiv\mathbf{s_{3}}-\frac{\mathbf{s_{1}}+\mathbf{s_{2}}}{2}\).
We find the following intermediate results,
\[\langle\Psi^{\prime}|-\frac{\mathbf{P}^{2}V^{(0)}}{2(3m)^{2}}\,|\Psi\rangle =-\frac{E_{0}}{2(3m)^{2}}\int\mathbf{dPdqdq^{\prime}}\Psi^{\prime*}\Psi\,\mathbf{P}^{2} \tag{C191}\] \[-i\,\langle\Psi^{\prime}|\,[\chi_{0}{}^{(1)},V^{(0)}]\,|\Psi\rangle =-\frac{E_{0}}{4(3m)^{2}}\int\mathbf{dPdqdq^{\prime}}\Psi^{\prime*}\Psi\,4\mathbf{P}^{2} \tag{C192}\] \[-i\,\langle\Psi^{\prime}|\,[\chi_{0}{}^{(2)},V^{(0)}]\,|\Psi\rangle =0 \tag{C193}\] \[-i\,\langle\Psi^{\prime}|\,[\chi_{0}{}^{(3)},V^{(0)}]\,|\Psi\rangle =-i\frac{E_{0}}{6m^{2}}\int\mathbf{dPdqdq^{\prime}}\Psi^{\prime*}\Psi\big{[}\mathbf{P}\wedge(\mathbf{q}_{a}-\mathbf{q}_{a}^{\prime})\cdot\mathbf{s_{a}}+\mathbf{P}\wedge(\mathbf{q_{b}}-\mathbf{q}_{b}^{\prime})\cdot\mathbf{s_{b}}\big{]}, \tag{C194}\]
leading to the final result in Jacobi coordinates,
\[\langle\Psi^{\prime}|\,\delta V(\mathbf{P})\,|\Psi\rangle=-\frac{E_{0}}{6m^{2}}\int\mathbf{dPdqdq^{\prime}}\Psi^{\prime*}\Psi\Big{[}\mathbf{P}^{2}+i\mathbf{P}\wedge(\mathbf{q_{a}}-\mathbf{q}_{a}^{\prime})\cdot\mathbf{s_{a}}+i\mathbf{P}\wedge(\mathbf{q_{b}}-\mathbf{q}_{b}^{\prime})\cdot\mathbf{s_{b}}\Big{]}. \tag{C195}\]
In matrix elements such as those appearing in the above Eqs. (C191), (C192), (C193), (C194), exchange terms will always be present, as in Eq. (C186), due to the symmetry of the operators under the exchange of particles (see Appendix D). Such exchange terms will be implicitly understood during calculations.
Eq. (C191) follows directly from Eq. (C187).
In the remainder of this appendix, we carry out the calculation leading to Eq. (C192); Eqs. (C193) and (C194) are obtained with a similar procedure.
It follows from Eq. (C187) that, for any operator \(\mathbf{X}\),
\[\langle\Psi^{\prime}|\,[\mathbf{X},V^{(0)}]\,|\Psi\rangle= \int\mathbf{dPdP^{\prime}dqdq^{\prime}dq^{\prime\prime}}\Psi^{\prime*}\Psi\bigg{[}\langle\mathbf{P^{\prime}},\mathbf{q^{\prime}}|\,\mathbf{X}\,|\mathbf{P},\mathbf{q^{\prime\prime}}\rangle-\langle\mathbf{P^{\prime}},\mathbf{q^{\prime\prime}}|\,\mathbf{X}\,|\mathbf{P},\mathbf{q}\rangle\bigg{]} \tag{C196}\] \[= \int\mathbf{dPdP^{\prime}dqdq^{\prime}dq^{\prime\prime}}\Psi^{\prime*}\Psi\bigg{[}\langle\mathbf{P^{\prime}},\mathbf{q^{\prime}}|\,\mathbf{X}\,|\mathbf{P},\mathbf{q^{\prime\prime}}\rangle-\big{[}(\mathbf{q^{\prime}},\mathbf{q^{\prime\prime}})\to(\mathbf{q^{\prime\prime}},\mathbf{q})\big{]}\bigg{]}.\]
In Eq. (C196) we identify \(\mathbf{X}=\chi_{0}{}^{(1)}\), which, writing out the Hermitian conjugate of Eq. (C188), reads \(\chi_{0}{}^{(1)}=-\frac{1}{4(3m)^{2}}\big{[}\big{(}\mathbf{\rho_{a}}\cdot\mathbf{P\pi_{a}}\cdot\mathbf{P}+\mathbf{\pi_{a}}\cdot\mathbf{P\rho_{a}}\cdot\mathbf{P}\big{)}+\big{(}\mathbf{\rho_{b}}\cdot\mathbf{P\pi_{b}}\cdot\mathbf{P}+\mathbf{\pi_{b}}\cdot\mathbf{P\rho_{b}}\cdot\mathbf{P}\big{)}\big{]}.\) Carrying out the calculation only for the component involving the Jacobi variables \(\mathbf{\rho_{a}}\), \(\mathbf{\pi_{a}}\), we get
\[-i\,\langle\Psi^{\prime}|\,\bigg{[}-\frac{1}{4(3m)^{2}}\big{(}\mathbf{\rho_{a}}\cdot\mathbf{P\pi_{a}}\cdot\mathbf{P}+\mathbf{\pi_{a}}\cdot\mathbf{P\rho_{a}}\cdot\mathbf{P}\big{)},V^{(0)}\bigg{]}\,|\Psi\rangle= \tag{C197}\] \[= i\frac{1}{4(3m)^{2}}\int\mathbf{dPdP^{\prime}dqdq^{\prime}dq^{\prime\prime}}\Psi^{\prime*}\Psi\bigg{[}\langle\mathbf{P^{\prime}},\mathbf{q^{\prime}}|\,\big{[}\mathbf{\rho_{a}}\cdot\mathbf{P\pi_{a}}\cdot\mathbf{P}+\mathbf{\pi_{a}}\cdot\mathbf{P\rho_{a}}\cdot\mathbf{P}\big{]}\,|\mathbf{P},\mathbf{q^{\prime\prime}}\rangle\] \[-\big{[}(\mathbf{q^{\prime}},\mathbf{q^{\prime\prime}})\to(\mathbf{q^{\prime\prime}},\mathbf{q})\big{]}\bigg{]}.\]
We insert the coordinate closure relation (D212),
\[\langle\mathbf{P^{\prime}},\mathbf{q^{\prime}}|\,(\mathbf{\rho_{a}}\cdot\mathbf{P\pi_{a}}\cdot\mathbf{P}+\mathbf{\pi_{a}}\cdot\mathbf{P\rho_{a}}\cdot\mathbf{P})\,|\mathbf{P},\mathbf{q^{\prime\prime}}\rangle= \tag{C198}\] \[= \langle\mathbf{P^{\prime}},\mathbf{q^{\prime}}|\,\mathbf{\rho_{a}}\,|\mathbf{P},\mathbf{q^{\prime\prime}}\rangle\cdot\mathbf{P}(\mathbf{q^{\prime\prime}_{a}}+\mathbf{q^{\prime}_{a}})\cdot\mathbf{P}\] \[= \langle\mathbf{P^{\prime}},\mathbf{q^{\prime}}|\,\mathbb{1}_{(\mathbf{R},\mathbf{\xi})}\mathbf{\rho_{a}}\,|\mathbf{P},\mathbf{q^{\prime\prime}}\rangle\cdot\mathbf{P}(\mathbf{q^{\prime\prime}_{a}}+\mathbf{q^{\prime}_{a}})\cdot\mathbf{P}\] \[= \int\mathbf{d\xi_{a}}\delta(\mathbf{P}-\mathbf{P^{\prime}})\delta(\mathbf{q^{\prime\prime}_{b}}-\mathbf{q^{\prime}_{b}})e^{i\mathbf{\xi_{a}}\cdot(\mathbf{q^{\prime\prime}_{a}}-\mathbf{q^{\prime}_{a}})}\mathbf{\xi_{a}}\cdot\mathbf{P}(\mathbf{q^{\prime\prime}_{a}}+\mathbf{q^{\prime}_{a}})\cdot\mathbf{P},\]
and we recall that, since the variables \(\mathbf{\xi}_{\mathbf{a}}\) and \(\mathbf{q}^{\prime}_{\mathbf{a}}\) are canonically conjugate, we have
\[e^{i\mathbf{\xi}_{\mathbf{a}}\cdot(\mathbf{q}^{\prime\prime}_{\mathbf{a}}-\mathbf{q}^{ \prime}_{\mathbf{a}})}\mathbf{\xi}_{\mathbf{a}} =i\frac{\vec{\partial}}{\partial\mathbf{q}^{\prime}_{\mathbf{a}}}\bigg{[} e^{i\mathbf{\xi}_{\mathbf{a}}\cdot(\mathbf{q}^{\prime\prime}_{\mathbf{a}}-\mathbf{q}^{\prime}_{\mathbf{a}})} \bigg{]},\] (C199) \[e^{i\mathbf{\xi}_{\mathbf{a}}\cdot(\mathbf{q}_{\mathbf{a}}-\mathbf{q}^{\prime\prime}_ {\mathbf{a}})}\mathbf{\xi}_{\mathbf{a}} =-i\frac{\vec{\partial}}{\partial\mathbf{q}_{\mathbf{a}}}\bigg{[}e^{i\mathbf{ \xi}_{\mathbf{a}}\cdot(\mathbf{q}_{\mathbf{a}}-\mathbf{q}^{\prime\prime}_{\mathbf{a}})}\bigg{]};\]
therefore, by substituting Eq. (C199) into Eq. (C198), then Eq. (C198) into Eq. (C197), and performing the integration of Eq. (C197) with respect to \(\mathbf{P}^{\prime}\) and \(\mathbf{q}^{\prime\prime}_{\mathbf{b}}\), we obtain
\[-i\left\langle\Psi^{\prime}\right|\left[-\frac{1}{4(3m)^{2}}\big{(}\mathbf{\rho}_{\mathbf{a}}\cdot\mathbf{P}\mathbf{\pi}_{\mathbf{a}}\cdot\mathbf{P}+\mathbf{\pi}_{\mathbf{a}}\cdot\mathbf{P}\mathbf{\rho}_{\mathbf{a}}\cdot\mathbf{P}\big{)},V^{(0)}\right]\left|\Psi\right\rangle=\] \[=-\frac{1}{4(3m)^{2}}\int\mathbf{dP}\,\mathbf{dq}\,\mathbf{dq^{\prime}}\,\mathbf{dq^{\prime\prime}_{a}}\,\mathbf{d\xi_{a}}\,\Psi^{\prime*}\Psi\Bigg{\{}\frac{\vec{\partial}}{\partial\mathbf{q}^{\prime}_{\mathbf{a}}}\bigg{[}e^{i\mathbf{\xi}_{\mathbf{a}}\cdot(\mathbf{q}^{\prime\prime}_{\mathbf{a}}-\mathbf{q}^{\prime}_{\mathbf{a}})}\bigg{]}\cdot\mathbf{P}(\mathbf{q}^{\prime\prime}_{\mathbf{a}}+\mathbf{q}^{\prime}_{\mathbf{a}})\cdot\mathbf{P}+\] \[\quad+\big{[}(\mathbf{q}^{\prime},\mathbf{q}^{\prime\prime})\to(\mathbf{q}^{\prime\prime},\mathbf{q})\big{]}\Bigg{\}}. \tag{C200}\]
We now integrate the first term in Eq. (C200) by parts with respect to \(\mathbf{q}^{\prime}_{\mathbf{a}}\). The other term obtained by exchanging \(\big{[}(\mathbf{q}^{\prime},\mathbf{q}^{\prime\prime})\to(\mathbf{q}^{\prime\prime},\mathbf{ q})\big{]}\) is treated in a similar way, integrating it by parts with respect to \(\mathbf{q}_{\mathbf{a}}\), and contributes in the same way. We obtain
\[\int\mathbf{dP}\,\mathbf{dq}\,\mathbf{dq^{\prime}}\,\mathbf{dq^{\prime\prime}_{a}}\,\mathbf{d\xi_{a}}\,\Psi^{\prime*}\Psi\,\frac{\vec{\partial}}{\partial\mathbf{q}^{\prime}_{\mathbf{a}}}\bigg{[}e^{i\mathbf{\xi}_{\mathbf{a}}\cdot(\mathbf{q}^{\prime\prime}_{\mathbf{a}}-\mathbf{q}^{\prime}_{\mathbf{a}})}\bigg{]}\cdot\mathbf{P}(\mathbf{q}^{\prime\prime}_{\mathbf{a}}+\mathbf{q}^{\prime}_{\mathbf{a}})\cdot\mathbf{P}= \tag{C201}\] \[=\int\mathbf{dP}\,\mathbf{dq}\,\mathbf{dq^{\prime}}\,\mathbf{dq^{\prime\prime}_{a}}\,\mathbf{d\xi_{a}}\,\mathbf{P}\cdot\frac{\vec{\partial}}{\partial\mathbf{q}^{\prime}_{\mathbf{a}}}\bigg{[}\Psi^{\prime*}\Psi e^{i\mathbf{\xi}_{\mathbf{a}}\cdot(\mathbf{q}^{\prime\prime}_{\mathbf{a}}-\mathbf{q}^{\prime}_{\mathbf{a}})}(\mathbf{q}^{\prime\prime}_{\mathbf{a}}+\mathbf{q}^{\prime}_{\mathbf{a}})\cdot\mathbf{P}\bigg{]}\] \[-\int\mathbf{dP}\,\mathbf{dq}\,\mathbf{dq^{\prime}}\,\mathbf{dq^{\prime\prime}_{a}}\,\mathbf{d\xi_{a}}\,\mathbf{P}\cdot\frac{\vec{\partial}}{\partial\mathbf{q}^{\prime}_{\mathbf{a}}}\bigg{[}\Psi^{\prime*}\Psi(\mathbf{q}^{\prime\prime}_{\mathbf{a}}+\mathbf{q}^{\prime}_{\mathbf{a}})\cdot\mathbf{P}\bigg{]}e^{i\mathbf{\xi}_{\mathbf{a}}\cdot(\mathbf{q}^{\prime\prime}_{\mathbf{a}}-\mathbf{q}^{\prime}_{\mathbf{a}})}\] \[\equiv\int\mathbf{dP}\,\mathbf{dq}\,\mathbf{dq^{\prime}}\,(I_{1}+I_{2})=\int\mathbf{dP}\,\mathbf{dq}\,\mathbf{dq^{\prime}}\,\Psi^{\prime*}\Psi\,\mathbf{P}^{2},\]
where
\[I_{1}= \int\mathbf{dq^{\prime\prime}_{\mathbf{a}}d\mathbf{\xi}_{\mathbf{a}}}\quad\mathbf{P} \cdot\frac{\vec{\partial}}{\partial\mathbf{q}^{\prime}_{\mathbf{a}}}\bigg{[}\Psi^{ \prime*}\Psi e^{i\mathbf{\xi}_{\mathbf{a}}\cdot(\mathbf{q}^{\prime\prime}_{\mathbf{a}}-\mathbf{q}^ {\prime}_{\mathbf{a}})}(\mathbf{q}^{\prime\prime}_{\mathbf{a}}+\mathbf{q}^{\prime}_{\mathbf{a}}) \cdot\mathbf{P}\bigg{]}\] (C202) \[=\mathbf{P}\cdot\frac{\vec{\partial}}{\partial\mathbf{q}^{\prime}_{\mathbf{ a}}}\bigg{[}\Psi^{\prime*}\Psi 2\mathbf{q}^{\prime}_{\mathbf{a}}\cdot\mathbf{P}\bigg{]};\] \[I_{2}= -\int\mathbf{dq^{\prime\prime}_{\mathbf{a}}d\mathbf{\xi}_{\mathbf{a}}\mathbf{P}} \cdot\frac{\vec{\partial}}{\partial\mathbf{q}^{\prime}_{\mathbf{a}}}\bigg{[}\Psi^{ \prime*}\Psi(\mathbf{q}^{\prime\prime}_{\mathbf{a}}+\mathbf{q}^{\prime}_{\mathbf{a}})\cdot\bm {P}\bigg{]}e^{i\mathbf{\xi}_{\mathbf{a}}\cdot(\mathbf{q}^{\prime\prime}_{\mathbf{a}}-\mathbf{q}^ {\prime}_{\mathbf{a}})}\] \[= -I_{1}+\Psi^{\prime*}\Psi\mathbf{P}^{2}.\] (C203)
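For completeness, the second equality in Eq. (C203) can be verified explicitly: integrating over \(\mathbf{\xi_{a}}\) produces, up to an irrelevant normalization, \(\delta(\mathbf{q^{\prime\prime}_{a}}-\mathbf{q^{\prime}_{a}})\), and carrying out the derivatives gives
\[I_{1}=\mathbf{P}\cdot\frac{\vec{\partial}}{\partial\mathbf{q}^{\prime}_{\mathbf{a}}}\big{[}\Psi^{\prime*}\Psi\,2\mathbf{q}^{\prime}_{\mathbf{a}}\cdot\mathbf{P}\big{]}=2\,\mathbf{q}^{\prime}_{\mathbf{a}}\cdot\mathbf{P}\;\mathbf{P}\cdot\frac{\vec{\partial}(\Psi^{\prime*}\Psi)}{\partial\mathbf{q}^{\prime}_{\mathbf{a}}}+2\,\Psi^{\prime*}\Psi\,\mathbf{P}^{2},\qquad I_{2}=-2\,\mathbf{q}^{\prime}_{\mathbf{a}}\cdot\mathbf{P}\;\mathbf{P}\cdot\frac{\vec{\partial}(\Psi^{\prime*}\Psi)}{\partial\mathbf{q}^{\prime}_{\mathbf{a}}}-\Psi^{\prime*}\Psi\,\mathbf{P}^{2},\]
so that \(I_{1}+I_{2}=\Psi^{\prime*}\Psi\,\mathbf{P}^{2}\), as used above.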
We take into account both terms in Eq. (C200) and find
\[-i\left\langle\Psi^{\prime}\right|\left[-\frac{1}{4(3m)^{2}}\big{(}\mathbf{\rho}_{\mathbf{a}}\cdot\mathbf{P}\mathbf{\pi}_{\mathbf{a}}\cdot\mathbf{P}+\mathbf{\pi}_{\mathbf{a}}\cdot\mathbf{P}\mathbf{\rho}_{\mathbf{a}}\cdot\mathbf{P}\big{)},V^{(0)}\right]\left|\Psi\right\rangle=-\frac{1}{4(3m)^{2}}\int\mathbf{dPdqdq^{\prime}}\,\Psi^{\prime*}\Psi\,2\mathbf{P}^{2}. \tag{C204}\]
The component in \(\chi^{(1)}_{0}\) involving Jacobi variables \(\mathbf{\rho}_{\mathbf{b}}\), \(\mathbf{\pi}_{\mathbf{b}}\) makes an equal contribution. Thus, the final result is given by Eq. (C192).
We observe that it would have been easier to write, instead of (C199),
\[e^{i\mathbf{\xi}_{\mathbf{a}}\cdot(\mathbf{q}^{\prime\prime}_{\mathbf{a}}-\mathbf{q}^{ \prime}_{\mathbf{a}})}\mathbf{\xi}_{\mathbf{a}} =-i\frac{\vec{\partial}}{\partial\mathbf{q}^{\prime\prime}_{\mathbf{ a}}}\bigg{[}e^{i\mathbf{\xi}_{\mathbf{a}}\cdot(\mathbf{q}^{\prime\prime}_{\mathbf{a}}-\mathbf{q}^{ \prime}_{\mathbf{a}})}\bigg{]},\] (C205) \[e^{i\mathbf{\xi}_{\mathbf{a}}\cdot(\mathbf{q}_{\mathbf{a}}-\mathbf{q}^{\prime\prime}_ {\mathbf{a}})}\mathbf{\xi}_{\mathbf{a}} =+i\frac{\vec{\partial}}{\partial\mathbf{q}^{\prime\prime}_{\mathbf{a}}} \bigg{[}e^{i\mathbf{\xi}_{\mathbf{a}}\cdot(\mathbf{q}_{\mathbf{a}}-\mathbf{q}^{\prime\prime}_{\mathbf{ a}})}\bigg{]}.\]
In this case, by substituting Eq. (C205) into Eq. (C198), then Eq. (C198) into Eq. (C197), and performing the integration by parts with respect to \(\mathbf{q^{\prime\prime}_{a}}\), we obtain
\[-i\left\langle\Psi^{\prime}\right|\left[-\frac{1}{4(3m)^{2}}\big{(} \mathbf{\rho_{a}}\cdot\mathbf{P}\mathbf{\pi_{a}}\cdot\mathbf{P}+\mathbf{\pi_{a}}\cdot\mathbf{P}\mathbf{\rho _{a}}\cdot\mathbf{P}\big{)},V^{(0)}\right]\left|\Psi\right\rangle=\] \[=-\frac{1}{4(3m)^{2}}\int\mathbf{dP}\mathbf{dqdq^{\prime}dq^{\prime \prime}d\mathbf{\xi_{a}}\Psi^{\prime*}\Psi\bigg{[}\frac{\ddot{\partial}}{ \partial\mathbf{q^{\prime\prime}_{a}}}\Big{(}e^{i\mathbf{\xi_{a}}\cdot(\mathbf{q^{\prime \prime}_{a}}-\mathbf{q^{\prime}_{a}})}\Big{)}\cdot\mathbf{P}(\mathbf{q^{\prime\prime}_{a} }+\mathbf{q^{\prime}_{a}})\cdot\mathbf{P}\] \[\quad+\big{[}(\mathbf{q^{\prime}},\mathbf{q^{\prime\prime}})\to(\mathbf{q^{ \prime\prime}},\mathbf{q})\Big{]}\bigg{]}\] (C206) \[=-\frac{1}{4(3m)^{2}}\bigg{\{}\int\mathbf{dP}\mathbf{dqdq^{\prime}dq^{ \prime\prime}d\mathbf{\xi_{a}}P}\cdot\frac{\ddot{\partial}}{\partial\mathbf{q^{\prime \prime}_{a}}}\bigg{[}\Psi^{\prime*}\Psi e^{i\mathbf{\xi_{a}}\cdot(\mathbf{q^{\prime \prime}_{a}}-\mathbf{q^{\prime}_{a}})}(\mathbf{q^{\prime\prime}_{a}}+\mathbf{q^{\prime}_{ a}})\cdot\mathbf{P}\bigg{]}\] \[-\int\mathbf{dP}\mathbf{dqdq^{\prime}dq^{\prime\prime}d\mathbf{\xi_{a}}P} \cdot\frac{\ddot{\partial}}{\partial\mathbf{q^{\prime}_{a}}}\bigg{[}\Psi^{\prime* }\Psi(\mathbf{q^{\prime\prime}_{a}}+\mathbf{q^{\prime}_{a}})\cdot\mathbf{P}\bigg{]}e^{i \mathbf{\xi_{a}}\cdot(\mathbf{q^{\prime\prime}_{a}}-\mathbf{q^{\prime}_{a}})}+\big{[}(\mathbf{ q^{\prime}},\mathbf{q^{\prime\prime}})\to(\mathbf{q^{\prime\prime}},\mathbf{q})\big{]}\bigg{\}}\]
Since \(\Psi\) is a square-integrable function, it is legitimate to assume that the first term in the above equation, and its counterpart obtained by substituting \((\mathbf{q^{\prime}},\mathbf{q^{\prime\prime}})\to(\mathbf{q^{\prime\prime}},\mathbf{q})\), vanish at the boundary when the integration domain becomes arbitrarily large. By carrying out the partial differentiation and by considering all terms in \(\chi_{0}^{(1)}\), we arrive again at the result (C192).
## Appendix D Orthogonality and closure relations
We define physical states in terms of the variables \(\mathbf{p_{\nu}}\), \(\mathbf{r_{\nu}}\), \(\mathbf{\sigma_{\nu}}\), \(\mathbf{\tau_{\nu}}\) as follows. We indicate a generic permutation of indices 1, 2, 3 with \(\mathbf{\alpha_{i}}=(\alpha_{i}^{1},\alpha_{i}^{2},\alpha_{i}^{3})\) and its sign with \(\epsilon_{\mathbf{\alpha_{i}}}\):
\[\begin{split}\left|\mathbf{p_{1}},\sigma_{1},\tau_{1};\mathbf{p_{2}},\sigma_{2},\tau_{2};\mathbf{p_{3}},\sigma_{3},\tau_{3}\right\rangle&=\sum_{\mathbf{\alpha_{i}}}\epsilon_{\mathbf{\alpha_{i}}}\left|\mathbf{p_{\alpha_{i}^{1}}},\sigma_{\alpha_{i}^{1}},\tau_{\alpha_{i}^{1}};\mathbf{p_{\alpha_{i}^{2}}},\sigma_{\alpha_{i}^{2}},\tau_{\alpha_{i}^{2}};\mathbf{p_{\alpha_{i}^{3}}},\sigma_{\alpha_{i}^{3}},\tau_{\alpha_{i}^{3}}\right\rangle,\\ \left|\mathbf{r_{1}},\sigma_{1},\tau_{1};\mathbf{r_{2}},\sigma_{2},\tau_{2};\mathbf{r_{3}},\sigma_{3},\tau_{3}\right\rangle&=\sum_{\mathbf{\alpha_{i}}}\epsilon_{\mathbf{\alpha_{i}}}\left|\mathbf{r_{\alpha_{i}^{1}}},\sigma_{\alpha_{i}^{1}},\tau_{\alpha_{i}^{1}};\mathbf{r_{\alpha_{i}^{2}}},\sigma_{\alpha_{i}^{2}},\tau_{\alpha_{i}^{2}};\mathbf{r_{\alpha_{i}^{3}}},\sigma_{\alpha_{i}^{3}},\tau_{\alpha_{i}^{3}}\right\rangle.\end{split}\] (D207)
With the shorthand notation \(\mathbf{y_{\nu}}\in\{\mathbf{r_{\nu}},\mathbf{p_{\nu}}\}\), \(\kappa_{\nu}=(\sigma_{\nu},\tau_{\nu})\), with \(\nu=1,2,3\), orthogonality relations can be written
\[\left\langle\mathbf{y_{1}},\kappa_{1};\mathbf{y_{2}},\kappa_{2};\mathbf{y_{3}},\kappa_{3}|\mathbf{y^{\prime}_{1}},\kappa_{1}^{\prime};\mathbf{y^{\prime}_{2}},\kappa_{2}^{\prime};\mathbf{y^{\prime}_{3}},\kappa_{3}^{\prime}\right\rangle=\prod_{\nu=1}^{3}\delta(\mathbf{y_{\nu}}-\mathbf{y^{\prime}_{\nu}})\delta_{\kappa_{\nu}\kappa_{\nu}^{\prime}}+\text{exchange terms},\] (D208)
and closure relations can be written
\[\mathbb{1}=\frac{1}{6}\bigg{[}\sum_{\kappa_{1}\kappa_{2}\kappa_{3}}\int\mathbf{dy_{1}}\mathbf{dy_{2}}\mathbf{dy_{3}}\left|\mathbf{y_{1}},\kappa_{1};\mathbf{y_{2}},\kappa_{2};\mathbf{y_{3}},\kappa_{3}\right\rangle\left\langle\mathbf{y_{1}},\kappa_{1};\mathbf{y_{2}},\kappa_{2};\mathbf{y_{3}},\kappa_{3}\right|\bigg{]}.\] (D209)
In view of the above, Jacobi variables of momentum and position satisfy the following orbital closure and orthogonality relations, where permutation terms are understood:
\[\int\mathbf{dP}\mathbf{dq_{a}}\mathbf{dq_{b}}\left|\mathbf{P},\mathbf{q_{a}},\mathbf{q_{b }}\right\rangle\left\langle\mathbf{P},\mathbf{q_{a}},\mathbf{q_{b}}\right| =\mathbb{1}_{(\mathbf{P},\mathbf{q_{a}},\mathbf{q_{b}})};\] (D210) \[\left\langle\mathbf{P},\mathbf{q_{a}},\mathbf{q_{b}}|\mathbf{P^{\prime}},\mathbf{q^{ \prime}_{a}},\mathbf{q^{\prime}_{b}}\right\rangle =\delta(\mathbf{P}-\mathbf{P^{\prime}})\delta(\mathbf{q_{a}}-\mathbf{q^{\prime}_{a} })\delta(\mathbf{q_{b}}-\mathbf{q^{\prime}_{b}});\] (D211) \[\int\mathbf{dR}\mathbf{d\xi_{a}}\mathbf{d\xi_{b}}\left|\mathbf{R},\mathbf{\xi_{a}},\mathbf{ \xi_{b}}\right\rangle\left\langle\mathbf{R},\mathbf{\xi_{a}},\mathbf{\xi_{b}}\right| =\mathbb{1}_{(\mathbf{R},\mathbf{\xi_{a}},\mathbf{\xi_{b}})};\] (D212) \[\left\langle\mathbf{R},\mathbf{\xi_{a}},\mathbf{\xi_{b}}|\mathbf{R^{\prime}},\mathbf{ \xi^{\prime}_{a}},\mathbf{\xi^{\prime}_{b}}\right\rangle =\delta(\mathbf{R}-\mathbf{R^{\prime}})\delta(\mathbf{\xi_{a}}-\mathbf{\xi^{ \prime}_{a}})\delta(\mathbf{\xi_{b}}-\mathbf{\xi^{\prime}_{b}}).\] (D213)
|
2303.11624
|
Assessor-Guided Learning for Continual Environments
|
This paper proposes an assessor-guided learning strategy for continual
learning where an assessor guides the learning process of a base learner by
controlling the direction and pace of the learning process thus allowing an
efficient learning of new environments while protecting against the
catastrophic interference problem. The assessor is trained in a meta-learning
manner with a meta-objective to boost the learning process of the base learner.
It performs a soft-weighting mechanism of every sample accepting positive
samples while rejecting negative samples. The training objective of a base
learner is to minimize a meta-weighted combination of the cross entropy loss
function, the dark experience replay (DER) loss function and the knowledge
distillation loss function whose interactions are controlled in such a way to
attain an improved performance. A compensated over-sampling (COS) strategy is
developed to overcome the class imbalanced problem of the episodic memory due
to limited memory budgets. Our approach, Assessor-Guided Learning Approach
(AGLA), has been evaluated in the class-incremental and task-incremental
learning problems. AGLA achieves improved performances compared to its
competitors while the theoretical analysis of the COS strategy is offered.
Source codes of AGLA, baseline algorithms and experimental logs are shared
publicly in \url{https://github.com/anwarmaxsum/AGLA} for further study.
|
Muhammad Anwar Ma'sum, Mahardhika Pratama, Edwin Lughofer, Weiping Ding, Wisnu Jatmiko
|
2023-03-21T06:45:14Z
|
http://arxiv.org/abs/2303.11624v1
|
# Assessor-Guided Learning for Continual Environments
###### Abstract
This paper proposes an assessor-guided learning strategy for continual learning where an assessor guides the learning process of a base learner by controlling the direction and pace of the learning process thus allowing an efficient learning of new environments while protecting against the catastrophic interference problem. The assessor is trained in a meta-learning manner with a meta-objective to boost the learning process of the base learner. It performs a soft-weighting mechanism of every sample accepting positive samples while rejecting negative samples. The training objective of a base learner is to minimize a meta-weighted combination of the cross entropy loss function, the dark experience replay (DER) loss function and the knowledge distillation loss function whose interactions are controlled in such a way to attain an improved performance. A compensated oversampling (COS) strategy is developed to overcome the class imbalanced problem of the episodic memory due to limited memory budgets. Our approach, Assessor-Guided Learning Approach (AGLA), has been evaluated in the class-incremental and task-incremental learning problems. AGLA achieves improved performances compared to its competitors while the theoretical analysis of the COS strategy is offered. Source codes of AGLA, baseline algorithms and experimental logs are
shared publicly in [https://github.com/anwarmaxsum/AGLA](https://github.com/anwarmaxsum/AGLA) for further study.
keywords: Continual Learning, Meta-Learning, Lifelong Learning, Incremental Learning
\begin{table}
\begin{tabular}{l l} \hline \hline
**Abbreviation** & **Description** \\ \hline Task IL & Task Incremental Learning \\ Class IL & Class Incremental Learning \\ AGLA & Assessor-Guided Learning Approach \\ DER & Dark Experience Replay \\ ICARL & Incremental Classifier and Representation Learning \\ EEIL & End-to-End Incremental Learning \\ BIC & Bias Correction \\ EWC & Elastic Weight Consolidation \\ SI & Synaptic Intelligence \\ MAS & Memory Aware Synapse \\ LWF & Learning Without Forgetting \\ DMC & Deep Model Consolidation \\ HAL & Hindsight Anchor Learning \\ FIM & Fisher Information Matrix \\ SGD & Stochastic Gradient Descent \\ NAS & Neural Architecture Search \\ RLN & Representation Learning Network \\ PLN & Prediction Learning Network \\ COS & Compensated Over-Sampling \\ MLP & Multi-Layer Perceptron \\ CNN & Convolutional Neural Network \\ LSTM & Long-Short Term Memory \\ KL divergence & Kullback–Leibler divergence \\ MSE & Mean-Square Error \\ Assr & Assessor \\ Aug & Augmentation \\ R.Tr & Random Transformation \\ \hline \hline \end{tabular}
\end{table}
Table 1: Abbreviations list
## 1 Introduction
The continual learning problem aims to build a learning model that operates throughout its deployed life span and improves its intelligence as the number of learning tasks grows. Unlike conventional learning algorithms limited to a single task, a continual learner is exposed to streaming tasks where each task possesses varying characteristics, i.e., different data distributions, different target classes, or combinations of distributional changes and class changes. A continual learner has to adapt quickly to new environments without losing its relevance to old tasks. This problem is not trivial for a deep neural network because of the catastrophic interference problem, where old parameters are overwritten when learning a new task, causing the network to lose its generalization power on the old tasks. Because of uncertain and possibly infinite problem sizes, retraining from scratch is undesirable. In other words, the learning process occurs without complete access to old data samples.
The continual learning problem goes one step toward human-like intelligence where the continual learner must be capable of accumulating knowledge from already seen experiences. As a result, this area has picked up substantial research attention. Existing works are categorized into three groups, regularization-based approach [1], structure-based approach [2] and memory-based approach [3]. The regularization-based approach introduces an extra regularization term preventing important parameters of old tasks from deviations. This approach is simple to implement and computationally light. These approaches, however, do not scale well for large-scale problems because an overlapping region of all tasks is difficult to find with the regularization-based approach. The structure-based approach increases network capacity to deal with new tasks while isolating old network parameters to avoid the catastrophic forgetting problem. This approach is, however, computationally expensive and usually involves complex learning procedures. The memory-based approach takes another route where a small subset of old data samples are stored in the memory and interleaved with current samples when learning a new task. _Conventional experience replay mechanism requires hundreds of samples to be stored in the memory thus incurring expensive memory footprints. There also exists the class imbalanced problem because old samples are often lower in quantity than new samples. The assessor-guided learning approach is put forward here to address these drawbacks where an assessor controls the learning process of the base learner via a soft-weighting mechanism of loss functions for every sample. Memory augmentation is applied in our method to address the class-imbalance problem between the current task and the
previous tasks. The compensated over-sampling mechanism is integrated, in which self-corrections are performed while over-sampling to avoid out-of-distribution cases._
Assessor-Guided Learning Approach (AGLA) is proposed here where a sequence-aware assessor is integrated to navigate the learning process of a base model to attain an improved learning performance. The assessor is trained with a meta-objective to boost the generalization power of the base model via a soft-weighting mechanism of loss functions for every sample. A high-quality sample is assigned with a high weight whereas a low weight is assigned to poor samples. High-quality samples are those leading to positive forward and backward transfers whereas poor samples are those imposing high losses associated with catastrophic forgetting. Two data subsets, training subset and validation subset, are created for each task to simulate the training-testing procedure [4] where both subsets represent current and old concepts. The concept of random transformation [5] is integrated to craft the validation subset. The training procedure follows the meta-learning principle where the training subset is used to train the base model in the outer loop while the assessor utilizes the validation subset for its updates in the inner loop. The assessor produces a set of weights, cross-entropy weight, dark experience replay (DER) weight and distillation weight controlling the interaction of loss functions and in turn the influence of every sample. This is made possible by formulating the loss function as a meta-weighted combination of the cross entropy loss function, the DER loss function [6] and the knowledge distillation loss function [7]. The cross entropy loss focuses on current samples and past samples of the memory while the DER loss and the knowledge distillation loss targets past samples of the memory to maintain previous knowledge. In other words, the assessor steers a base model to address the stability- plasticity dilemma. It determines how much a base model should learn from the current condition and the past condition in respect to every sample. Note that both DER and knowledge distillation targets previous experiences because the number of previous tasks are larger than the current task but underrepresented in the training process, i.e., memory samples are much smaller than current samples. This aspect confirms the importance of meta-weighting mechanism regulating the influence of multi-objective functions seamlessly.
The class imbalanced problem due to disproportionate proportions of memory samples and currents samples is tackled using the compensated over-sampling (COS) strategy where compensations are performed while over-sampling via well-known data augmentation protocols to protect against out-of-distribution augmented samples undermining model's generalization. A lemma analyzing such
compensation w.r.t the bias-variance decomposition is provided as well as a theorem demonstrating reductions of MSEs as a result of the compensations is demonstrated.
This paper conveys five major contributions:
1. it puts forward the concept of assessor-guided continual learning where the sequence-aware assessor is deployed to guide the learning process of the base learner balancing the issue of stability and plasticity;
2. the compensated over-sampling (COS) strategy is proposed to deal with the class imbalanced problem underpinned with theoretical analyses;
3. a meta-training strategy via a bi-level optimization is put forward to train the assessor where the concept of random transformation is implemented to craft the validation subset;
4. a meta-weighted combination of three loss functions are proposed to train a base learner where each loss function is associated with either current or past conditions. This design offers an intuition of learning strategies for every sample, i.e, whether to focus on the current or previous contexts;
5. the source codes of AGLA and other supporting data are made public in [https://github.com/anwarmaxsum/AGLA](https://github.com/anwarmaxsum/AGLA) to enable further study.
The advantage of AGLA over existing approaches has been numerically validated under the class-incremental and task-incremental learning configurations. It is demonstrated that AGLA delivers improvements compared to recently published algorithms in realm of average accuracy while each learning component contributes positively to the performance. AGLA achieves comparable average forgetting indexes where it mostly attains the second place and maintains decent performances with various memory sizes compared to baseline algorithms.
## 2 Related Works
### Continual Learning
**Regularization-based method** designs a regularization term preventing important parameters of old tasks from deviations. Important parameters of old tasks are estimated and an important parameter matrix is integrated in the regularization term. Elastic Weight Consolidation (EWC) [1] adopts the Fisher information matrix (FIM) to estimate the importance of network synapses. Synaptic intelligence (SI) [8] utilizes an accumulated gradient to quantify the significance of network parameters and incurs less expensive computation than FIM. An alternative is
offered by memory aware synapses using an unsupervised and online approach [9]. Learning without forgetting (LWF) [10] applies the knowledge distillation approach making sure that current network outputs are close to previous network outputs. An online version of EWC is put forward in [11] where the parameter importance is approximated with the Laplace approximation. It is found that the regularization mechanism is better performed in the neuron level than the synaptic level because of the hierarchical nature of neural networks [12]. That is, the regularization step is achieved by controlling the learning rates of stochastic gradient descent (SGD) method. Similar approach is adopted in [13] but it considers the common information of each task enabling a node to be shared by different tasks. This strategy is capable of scaling up the regularization-based method for large-scale problems. Another approach to scale the regularization-based approach is done with the classifier's projection [14]. This approach induces wide local optimum regions, i.e., overlapping region. The regularization-based approach depends on the task IDs and task boundaries, thus performing poorly for the class-incremental learning problems.
**Structure-based approach** is pioneered by progressive neural networks [2] where new network components are added to deal with new tasks while freezing old network parameters to handle the catastrophic forgetting problem. The complexity of this approach grows as the increase of learning task. An error-based network growing method is put forward in [15]. It utilizes a selective-based retraining approach to handle the catastrophic forgetting problem. Learn-to-grow approach is developed in [16] where neural architecture search (NAS) is integrated to obtain the best network configuration of new tasks while isolating old network parameters. The same concept is implemented for graph continual learning in [17]. The Bayesian approach is incorporated in [18]. The idea is akin to [16] but the Bayesian approach is adopted instead of NAS to find the best network structure. These approaches are computationally prohibitive and calls for the presence of tasks IDs and task's boundaries. [19; 20] offer a data-driven structural learning approach for unsupervised continual learning where the bias-variance decomposition is put forward to grow or prune the network structure. The catastrophic forgetting problem is dealt with the centroid-based experience replay mechanism [19] or the knowledge-distillation approach [20]. Although it is free of the task IDs and task boundaries for learning and predicting, the data-driven structural learning technique does not guarantee an optimal structure.
**Memory-based approach** replays old samples stored in the memory when learning new tasks. iCaRL [7] utilizes the exemplar set of each class where the classification step is performed via the nearest-mean strategy. GEM [3] and A-GEM
[21] store past examples to determine the forgetting case used to constrain the model update. HAL [22] puts forward the concept of anchor points optimized to maximize the forgetting case via a bilevel optimization approach. Prediction should not change to these anchor points when learning new tasks. DER [6] combines the concept of knowledge distillation and experience replay. [23] proposes the concept of knowledge amalgamation as a post-processing approach of class-incremental continual learning. These methods suffer from the class imbalanced problem since the size of memory buffer is much less than new samples. AGLA presents an extension of the memory-based approach where a sequence-aware assessor is deployed to guide the learning process of the base model. The assessor not only regulates the influence of every sample such that only positive samples are learned but also selects a suitable learning strategy of a given sample where the interaction of loss functions is controlled in a seamless manner. The class imbalanced issue is tackled with the COS method handling out of distributions of augmented samples.
### Meta Learning
Meta learning also known as learning-to-learn aims to learn an algorithm to improve the learning performance of another learning algorithm. This approach has been adopted in the continual learning problem in [24; 25; 26]. [24] creates two networks: representation learning network (RLN) and prediction learning network (PLN). RLN is trained with a meta-objective later used as representation of PLN. [25] integrates the controller network generating scaling and shifting parameters to generate task-specific features. [26] applies the meta-learning concept to scale adversarial continual learning [27] for online continual learning cases. [4] introduces the assessor-guided learning principle in a single-task metric learning problem to address the flaws of hard mining. Our approach distinguishes itself from aforementioned methods where the meta-learning concept is developed to construct an assessor controlling the stability and plasticity of a base network in multi-task continual learning problems.
## 3 Problem Formulation
Supervised continual learning problems are considered here where a model is trained to a sequence of fully labelled tasks \(\mathcal{T}_{1},\mathcal{T}_{2},...,\mathcal{T}_{K}\). Each task \(\mathcal{T}_{k}=\{x_{n},y_{n}\}_{n=1}^{N_{k}},\)\(k\in\{1,...,K\}\) is sampled from the i.i.d distribution \(\mathcal{D}_{k}\) where \(x_{n}\in\mathcal{X}\) stands for an input sample and \(y_{n}\in\mathcal{Y}\) denotes its true class label. \(K,N_{k}\) respectively denote the number of tasks which might be infinite and the
size of \(k-th\) task. Each task does not possess the same characteristic causing non-stationary environments to a continual learning model \(g_{\phi}(f_{\theta}(.))\) with feature extractor's parameters \(\theta\) and classifier's parameters \(\phi\). There exist three continual learning variants in the literature: domain-incremental, task-incremental and class-incremental [28]. The domain-incremental problem refers to different data distributions of each task \(P(X,Y)_{k}\neq P(X,Y)_{k+1},k\in\{1,...,K\}\) while having the same problem structure, i.e., input and target variables. The task-incremental and class-incremental problems feature different class labels of each task. Suppose that \(L_{k},L_{k^{\prime}},k,k^{\prime}\in\{1,..,K\}\) stand for label sets of the \(k-th\) task and the \(k^{\prime}-th\) task, \(\forall k,k^{\prime},L_{k}\cap L_{k^{\prime}}=\emptyset\). The difference between the task-incremental and class-incremental problems lies in **the absence of the task IDs** for the class-incremental problem. A prediction relies on a single classifier \(g_{\phi}(.)\) rather than one classifier per task \(g_{\phi_{k}}(.)\). A task \(\mathcal{T}_{k}\) is accessed at the \(k-th\) session and discarded once completed. This issue leads to the catastrophic interference problem where learning a new task \(\mathcal{T}_{k}\) overwrites previously valid parameters.
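As a small illustration of the class-/task-incremental setup with disjoint label sets (e.g., the splits used in Section 5), a task partition can be built as follows; the helper name `split_into_tasks` is ours and is not part of any released code.

```
def split_into_tasks(labels, classes_per_task):
    """Partition a list of class ids into disjoint groups, one group per task,
    so that L_k and L_k' are disjoint for k != k' (sketch, our own helper)."""
    return [labels[i:i + classes_per_task]
            for i in range(0, len(labels), classes_per_task)]

# Example: an S-CIFAR-10-style split into 5 tasks of 2 classes each.
tasks = split_into_tasks(list(range(10)), 2)
# tasks == [[0, 1], [2, 3], [4, 5], [6, 7], [8, 9]]
```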
## 4 Learning Procedure of AGLA
AGLA is developed from the assessor-guided learning method where an assessor \(\kappa_{\psi}(.)\) is deployed to guide the learning process of the base learner \(g_{\phi}(f_{\theta}(.))\). The assessor produces a set of weights, the cross entropy weight, the DER weight, and the distillation weight, for every sample. The soft-weighting mechanism not only controls the pace and direction of the learning process based on the quality of a data sample, but also governs the interaction of each loss function making possible for an adaptive selection of a suitable learning strategy for every sample, i.e., which losses to be favoured. The compensated over-sampling (COS) strategy is integrated to cope with the class-imbalanced issue on continual learning where corrections are performed while over-sampling to prevent the adverse impacts of out-of-distribution samples.
### Loss Function
The cost function is formulated as a meta-weighted combination of the cross entropy loss function, the DER loss function [6] and the knowledge distillation loss function [7]. The three loss functions have been rigorously validated where the absence of one term results in significant performance drops as shown in our
ablation study.
\[\begin{split}\mathcal{L}_{bl}=\underbrace{\mathbb{E}_{(x,y)\sim \mathcal{D}_{k}\cup\mathcal{\hat{M}}_{k-1}}[\alpha l(s_{W}(o),y)]}_{L_{CE}}+ \underbrace{\mathbb{E}_{(x,y)\sim\mathcal{\hat{M}}_{k-1}}\lambda[||o-h||+l(s_{W }(o),y)]}_{L_{DER++}}+\\ \underbrace{\mathbb{E}_{(x,y)\sim\mathcal{\hat{M}}_{k-1}}[\pi l(o,h)]}_{L_{distill}}\end{split} \tag{1}\]
\[\lambda=\begin{cases}0,&k=1\\ k*\beta,&k\geq 1\end{cases},\qquad\qquad\pi=\begin{cases}0,&k=1\\ k*\gamma,&k\geq 1\end{cases}\]
where \(\alpha,\lambda,\pi\) are respectively the cross entropy weight, the DER weight and the distillation weight generated by the sequence-aware assessor \(\{\alpha,\beta,\gamma\}=\kappa_{\psi}(x)\in[0,1]\). \(o=g_{\phi}(f_{\theta}(.))\) is a presoftmax response known as output logit. In addition to determine the sample's influence in the learning process, these three weights determine the learning strategies for a sample of interest, i.e., it is capable of selecting a suitable loss function for every sample. Note that the soft-weighting mechanism is applied and provides better flexibility than the binary hard-sampling mechanism because the learning process of the base learner is governed in a smooth manner. \(h=g_{\phi}(f_{\theta}(x))_{k-1}\) and \(||.||_{2}\) is the L-2 distance function. Note that the L-2 distance function functions similarly as the KL divergence. No softmax function is applied to avoid the effect of squashing function and the real labels are included here \(L_{DER++}\) to prevent the distribution shift [6]. Both the DER loss \(L_{DER++}\) and the distillation loss \(L_{distill}\) aim to maintain the stability of previously learned knowledge while the cross entropy loss \(L_{CE}\) aims to enhance the plasticity to new knowledge. The three losses are meta-weighted by the assessor to achieve a proper tradeoff of the plasticity and the stability. Two loss functions, focusing on old tasks, the DER loss function and the distillation loss function, are integrated here because the number of old tasks are usually larger than the current task but under-represented in the learning process due to a small memory size. In addition, \(L_{CE}\) involves both current and old samples to avoid biased responses toward new classes and has been a common design choice in the literature [29; 30]. The use of meta-weights also avoids a tweaking problem in continual learning algorithms using a large weight \(>100\) to the losses corresponding to the past states.
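For concreteness, a minimal sketch of how the meta-weighted objective in Eq. (1) could be assembled is given below. It is an illustration under our own naming conventions (`agla_loss`, `h_mem`, etc.), not the released implementation, and the distillation term uses one plausible reading of \(l(o,h)\) as a soft cross-entropy between stored and current logits.

```
import torch
import torch.nn.functional as F

def agla_loss(base, assessor, x_cur, y_cur, x_mem, y_mem, h_mem, k):
    """Sketch of the meta-weighted loss of Eq. (1) for task index k (1-based).

    x_cur, y_cur: current-task batch; x_mem, y_mem: memory batch;
    h_mem: logits stored in the memory for the memory batch."""
    # Per-sample weights (alpha, beta, gamma) in [0, 1] from the assessor.
    a_cur, _, _ = assessor(x_cur).unbind(dim=1)
    o_cur = base(x_cur)  # pre-softmax logits
    ce = (a_cur * F.cross_entropy(o_cur, y_cur, reduction="none")).mean()
    if k == 1 or x_mem is None:      # no memory yet: lambda = pi = 0
        return ce

    a_mem, b_mem, g_mem = assessor(x_mem).unbind(dim=1)
    o_mem = base(x_mem)
    lam, pi = k * b_mem, k * g_mem   # DER and distillation weights of Eq. (1)

    # Cross-entropy also covers memory samples to avoid bias toward new classes.
    ce = ce + (a_mem * F.cross_entropy(o_mem, y_mem, reduction="none")).mean()
    # DER++ term: L-2 logit matching plus cross-entropy on memory samples.
    der = (lam * ((o_mem - h_mem).pow(2).sum(dim=1)
                  + F.cross_entropy(o_mem, y_mem, reduction="none"))).mean()
    # Distillation term: soft cross-entropy between stored and current logits.
    kd = (pi * (-(F.softmax(h_mem, dim=1)
                  * F.log_softmax(o_mem, dim=1)).sum(dim=1))).mean()
    return ce + der + kd
```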
The base learner \(g_{\phi}(f_{\theta}(.))\) is formulated as the MLP or CNN where the feature extractor \(f_{\theta}(.)\) is created as stacked linear layers or convolutional layers while the
classifier \(g_{\phi}(.)\) is formed as a fully connected layer. One classifier per task \(g_{\phi_{k}}(.)\) is used for the task-incremental learning problem while a single classifier \(g_{\phi}(.)\) is applied for the class-incremental learning problems without any task IDs. The assessor \(\kappa_{\psi}(.)\) is formed as LSTM followed by a fully connected layer with the sigmoid activation function at the last layer. The convolutional layers are integrated as the feature extractor in the assessor. The use of LSTM aims to provide short-term memory, hidden state, and long-term memory, cell state, making possible for a sequence of past weights to be preserved in the memory. \(l(.)\) is a cross-entropy loss function and \(s_{W}(.)\) is a softmax layer parameterized by \(W\).
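A possible realization of such an assessor is sketched below; the layer sizes, the single LSTM step per mini-batch, and all identifiers are our own simplifications rather than the architecture specified by the released code.

```
import torch
import torch.nn as nn

class Assessor(nn.Module):
    """Sketch: conv feature extractor -> LSTM -> fully connected layer with a
    sigmoid, returning per-sample weights (alpha, beta, gamma) in [0, 1]."""

    def __init__(self, in_channels=3, feat_dim=128, hidden_dim=64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feat_dim),
        )
        self.lstm = nn.LSTM(feat_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, 3)

    def forward(self, x, state=None):
        z = self.features(x).unsqueeze(1)   # one "time step" per mini-batch
        out, _ = self.lstm(z, state)        # hidden/cell states act as memory
        return torch.sigmoid(self.head(out[:, -1]))
```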
### Meta-Training Strategy
The assessor is trained with a meta-objective to boost the learning process of the base learner, i.e., the influence of a current sample and an old sample as well as the interaction of the three loss functions are controlled to achieve a tradeoff between the issue of plasticity and stability. That is, the assessor \(\{\alpha,\beta,\gamma\}=\kappa_{\psi}(x)\in[0,1]\) controls the speed, direction and learning strategies of the base learner. Suppose the current \(k-th\) task, two data partitions, training set and validation set, are created where the training set is constructed from current samples and memory samples \(\mathcal{T}^{k}_{train}=\mathcal{T}^{k}_{train}\cup\hat{\mathcal{M}}_{k-1}\) whereas the validation set is created by applying random transformation to the training set \(\mathcal{T}^{k}_{val}=\{T(x_{i}),y_{i}\}_{i=1}^{N^{train}_{train}}\backsim \mathcal{T}^{k}_{train}\). \(T(.)\) is the random transformation operator such as color/geometric transformation or noise injection [5] and \(N^{k}_{train}\) is the size of the training set. We follow the same way as [5] where \(T(.)\) is taken from one of the possible transformation sets \(\Phi\), i.e., three transformations, image invert, Gaussian noise perturbation, RGB-rand perturbation are applied here to the original training samples. \(\hat{\mathcal{M}}_{k-1}\) denotes an augmented memory set including those of the data augmentation procedure. The training set \(\mathcal{T}^{k}_{train}\) is exploited to update the base learner whereas the validation set \(\mathcal{T}^{k}_{val}\) is used to train the assessor.
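A minimal sketch of such a random transformation and of the construction of the validation subset is given below; the concrete noise level and channel-scaling range are our own guesses at reasonable values, not parameters reported in the paper.

```
import random
import torch

def random_transform(x):
    """Apply one of three transformations (image invert, Gaussian noise,
    RGB-rand perturbation) to a batch x of images in [0, 1], shape (B, 3, H, W).
    Sketch only: the exact operations and magnitudes are assumptions."""
    op = random.choice(["invert", "gauss", "rgb_rand"])
    if op == "invert":
        return 1.0 - x
    if op == "gauss":
        return (x + 0.05 * torch.randn_like(x)).clamp(0.0, 1.0)
    scale = 0.5 + torch.rand(1, 3, 1, 1, device=x.device)  # per-channel factor
    return (x * scale).clamp(0.0, 1.0)

def make_validation_set(train_x, train_y):
    """T_val^k = {(T(x_i), y_i)} built from the memory-augmented training set."""
    return random_transform(train_x), train_y
```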
The training process of the base learner and the assessor is formulated as a bi-level optimization problem [25] where the base learner is updated using the training set while the assessor is trained using the validation set. That is, the bi-level optimization problem is formulated as follows:
\[\begin{split}\min_{\psi}\mathbb{E}_{(x,y)\backsim\mathcal{T}^{k }_{val}}[\mathcal{L}_{bl}(g_{\phi^{*}}(f_{\theta^{*}}(x)),y)]\\ s.t,\phi^{*},\theta^{*}=\arg\min_{\phi,\theta}\mathbb{E}_{(x,y) \backsim\mathcal{T}^{k}_{train}}[\mathcal{L}_{bl}(g_{\phi}(f_{\theta}(x)),y)] \end{split} \tag{2}\]
where \(\{\phi^{*},\theta^{*}\}\) stand for optimal base parameters with respect to the current assessor \(\psi\). The meta-learning strategy is implemented here because of the absence of
ground truth of the assessor, i.e., the ideal weights. This case implies the optimal parameters of the assessor \(\psi^{*}\) minimizing the validation loss of the base network. The bi-level optimization approach is solved by first updating the assessor. That is, the base learner is evaluated with the validation set \(\mathcal{T}^{k}_{val}\) returning the validation loss. The validation loss is used to update the assessor. In other words, the learning process of the assessor aims to minimize a meta-objective as follows:
\[\psi^{*}=\arg\min_{\psi}\sum_{(x,y)\in\mathcal{T}^{k}_{val}}\mathcal{L}_{bl}(g_{\phi^{*}}(f_{\theta^{*}}(x)),y).\]
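A minimal sketch of one alternating update of this bi-level problem is shown below. For readability it uses a plain first-order alternating scheme, updating the assessor on a validation batch and then the base learner on a training batch, instead of differentiating through the inner optimization; `loss_fn` stands for the meta-weighted objective of Eq. (1), and all names and optimizer choices are placeholders.

```
import torch

def meta_step(base, assessor, opt_base, opt_assessor,
              train_batch, val_batch, k, loss_fn):
    """One alternating update of the bi-level problem of Eq. (2) (sketch)."""
    # Meta step: update the assessor so that the base learner's loss on the
    # validation subset decreases.
    opt_assessor.zero_grad()
    val_loss = loss_fn(base, assessor, *val_batch, k)
    val_loss.backward()
    opt_assessor.step()

    # Base step: update the base learner on the training subset; only the
    # base optimizer is stepped, so the assessor stays fixed here.
    opt_base.zero_grad()
    train_loss = loss_fn(base, assessor, *train_batch, k)
    train_loss.backward()
    opt_base.step()
    return float(train_loss), float(val_loss)
```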
batches. [29] shows that the last fully connected layer is heavily biased because it is not shared during the training process. We apply the over-sampling approach to correct the bias problem. That is, a memory set \(\mathcal{M}_{k-1}\) is augmented using the random transformation method, thus generating an augmented memory set \(\tilde{\mathcal{M}}_{k-1}\). The augmented memory set rectifies disproportionate class proportions between new classes and old classes.
A naive over-sampling approach might induce out-of-distribution samples undermining the model's generalization power [31]. This paper proposes a compensated over-sampling (COS) strategy for continual learning where the learning process of augmented samples is compensated to offset out-of-distribution samples. Ideally, the over-sampling approach via a family of transformation functions \(T(.)\) should sample from the same distribution \(p^{*}\) and preserve the semantics of the original samples \(x_{i}\). Because such a distribution \(p^{*}\) is unknown in practice, we rely on a good assumption of a heuristically chosen \(T(.)\backsim\tilde{p}\):
\[\mathcal{L}^{*}=\sum_{i=1}^{|\mathcal{M}_{k-1}|}\sum_{k=1}^{M} \mathcal{L}_{bl}(T(x_{i})_{k},y_{i};\theta,\phi)\tilde{p}_{k}\frac{p_{k}^{*}}{ \tilde{p}_{k}} \tag{6}\]
The ratio \(p_{k}^{*}/\tilde{p}_{k}\) is hard to access in practice. Alternatively, we define \(w_{k}=\frac{p_{k}^{*}}{\tilde{p}_{k}}\) following the Radon–Nikodym derivative [31] and analyze how it affects the MSE of the deep model. The following lemma describes the MSE of the deep estimator.
**Lemma 1[31]**: Define \(\theta_{0}=\arg\min_{\theta}\mathbb{E}_{x}\mathcal{L}_{\theta}(x),\theta_{G} =\arg\min_{\theta}\mathbb{E}_{x}\int\mathcal{L}_{\theta}(x,n)dp^{*}\), and \(\hat{\theta}_{G}=\arg\min_{\theta}\mathcal{L}^{*}\). Let \(V_{0}\) be the Hessian of \(\theta\rightarrow\mathbb{E}_{x}\mathbb{L}_{\theta}(x)\) at \(\theta_{0}\), and \(V_{G}\) the Hessian of \(\theta\rightarrow\mathbb{E}_{x}[\int\mathcal{L}_{\theta}(x,n)d\tilde{p}(n)w( n)]\) at \(\theta_{G}\). Let \(M_{0}(x)=\nabla\mathcal{L}_{\theta_{0}}(x)\nabla\mathcal{L}_{\theta_{0}}(x)^ {T}\) and \(M_{G}(x)=\nabla\mathcal{L}_{\theta_{G}}(x)\nabla\mathcal{L}_{\theta_{G}}(x)^{T}\) where \(\nabla\mathcal{L}_{\theta_{0}}(x)\) and \(\nabla\mathcal{L}_{\theta_{G}}(x)\) correspond to the gradients of \(\mathcal{L}_{\theta}(x)\) at \(\theta=\theta_{0}\) and \(\theta=\theta_{G}\) respectively. Suppose \(M_{G}(x,n)=\nabla\mathcal{L}_{\theta_{G}}(x,n)\nabla\mathcal{L}_{\theta_{G}}( x,n)^{T}\). \(tr(X)\) stands for the trace of matrix \(X\). Hence, with \(C\) a constant invariant of \(w\), under mild conditions, we obtain:
\[MSE(\hat{\theta}_{G})\backsim C+||\theta_{G}-\theta_{0}||_{2}^{2} +\frac{1}{N}\mathbb{E}_{x}[\int tr(V_{G}^{-1}(M_{G}(x,n)\] \[-M_{G}(x))V_{G}^{-1})d\tilde{p}(n)w(n)] \tag{7}\] \[+\frac{1}{N}\mathbb{E}_{x}[tr(V_{G}^{-1}(M_{G}(x)-M_{0}(x))V_{G}^ {-1})]\]
\[+\frac{1}{N}tr((V_{G}^{-1}-V_{0}^{-1})Cov_{x}\nabla\mathcal{L}_{ \theta_{0}}(x)(V_{G}^{-1}-V_{0}^{-1})) \tag{8}\]
\[-\frac{1}{N}tr(V_{G}^{-1}\mathbb{E}_{x}[Cov_{w}\nabla\mathcal{L}_{\theta_{G}}(x,n) ]V_{G}^{-1}) \tag{9}\]
where \(Cov_{w}(\nabla\mathcal{L}_{\theta_{G}}(x,n))\) is the covariance matrix of \(\mathcal{L}_{\theta}(x,n)\) at \(\theta_{G}\) under measure \(p^{*}(n)\).
This lemma tells us that a large data variance, obtained by sampling from \(p^{*}\), is desirable to decrease the model's variance. It also explains the increase of MSEs or biases if there exist significant differences between the gradients of augmented samples and those of original samples. The key observation is the presence of \(w(n)\), which can be exploited to control the bias-variance trade-off. In other words, it is capable of compensating for possible out-of-distribution augmented samples. We apply the same principle as [31], where \(w_{k}\) is determined as the likelihood ratio in sampling \(T(x_{i})_{k}\) to reduce the MSE in Lemma 1. With \(z_{i,k}=f_{\theta}(T(x_{i})_{k})\), \(w_{i,k}\) is defined as
\[\frac{p^{*}}{\tilde{p}}\propto w_{i,k}=\exp-(z_{i,k}-\mu_{i})(\tau\Sigma)^{-1} (z_{i,k}-\mu_{i}) \tag{10}\]
\[\mu_{i}=\frac{1}{M}\sum_{k=1}^{M}z_{i,k};\Sigma=\frac{1}{NM}\sum_{i}^{N}\sum_{ k}^{M}(z_{i,k}-\mu_{i})(z_{i,k}-\mu_{i})^{T} \tag{11}\]
where \(k\in\{1,...,M\}\), \(M\) denotes the number of random transformations applied to an image \(x_{i}\), and \(\tau\) is a temperature that regulates the influence of the distance measure. \(w_{i,k}\) is further normalized across a data batch, leading to \(\overline{w}_{i,k}\):
\[\frac{p^{*}}{\tilde{p}}\propto\overline{w}_{i,k}=\frac{w_{i,k}}{\sum_{i=1}^{N }\sum_{k=1}^{M}w_{i,k}} \tag{12}\]
The normalized ratio is applied to govern the learning process of augmented samples:
\[\mathcal{L}_{bl}=\sum_{i=1}^{N}\sum_{k=1}^{M}\overline{w}_{i,k}\mathcal{L}_{bl} \tag{13}\]
First, a forward pass is performed to produce \(\mu_{i}\) and \(\Sigma\), leading to \(\overline{w}_{i,k}\). The weights \(\overline{w}_{i,k}\) are then frozen and inserted into the learning process of \(\mathcal{L}_{bl}\). The following theorem can be derived.
**Theorem 1**[31]: given mild conditions, the learning process of augmented samples via (13) where \(\overline{w}_{i,k}\) is determined via (12), (10) leads to reductions of MSEs as per Lemma 1.
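For illustration, a minimal sketch of the COS weights of Eqs. (10)-(12) is given below; it uses a diagonal covariance in place of the full \(\Sigma\) of Eq. (11) for simplicity, and all names and shapes are our own conventions rather than the released implementation.

```
import torch

def cos_weights(features, tau=1.0, eps=1e-6):
    """Compute normalized COS weights (sketch).

    features: tensor of shape (N, M, d) holding f_theta(T(x_i)_k) for N memory
    samples and M random transformations each; tau is the temperature."""
    mu = features.mean(dim=1, keepdim=True)             # per-sample mean, Eq. (11)
    centered = features - mu
    # Diagonal approximation of Sigma shared over all samples and transforms.
    var = centered.reshape(-1, features.shape[-1]).var(dim=0) + eps
    dist = (centered.pow(2) / (tau * var)).sum(dim=-1)  # Mahalanobis-like, Eq. (10)
    w = torch.exp(-dist)
    return w / w.sum()                                  # normalization, Eq. (12)
```

The resulting weights multiply the per-sample losses of the augmented memory samples as in Eq. (13).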
Proofs of Theorem 1 and Lemma 1 are provided in the supplemental. Algorithm 1 exhibits the pseudo-code of AGLA.
```
Input: continual dataset \(\mathcal{D}\), learning rates \(\mu,\eta\), size of pseudo memory \(m\), iteration number \(E\) Output: parameters of base learner \(\{\phi,\theta\}\), parameters of the assessor \(\psi\) for\(k=1\) to \(K\)do \(\hat{\mathcal{M}}_{k-1}=Augment(\mathcal{M}_{k-1})\) /* Augment the memory to address the class imbalanced problem/* \(\mathcal{T}_{train}^{k}=\hat{\mathcal{M}}_{k-1}\cup\mathcal{T}_{k}\) /*Construct the training set/* \(\mathcal{T}_{val}^{k}=\left\{T(x_{i}),y_{i}\right\}_{i=1}^{N_{train}^{k}} \backsim\mathcal{T}_{train}^{k}\) /*Apply random transformation to construct the validation set/* for\(e=1\) to \(E\)do Update assessor parameters \(\psi\) using (4) Update base learner parameters \(\{\phi,\theta\}\) using (5) endfor \(B_{k}=Sample(\mathcal{T}_{k})\) /* Perform reservoir sampling on the current task /* \(\mathcal{M}_{k}=\mathcal{M}_{k-1}\cup B_{k}\) /*Update the current memory/* endfor
```
**Algorithm 1** Learning Policy of AGLA
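The \(Sample(\mathcal{T}_{k})\) step of Algorithm 1 relies on reservoir sampling; a standard implementation (ours, not taken from the released code) is sketched below.

```
import random

def reservoir_update(memory, stream, capacity, seen=0):
    """Maintain a uniform sample of the stream in `memory` (list of examples).

    `seen` counts examples observed before this call; each new item replaces a
    random slot with probability capacity / (seen + 1)."""
    for example in stream:
        if len(memory) < capacity:
            memory.append(example)
        else:
            j = random.randint(0, seen)   # uniform over all examples seen so far
            if j < capacity:
                memory[j] = example
        seen += 1
    return memory, seen
```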
\begin{table}
\begin{tabular}{||c c c c||} \hline Problem & nTasks & nClasses / Task & CL Setting \\ \hline SMNIST & 5 & 2 & Class and Task - IL \\ \hline SCIFAR10 & 5 & 2 & Class and Task - IL \\ \hline SCIFAR100 & 10 & 10 & Class and Task - IL \\ \hline SMINIIMAGENET & 10 & 10 & Class and Task - IL \\ \hline \end{tabular}
\end{table}
Table 2: Details of Continual Learning Problems, nTasks: number of tasks, nClasses / Task: number of classes per task, CL Setting: Continual Learning Setting
\begin{table}
\begin{tabular}{l l} \hline
**Methods** & **Params** & **Value** \\ \hline batch\_size & 100 \\ \hline clipping & 10000 \\ \hline eval\_on\_train & False \\ \hline fix\_bn & False \\ \hline gridsearch\_tasks & -1 \\ \hline keep\_existing\_head & False \\ \hline last\_layer\_analysis & False \\ \hline log & [’disk’] \\ \hline lr & 0.05 \\ \hline lr\_factor & 1 \\ \hline lr\_min & 0.0001 \\ \hline lr\_patience & 5 \\ \hline momentum & 0.9 \\ \hline multi\_softmax & False \\ \hline nepochs & 100 \\ \hline network & resnet18 \\ \hline no\_cudnn\_deterministic & False \\ \hline num\_workers & 4 \\ \hline pin\_memory & False \\ \hline pretrained & False \\ \hline use\_valid\_only & False \\ \hline warmup\_lr\_factor & 1 \\ \hline warmup\_nepochs & 0 \\ \hline weight\_decay & 0.0002 \\ \hline Memory Based: AGLA, DER & num\_exemplars\_per\_class & 50 \\ \hline exemplar\_selection & random \\ \hline \multirow{4}{*}{BIC} & T & 2 \\ \cline{2-2} & lamb & -1 \\ \cline{2-2} & num\_bias\_epochs & 200 \\ \cline{2-2} & val\_exemplar\_percentage & 0.1 \\ \hline ICARL & lamb & 1 \\ \hline \multirow{4}{*}{EEIL} & T & 2 \\ \cline{2-2} & lamb & 1 \\ \cline{2-2} & lr\_finetuning\_factor & 0.01 \\ \cline{2-2} & nepochs\_finetuning & 40 \\ \cline{2-2} & noise\_grad & FALSE \\ \hline \multirow{4}{*}{EWC} & alpha & 0.5 \\ \cline{2-2} & fi\_num\_samples & -1 \\ \cline{2-2} & fi\_sampling\_type & max\_pred \\ \cline{2-2} & lamb & 5000 \\ \hline \multirow{2}{*}{LWF} & T & 2 \\ \cline{2-2} & lamb & 1 \\ \hline \multirow{4}{*}{MAS} & alpha & 0.5 \\ \cline{2-2} & fi\_num\_samples & -1 \\ \cline{2-2} & lamb & 1 \\ \hline SI & damping & 0.1 \\ \hline \end{tabular}
\end{table}
Table 3: Hyper-parameters of Consolidated Algorithms
## 5 Experiments
The advantage of AGLA is numerically validated in both class-incremental and task-incremental learning problems. In addition, an ablation study and a memory analysis are offered to analyze the contribution of each component and of the memory size to the final performance. Our experiments are conducted over five independent runs using different random seeds, and the numerical results are averaged and reported in Table 4 and Table 5. Two performance metrics, the average accuracy and the forgetting measure [34], are applied to measure the learning performance. The source codes and other supporting data of AGLA, including raw numerical results, are made public in [https://github.com/anwarmaxsum/AGLA](https://github.com/anwarmaxsum/AGLA) for further study. The models are evaluated in two continual learning settings, i.e., task-incremental learning (Task IL) and class-incremental learning (Class IL). **Task IL** is a problem setting where a model is trained on a sequence of tasks \(\mathcal{T}_{1},\mathcal{T}_{2},...,\mathcal{T}_{K}\), \(k\in\{1,...,K\}\), where each task \(\mathcal{T}_{k}\) has its training set \(T_{k}=\{x_{n},y_{n},id_{n}\}_{n=1}^{N_{k}}\); \(x_{n}\) represents an input sample, \(y_{n}\in\mathcal{Y}\) denotes its true class label, and \(id_{n}\) stands for the task id. \(K,N_{k}\) respectively denote the number of tasks and the size of the \(k\)-th task. **Class IL** is a similar setting to Task IL, but the task id \(id_{n}\) is not visible to the model.
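The two metrics can be computed from the matrix of per-task accuracies; the sketch below uses a common convention (final accuracy averaged over all tasks, and forgetting as the best past accuracy minus the final one), which may differ in small details from the exact definition of [34].

```
import numpy as np

def average_accuracy(acc):
    """acc[t][j]: accuracy on task j after training on task t (0-based, j <= t)."""
    T = len(acc)
    return float(np.mean([acc[T - 1][j] for j in range(T)]))

def average_forgetting(acc):
    """Mean drop from the best past accuracy to the final accuracy per task."""
    T = len(acc)
    drops = [max(acc[t][j] for t in range(j, T - 1)) - acc[T - 1][j]
             for j in range(T - 1)]
    return float(np.mean(drops)) if drops else 0.0
```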
The performance is evaluated on four problems: Split MNIST (SMNIST), Split CIFAR10 (SCIFAR10), Split CIFAR100 (SCIFAR100), and Split Mini-ImageNet (SMImageNet) [39; 25]. SMNIST and SCIFAR10 feature task-incremental and class-incremental learning problems with 5 tasks, where each task contains two disjoint classes, whereas SCIFAR100 and SMImageNet present the two problems with 10 tasks, where each task features 10 disjoint classes. Note that no task IDs and a single-head structure are applied in the class-incremental learning problem, while the task-incremental learning problem benefits from the presence of task IDs and a multi-head network structure. Table 2 tabulates our experimental setting. The hyperparameter settings of the consolidated algorithms are presented in Table 3.
### Baseline Algorithms
AGLA is compared with ten algorithms: EWC [1], SI [8], LWF [10], MAS [9], iCaRL [7], DER++ [6], EEIL [30], BIC [29], DMC [32] and HAL [33]. In
\begin{table}
\begin{tabular}{l c c c c c c c c} \hline \hline & \multicolumn{2}{c}{**S-CIFAR-100**} & \multicolumn{2}{c}{**S-CIFAR-10**} & \multicolumn{2}{c}{**S-MINI-IMAGENET**} & \multicolumn{2}{c}{**S-MNIST**} \\ \cline{2-9}
**Method** & Task IL & Class IL & Task IL & Class IL & Task IL & Class IL & Task IL & Class IL \\ \hline Finetuning & 49.01\(\pm\)1.84 & 24.21\(\pm\)0.76 & 74.18\(\pm\)5.18 & 42.88\(\pm\)1.57 & 44.06\(\pm\)1.78 & 20.99\(\pm\)0.89 & 78.33\(\pm\)4.28 & 40.92\(\pm\)3.24 \\ Joint & 75.45\(\pm\)1.71 & 55.88\(\pm\)2.03 & 95.86\(\pm\)1.53 & 86.94\(\pm\)0.93 & 66.45\(\pm\)1.95 & 44.34\(\pm\)1.55 & 99.88\(\pm\)0.06 & 99.62\(\pm\)0.09\(\ast\) \\ \hline EWC[1] & 53.46\(\pm\)2.02 & 26.18\(\pm\)1.21 & 77.16\(\pm\)6.22 & 40.51\(\pm\)4.54 & 48.41\(\pm\)2.36 & 23.26\(\pm\)0.94 & 83.07\(\pm\)2.83 & 43.19\(\pm\)3.57 \\ LWF[10] & 51.28\(\pm\)14.82 & 26.61\(\pm\)4.73 & 91.14\(\pm\)3.61 & 56.32\(\pm\)1.89 & 52.04\(\pm\)2.5 & 24.34\(\pm\)1.33 & 79.84\(\pm\)12.60 & 51.38\(\pm\)8.69 \\ MAS[9] & 52.3\(\pm\)1.95 & 26.01\(\pm\)1.15 & 76.72\(\pm\)5.21 & 38.94\(\pm\)4.77 & 46.57\(\pm\)2.36 & 22.16\(\pm\)1.08 & 86.50\(\pm\)4.83 & 44.97\(\pm\)2.31 \\ SI[8] & 52.19\(\pm\)1.85 & 25.56\(\pm\)1.71 & 76.44\(\pm\)5.67 & 39.2\(\pm\)2.6 & 46.72\(\pm\)2.22 & 22.15\(\pm\)1.06 & 82.10\(\pm\)3.13 & 41.48\(\pm\)1.20 \\ DMC[32] & 62.31\(\pm\)1.91 & 35.72\(\pm\)2.07 & 81.90\(\pm\)1.14 & 52.07\(\pm\)10.10 & 53.42\(\pm\)1.61 & 72.91\(\pm\)1.73 & 77.90\(\pm\)15.50 & 48.50\(\pm\)15.30 \\ \hline EEIL[30] & 67.24\(\pm\)1.56 & 40.75\(\pm\)1.49 & 93.99\(\pm\)1.67 & 71.2\(\pm\)2.93 & 57.99\(\pm\)2.51 & 30.84\(\pm\)1.69 & 86.51\(\pm\)5.68 & 72.74\(\pm\)11.74 \\ ICAR[7] & 67.13\(\pm\)1.51 & 44.15\(\pm\)2.02 & 93.91\(\pm\)2.35 & 78.54\(\pm\)1.4 & 58.53\(\pm\)2.63 & 34.93\(\pm\)2.14 & 99.79\(\pm\)0.07 & 98.67\(\pm\)0.15 \\ BIC[29] & 67.82\(\pm\)2.89 & 44.63\(\pm\)3.02 & 89.41\(\pm\)10.59 & 71.1\(\pm\)15.41 & 59.16\(\pm\)3.05 & 35.19\(\pm\)2.11 & 76.48\(\pm\)4.52 & 51.02\(\pm\)2.25 \\ DER++[6] & 65.89\(\pm\)1.84 & 41.1\(\pm\)1.72 & 94.7\(\pm\)1.6 & 77.42\(\pm\)1.98 & 55.91\(\pm\)2.12 & 30.78\(\pm\)1.3 & 99.77\(\pm\)0.08 & 98.54\(\pm\)0.28 \\ HAL[33] & 53.77\(\pm\)10.76 & 30.08\(\pm\)8.52 & 74.60\(\pm\)3.64 & 46.90\(\pm\)1.57 & 45.94\(\pm\)8.11 & 22.69\(\pm\)5.06 & 71.30\(\pm\)4.55 & 45.80\(\pm\)0.19 \\
**AGLA (Ours)** & **69.51\(\pm\)1.68** & **46.95\(\pm\)2.19** & **95.64\(\pm\)1.53** & **82.64\(\pm\)1.14** & **60.54\(\pm\)2.09** & **36.23\(\pm\)1.66** & **99.86\(\pm\)0.04** & **98.86\(\pm\)0.18** \\ \hline \hline \end{tabular}
\end{table}
Table 4: Classification result (accuracy \(\%\)) for continual learning benchmarks, averaged across 5 runs. \(\ast\) 4 tasks only due to crash
\begin{table}
\begin{tabular}{l c c c c c c c c} \hline \hline & \multicolumn{2}{c}{**S-CIFAR-100**} & \multicolumn{2}{c}{**S-CIFAR-10**} & \multicolumn{2}{c}{**S-MINI-IMAGENET**} & \multicolumn{2}{c}{**S-MNIST**} \\ \cline{2-9}
**Method** & Task IL & Class IL & Task IL & Class IL & Task IL & Class IL & Task IL & Class IL \\ \hline Finetuning & 31.28\(\pm\)1.72 & 59.54\(\pm\)2.14 & 37.71\(\pm\)9.5 & 77.9\(\pm\)9.12 & 24.58\(\pm\)1.37 & 50.11\(\pm\)2.41 & 37.67\(\pm\)7.78 & 69.24\(\pm\)8.44 \\ Joint & -1.21\(\pm\)0.2 & 5.79\(\pm\)0.59 & -0.73\(\pm\)0.17 & 6.26\(\pm\)1.63 & -1.14\(\pm\)0.17 & 6.23\(\pm\)1.18 & -0.02\(\pm\)0.02 & 0.15\(\pm\)0.06\(\ast\) \\ \hline EWC[1] & 19.23\(\pm\)2.37 & 39.62\(\pm\)2.9 & 29.47\(\pm\)11.44 & 50.86\(\pm\)13.93 & -14.6\(\pm\)1.56 & 32.51\(\pm\)2.82 & 29.66\(\pm\)4.69 & 64.96\(\pm\)9.68 \\ LWF[10] & 13.71\(\pm\)7.94 & 39.19\(\pm\)22.02 & 6.22\(\pm\)3.78 & 48.94\(\pm\)5.61 & 12.57\(\pm\)0.69 & 42.3\(\pm\)1.55 & 0.07\(\pm\)0.10 & 18.45\(\pm\)26.53 \\ MAS[9] & 21.91\(\pm\)1.94 & 43.52\(\pm\)2.37 & 30.56\(\pm\)8.57 & 51.11\(\pm\)14.79 & 17.36\(\pm\)2.33 & 35.75\(\pm\)2.8 & 23.36\(\pm\)8.88 & 58.75\(\pm\)8.34 \\ SI[8] & 21.56\(\pm\)2.25 & 46.15\(\pm\)2.32 & 31.46\(\pm\)9.71 & 56.27\(\pm\)7.46 & 17.53\(\pm\)0.99 & 38.31\(\pm\)1.8 & 31.16\(\pm\)5.44 & 68.53\(\pm\)13.28 \\ DMC[32] & 4.08\
addition, lower (fine-tuning) and upper (joint) bounds are provided, where naive SGD fine-tuning depicts the lower-bound case and the joint training approach presents the upper-bound case. All comparisons are performed in the same computational environment, eight NVIDIA DGX A100 GPUs with 40 GB of memory each, under the FACIL framework [40] to ensure fairness.
### Experimental Setup
For all problems, ResNet18 is deployed without pretraining. A grid-search approach is applied to all consolidated algorithms, whose detailed hyper-parameters are provided in Table 3. The assessor, on the other hand, is formed as an LSTM network: a ResNet18 feature extractor, two LSTM layers and a single fully connected layer, where the number of nodes in each layer is set to 64. The memory size of the memory-based approaches, AGLA, BIC, DER++,
\begin{table}
\begin{tabular}{l c c c c c c c c c c} \hline \hline & \multicolumn{4}{c|}{**Configuration**} & \multicolumn{2}{c|}{**S-CIFAR-100**} & \multicolumn{2}{c}{**S-CIFAR-10**} \\ \cline{2-11} Code & Assr & Aug & R.Tr & (\(\overline{w}_{i,k}\)) & \(\mathcal{L}_{\mathit{DER+}}\mathcal{L}_{dist}\) & Task & Class & Task & Class \\ & & & & & & IL & IL & IL & IL \\ \hline A & & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & 69.16 & 46.26 & **96.42** & **82.96** \\ B & ✓ & & ✓ & ✓ & ✓ & ✓ & ✓ & **70.44** & 45.83 & 96.00 & 80.50 \\ C & ✓ & ✓ & & ✓ & ✓ & ✓ & 68.52 & 46.16 & 96.00 & **82.94** \\ D & ✓ & ✓ & ✓ & & ✓ & ✓ & 69.50 & **46.82** & 96.18 & 82.74 \\ E & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & **70.01** & **47.61** & **96.28** & 82.68 \\ F & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & 67.76 & 43.98 & 95.56 & 78.78 \\ AGLA & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & **69.76** & **47.70** & **96.22** & **83.20** \\ \hline \hline \end{tabular}
\end{table}
Table 6: Classification result (accuracy) of AGLA based on various configuration
\begin{table}
\begin{tabular}{l c c c c c c c c c c} \hline \hline & \multicolumn{4}{c|}{**Configuration**} & \multicolumn{2}{c|}{**S-CIFAR-100**} & \multicolumn{2}{c}{**S-CIFAR-10**} \\ \cline{2-11} Code & Assr & Aug & R.Tr & (\(\overline{w}_{i,k}\)) & \(\mathcal{L}_{\mathit{DER+}}\mathcal{L}_{dist}\) & Task & Class & Task & Class \\ & & & & & & IL & IL & IL & IL & IL \\ \hline A & & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & **4.04** & 19.78 & **0.35** & 21.03 \\ B & ✓ & & ✓ & ✓ & ✓ & ✓ & 5.30 & 31.71 & 0.68 & 27.20 \\ C & ✓ & ✓ & & ✓ & ✓ & ✓ & **4.16** & 19.00 & 0.78 & **17.35** \\ D & ✓ & ✓ & ✓ & & ✓ & ✓ & 4.74 & **10.94** & **0.43** & 20.85 \\ E & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & **4.36** & **11.56** & 0.80 & **19.40** \\ F & ✓ & ✓ & ✓ & ✓ & ✓ & & 10.07 & 36.44 & 1.20 & 29.83 \\ AGLA & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & 5.11 & **13.41** & **0.40** & **19.35** \\ \hline \hline \end{tabular}
\end{table}
Table 7: Forgetting rate of AGLA based on various configuration
EEIL and ICaRL, is set equally to 5000 samples for all problems; a memory analysis with smaller budgets is offered as well. The images of SMImageNet are resized to 32 by 32 from the original size of 84 by 84. The hyper-parameters of the consolidated algorithms are listed in Table 3. Note that AGLA only carries a single hyper-parameter, the memory size, which is common to the other memory-based approaches. Other hyper-parameters are shared by all benchmarked algorithms.
### Numerical Results
From Table 4, it is shown that AGLA outperforms other algorithms in all four problems with noticeable gaps. AGLA produces better accuracy than BIC hav
\begin{table}
\begin{tabular}{c l c c} \hline \hline & \multicolumn{3}{c}{S-CIFAR-100 Accuracy} \\ \cline{2-4} Memory & Method & Task IL & Class IL \\ \hline \multirow{4}{*}{2000} & EEIL[30] & 63.88 & 33.43 \\ & ICARL[7] & 63.94 & 40.07 \\ & BIC[29] & 67.41 & 42.33 \\ & DER++[6] & 62.53 & 34.33 \\ & HAL[33] & 55.92 & 28.02 \\ & **AGLA (Ours)** & **67.60** & **43.50** \\ \hline \multirow{4}{*}{3000} & EEIL[30] & 65.7 & 36.94 \\ & ICARL[7] & 65.08 & 42.01 \\ \cline{1-1} & BIC[29] & 68.21 & 44.1 \\ \cline{1-1} & DER++[6] & 64.26 & 36.95 \\ \cline{1-1} & HAL[33] & 57.23 & 30.75 \\ \cline{1-1} & **AGLA (Ours)** & **69.07** & **45.64** \\ \hline \multirow{4}{*}{4000} & EEIL[30] & 66.12 & 38.54 \\ \cline{1-1} & ICARL[7] & 65.34 & 42.13 \\ \cline{1-1} & BIC[29] & 67.26 & 44.28 \\ \cline{1-1} & DER++[6] & 65.06 & 39.43 \\ \cline{1-1} & HAL[33] & 57.77 & 32.19 \\ \cline{1-1} & **AGLA (Ours)** & **69.34** & **46.19** \\ \hline \multirow{4}{*}{5000} & EEIL[30] & 66.74 & 40.18 \\ \cline{1-1} & ICARL[7] & 66.09 & 43.03 \\ \cline{1-1} & BIC[29] & 67.84 & 44.8 \\ \cline{1-1} & DER++[6] & 66.57 & 41.33 \\ \cline{1-1} & HAL[33] & 57.76 & 32.32 \\ \cline{1-1} & **AGLA (Ours)** & **69.34** & **47.70** \\ \hline \hline \end{tabular}
\end{table}
Table 8: Classification result (accuracy) of the benchmarks based on memory size
ing the second highest accuracy, by about \(2-3\%\) in the SCIFAR100 problem for both the task-incremental and class-incremental learning settings. This finding confirms the advantage of the meta-trained assessor in guiding the learning process of the base model. The assessor enables a soft sample-selection mechanism, assigning high weights only to positive samples, and a dynamic weighting mechanism over the loss functions that controls the interaction of the three losses. Note that BIC features a bias-correction layer to handle the class-imbalance problem. In AGLA, the class-imbalance problem is instead addressed with the compensated over-sampling technique, which does not call for an additional training stage as per BIC: the old samples in the memory are simply augmented up to the size of the original data, with
\begin{table}
\begin{tabular}{c l c c} \hline \hline \multirow{3}{*}{Memory} & \multicolumn{3}{c}{S-CIFAR-100 Forgetting rate} \\ \cline{2-4} & Method & Task IL & Class IL \\ \hline \multirow{4}{*}{2000} & EEIL[30] & 8.14 & 49.9 \\ & ICARL[7] & **2.47** & 15.71 \\ & BIC[29] & 2.96 & **15.54** \\ & DER++[6] & 14.99 & 52.81 \\ & HAL[33] & 10.72 & 47.46 \\ & **AGLA (Ours)** & 8.37 & 31.37 \\ \hline \multirow{4}{*}{3000} & EEIL[30] & 7.78 & 46.44 \\ & ICARL[7] & **2.52** & 15.32 \\ & BIC[29] & 2.58 & **13.99** \\ & DER++[6] & 11.8 & 48.23 \\ & HAL[33] & 9.02 & 41.80 \\ & **AGLA (Ours)** & 5.21 & 19.58 \\ \hline \multirow{4}{*}{4000} & EEIL[30] & 5.84 & 42.58 \\ & ICARL[7] & **0.69** & **12.59** \\ & BIC[29] & 2.23 & 13.69 \\ & DER++[6] & 10.18 & 42.94 \\ & HAL[33] & 9.02 & 40.32 \\ & **AGLA (Ours)** & 5.66 & 24.88 \\ \hline \multirow{4}{*}{5000} & EEIL[30] & 5.71 & 40.14 \\ & ICARL[7] & **0.31** & **10.73** \\ & BIC[29] & 1.87 & 13.32 \\ & DER++[6] & 8.33 & 39.87 \\ & HAL[33] & 8.18 & 37.20 \\ & **AGLA (Ours)** & 5.11 & 14.54 \\ \hline \hline \end{tabular}
\end{table}
Table 9: Forgetting rate of the benchmarks based on memory size
corrections to prevent out-of-distribution augmented samples. The same pattern is observed in the SCIFAR10 problem, where AGLA delivers the highest accuracy with about a \(1.2\%\) gap to DER++ in the task-incremental learning problem and a \(4\%\) gap to ICARL in the class-incremental learning problem.
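A minimal sketch of the compensated over-sampling idea is shown below; `augment` and `correct` stand in for the augmentation and distribution-correction steps defined earlier in the paper, and their names, as well as the tuple layout of the memory, are assumptions made only for illustration.

```python
import random

def compensated_oversample(memory, current_task_size, augment, correct):
    """Replicate and augment stored exemplars until the old classes roughly match
    the size of the current task, keeping augmented samples in-distribution."""
    oversampled = []
    while len(oversampled) < current_task_size:
        x, y = random.choice(memory)      # draw an old exemplar (with replacement)
        x_new = correct(augment(x))       # augment, then compensate the distribution shift
        oversampled.append((x_new, y))
    return oversampled
```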
AGLA is also superior to the other algorithms in the SMImageNet problem. It beats BIC by a \(1.5\%\) margin in both the task-incremental and class-incremental learning problems. Once again, this confirms that the assessor-guided learning process of AGLA achieves a better tradeoff between plasticity and stability than the other algorithms. The meta-trained assessor also selects a suitable learning strategy for each data sample, controlling how much of the new and past concepts is accepted. AGLA likewise outperforms the other consolidated algorithms, by a small margin, in the task-incremental and class-incremental settings of the SMNIST problem. Note that SMNIST is considered an easy problem for continual learning algorithms, i.e., most algorithms produce decent results on it. The performance of AGLA is also stable across the four problems: it does not suffer from sudden performance losses, as occurs for BIC (SCIFAR10) and DER++ (SMINIIMAGENET). Another important finding is that the memory-based approaches outperform the regularization-based approaches; this becomes obvious in the class-incremental learning problem. Nonetheless, the regularization-based approaches are simpler to implement and faster than the memory-based approaches. Table 5 presents the average forgetting of all consolidated algorithms. AGLA produces average forgetting comparable to the other memory-based approaches, mostly occupying first or second place, and remains far better than the regularization-based approaches. This does not diminish the advantage of AGLA, because average forgetting is meaningful only when the accuracy of the compared algorithms is on par, and AGLA beats the other algorithms in all four cases; a low average-forgetting score can also occur with poor average accuracy.
### Ablation Study
This section demonstrates the contribution of each learning component of AGLA. Six aspects are studied here: 1) the learning performance of AGLA without the assessor - configuration (A); 2) the learning performance of AGLA without the data augmentation strategy - configuration (B), i.e., no over-sampling strategy is performed; 3) the learning performance of AGLA without the random transformation strategy used to construct the validation set of the meta-learning phase - configuration (C). This
Figure 2: Classification Accuracy of Consolidated Algorithms across All Datasets.
Figure 3: The trace of losses for AGLA with or w/o Assessor in the S-CIFAR-100 problem.
Figure 4: The trace of weighting coefficients and loss for AGLA in the S-CIFAR-100 problem.
implies the use of the training set in updating the assessor; 4) the learning performance of AGLA without the out-of-distribution compensation - configuration (D), which implies that the augmented data is not guaranteed to lie within the original sample distribution; 5) the learning performance of AGLA without the DER loss - configuration (E), which implies the application of only two loss functions, the cross-entropy loss and the distillation loss; 6) the learning performance of AGLA without the distillation loss - configuration (F), which implies the use of the cross-entropy loss and the DER loss only. All numerical results are tabulated in Table 6 and Table 7, where they are produced using the SCIFAR10/100 problems in both the class-incremental and the task-incremental learning settings. Note that we are only able to report numerical results of a single run for the ablation study due to limited access to computational resources.
The absence of the assessor, as depicted in configuration (A), leads to significant performance deterioration in all cases, with up to a \(2\%\) drop in accuracy and a \(7\%\) increase in average forgetting. This finding confirms the importance of the assessor in balancing plasticity and stability. The three loss functions of AGLA contribute positively to the final performance: the absence of the distillation loss in configuration (F) or of the DER loss in configuration (E) causes a major performance drop, i.e., up to \(15\%\) for configuration (F) and up to \(1\%\) for configuration (E). The DER loss and the distillation loss protect against catastrophic forgetting, which is further borne out by the major increase in average forgetting: Table 7 shows that removing the distillation loss or the DER loss increases the forgetting rate by up to \(23\%\) for configuration (F) and \(2\%\) for configuration (E). The over-sampling procedure plays a vital role, as configuration (B) shows performance degradation in accuracy (up to \(3\%\)) and in average forgetting (up to \(18\%\)); this mechanism is designed to cope with the class-imbalance problem of the memory-based approach. The random transformation technique used to construct the validation set for assessor training improves the model's generalization, i.e., configuration (C) results in a noticeable loss in performance (up to \(2\%\)) and an increased forgetting rate (up to \(6\%\)). This demonstrates that the meta-training strategy should be carried out using different training and validation sets. Last but not least, the compensation strategy also plays an important role in the proposed method, i.e., configuration (D) leads to drops in accuracy of up to \(1\%\) and an increase in forgetting rate of up to \(2\%\). This finding shows that the augmented samples should be kept within the original sample distribution.
### Memory Analysis
This section depicts the performance of AGLA and the other consolidated memory-based approaches with different memory sizes in the SCIFAR100 problem under both the class-incremental and task-incremental configurations. Specifically, the memory size is set to 2000, 3000 and 4000 samples, with the consolidated numerical results provided in Table 8 and Table 9. As with the ablation study, the memory analysis is performed under a single run due to limited access to computational resources. However, this should not bias our findings in the ablation study and the memory analysis, since AGLA is superior to all baseline algorithms in all cases in our main results reported in Tables 4 and 5, where five runs are committed under different random seeds.
It is obvious that AGLA still maintains superior performance compared to the other memory-based approaches under smaller memory sizes (2000, 3000, 4000) than the one reported in Tables 4 and 5 (5000), with clear margins in the range of \(1-3\%\). The performance of AGLA is relatively stable under different memory budgets, with performance drops due to the reduced memory sizes of less than \(4\%\). Although ICARL and BIC produce smaller average forgetting than AGLA, their accuracy is far worse than that of AGLA. This finding confirms the advantage of the compensated over-sampling approach in coping with the severe class-imbalance problem without any extra parameters. It is also observed that the performance drops due to the memory constraint are higher in the class-incremental learning problem than in the task-incremental learning problem, because of the absence of task IDs and the use of single-head classifiers.
### Performance Evaluations per Task
The evolution of the classification accuracy of all consolidated algorithms on all datasets, in both the class-incremental and task-incremental learning problems, in one of the runs is pictorially illustrated in Fig. 2, where the advantage of AGLA is clearly demonstrated. For the class-IL problems, AGLA produces higher classification accuracy in all tasks across all datasets than its counterparts.
For the task-IL problems, AGLA outperforms the other algorithms across all tasks on all datasets; that is, it returns the highest per-task classification accuracy. The positive contribution of the assessor-guided continual learning is also portrayed here: AGLA beats the DER++ algorithm across all tasks on all datasets, regardless of the task-IL or class-IL setting. Note that AGLA shares similar loss functions with DER++, except for the application of the distillation loss, the assessor and the meta-weighting strategy.
### Assessor vs w/o Assessor
This subsection confirms the positive role of the assessor in guiding the continual learning process of AGLA. Fig. 3 visualizes the trace of AGLA's losses in the SCIFAR100 problem with and without the assessor under the class-IL setting in one of the experiments. Clearly, using the assessor to perform the soft weighting of data samples and the selection of appropriate learning strategies results in faster convergence and lower losses than training without the assessor in most tasks. Importantly, AGLA remains stable across all tasks with the assessor-guided learning process, i.e., the losses show decreasing trends across all tasks. This implies the success of our meta-learning-based bi-level optimization approach in fine-tuning both the base network and the assessor.
### Evolution of Meta-weights
Figure 4 visualizes the evolution of the meta-weights (\(\alpha\), \(\beta\), \(\gamma\)) on the CIFAR-100 dataset. The figure shows that the values of \(\gamma\) and \(\beta\), which correspond to the distillation loss and the dark experience replay loss respectively, are higher than the value of \(\alpha\), which corresponds to the cross-entropy loss. That is, the proposed method assigns higher weights to the memory samples than to the current-task samples. This is in line with the fact that the number of memory samples is smaller than the number of current-task samples, so they need higher weights during current-task training to handle the class-imbalance problem. The figure also shows that the dynamic evolution of the three weights leads to a decreasing trend of the validation losses.
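A minimal sketch of how the meta-weights enter the training objective is shown below; the exact per-sample weighting and the way the assessor produces \((\alpha,\beta,\gamma)\) follow the formulation given earlier in the paper, so the scalar shapes and function names here are illustrative assumptions.

```python
import torch

def assessor_weighted_loss(alpha, beta, gamma, ce_loss, der_loss, distill_loss):
    """Combine the three losses with assessor-produced meta-weights:
    alpha weights the cross-entropy loss on current-task samples,
    beta weights the dark experience replay (DER) loss on memory samples,
    gamma weights the distillation loss on memory samples."""
    return alpha * ce_loss + beta * der_loss + gamma * distill_loss

# Example with batch-level weights produced by the assessor:
loss = assessor_weighted_loss(torch.tensor(0.2), torch.tensor(0.4), torch.tensor(0.4),
                              ce_loss=torch.tensor(1.3),
                              der_loss=torch.tensor(0.7),
                              distill_loss=torch.tensor(0.5))
```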
### Complexity Analysis
This subsection discusses the complexity of the proposed method. Suppose that \(N\) is the total number of samples of a dataset, \(k\in\{1,...,K\}\) is the index of a task, and \(N_{k}\) is the number of samples of task \(k\), which satisfy \(\sum_{k=1}^{K}N_{k}=N\); \(e\) is the number of epochs of network training, and \(M_{k}\) is the size of the memory at task \(k\), which satisfies \(\sum_{k=1}^{K}M_{k}<N\). Following the pseudo-code in Algorithm 1, there are several processes on each task, i.e., augmentation (Aug), union of training data and memory (Union), random transformation (R.Trans), assessor and base network updates (N.Update), memory sampling (M.Sampling) and memory update (M.Update). Please note that augmentation and random transformation are done in a few operations per sample \((c<10)\). Let \(\mathcal{C}\) denote the complexity of a process. Following Algorithm 1, the complexity of the proposed method can be written as
the following equations:
\[\begin{split}\mathcal{C}(AGLA)=\sum\nolimits_{k=1}^{K}(\mathcal{C}(Aug)+\mathcal{C}(Union)+\mathcal{C}(R.Trans)+\\ \mathcal{C}(N.Update)+\mathcal{C}(M.Sampling)+\mathcal{C}(M.Update))\end{split} \tag{14}\]
\[\begin{split}\mathcal{C}(AGLA)=\sum\nolimits_{k=1}^{K}((c.N_{k}) +max(N_{k},M_{k})+(c.(N_{k}+M_{k}))+\\ (2.e.(N_{k}+M_{k}))+(N_{k}+M_{k})+max(N_{k}+M_{k},M_{k+1}))\end{split} \tag{15}\]
\[\begin{split}\mathcal{C}(AGLA)=((c.\sum\nolimits_{k=1}^{K}N_{k}) +max(\sum\nolimits_{k=1}^{K}N_{k},\sum\nolimits_{k=1}^{K}M_{k})+\\ (c.(\sum\nolimits_{k=1}^{K}N_{k}+\sum\nolimits_{k=1}^{K}M_{k}))+ \\ (2.e.(\sum\nolimits_{k=1}^{K}N_{k}+\sum\nolimits_{k=1}^{K}M_{k}))+ \\ (\sum\nolimits_{k=1}^{K}N_{k}+\sum\nolimits_{k=1}^{K}M_{k})+\\ max(\sum\nolimits_{k=1}^{K}N_{k}+\sum\nolimits_{k=1}^{K}M_{k}, \sum\nolimits_{k=1}^{K}M_{k+1}))\end{split} \tag{16}\]
Since \(\sum_{k=1}^{K}N_{k}=N\), \(\sum_{k=1}^{K}M_{k}<N\), and \(\sum_{k=1}^{K}M_{k+1}<N\). then the complexity of AGLA can be derived to:
\[\begin{split}\mathcal{C}(AGLA)\leq((c.N)+N+(c.(N+N))+\\ (2.e.(N+N))+(N+N)+(N+N))\end{split} \tag{17}\]
\[\begin{split}\mathcal{C}(AGLA)\leq(cN+N+2cN+4eN+2N+N)\end{split} \tag{18}\]
\[\begin{split}\mathcal{C}(AGLA)\leq(4N+3cN+4eN)\end{split} \tag{19}\]
\[\begin{split}\mathcal{C}(AGLA)=(O(N)+O(cN)+O(eN))\end{split} \tag{20}\]
Considering that \(c\) is a small number \((<10)\), then the complexity of AGLA can be derived to:
\[\begin{split}\mathcal{C}(AGLA)&=(O(N)+O(N)+O(eN))\\ &=(O(N)+O(eN))\\ &=O(eN)\end{split} \tag{21}\]
The derivation above concludes that the complexity of the proposed method is \(O(eN)\), where \(N\) is the total number of data instances across all tasks and \(e\) is the number of training epochs. If the number of epochs is set to a constant, e.g. 100, the complexity of the proposed method reduces to \(O(N)\).
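To make the counted operations concrete, below is a sketch of the per-task loop whose steps are tallied in Eq. (14); the helper functions mirror the step names of Algorithm 1 (Aug, Union, R.Trans, N.Update, M.Sampling, M.Update) but are hypothetical placeholders rather than the actual implementation.

```python
def agla_per_task_loop(tasks, memory_budget, epochs,
                       augment, random_transform, update_networks,
                       sample_exemplars, merge_memory):
    """One pass over the K tasks; each bracketed comment names the term in Eq. (14)."""
    memory = []
    for task_data in tasks:                                     # k = 1, ..., K
        augmented = list(augment(task_data))                    # C(Aug)      ~ c * N_k
        train_set = list(task_data) + list(memory)              # C(Union)    ~ max(N_k, M_k)
        val_set = random_transform(train_set)                   # C(R.Trans)  ~ c * (N_k + M_k)
        for _ in range(epochs):                                 # C(N.Update) ~ 2e * (N_k + M_k)
            update_networks(train_set + augmented, val_set)     #   base model + assessor
        exemplars = sample_exemplars(train_set, memory_budget)  # C(M.Sampling) ~ N_k + M_k
        memory = merge_memory(memory, exemplars)                # C(M.Update)
    return memory
```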
## 6 Conclusions
This paper proposes an assessor-guided learning approach (AGLA) for continual learning, which puts forward a sequence-aware assessor to guide the learning process of the base learner. The assessor performs a soft-weighting mechanism controlling the influence of each data sample and the interaction of the loss functions. This is made possible by forming the loss function of the base learner as a meta-weighted combination of the cross-entropy loss, the DER loss and the distillation loss. The underlying objective is to arrive at a proper tradeoff between plasticity and stability: not only are positive samples selected, but suitable learning strategies are also chosen for every sample, i.e., the assessor balances stability and plasticity per sample. Another major contribution is the proposal of compensated over-sampling (COS) to address the class-imbalance problem in continual learning, where corrections are carried out while augmenting data samples to avoid the effect of out-of-distribution augmented samples. Rigorous numerical studies have been carried out on four popular continual learning problems in both the task-incremental and class-incremental learning settings. AGLA has been compared with ten recently published algorithms under the same computational environments, where it demonstrates improved and stable accuracy across all four problems in both settings with noticeable margins. In terms of average forgetting, AGLA delivers performances comparable to those of the memory-based approaches, mostly occupying the second-lowest average forgetting index, and better than those of the regularization-based approaches. Our ablation study demonstrates the advantage of each learning
component of AGLA, where the absence of any one component results in major performance drops. AGLA also maintains decent performance with small memory sizes, as indicated by our memory analysis. Existing continual learning approaches, including AGLA, operate with a large number of samples and over-fit quickly when each task contains only a few samples.
Our future work is directed toward few-shot continual learning, which addresses the limited-sample problem. Few-shot continual learning is challenging because it is difficult to achieve plasticity while maintaining stability with only a few samples per task; furthermore, replay methods that store previous task samples in memory cannot be applied, since they would amount to joint training. Our future work is also directed toward unsupervised few-shot continual learning to address unavailable labels in dynamic environments: in real-world applications, a dataset is often feasible to collect, but the labels (annotations) are hardly available and require expensive human annotation effort. Last but not least, our work is also directed toward federated continual learning problems to handle data-privacy constraints, where many agents, e.g., institutions, perform continual learning in their respective environments and collaborate with each other without sharing their private data.
## 7 Acknowledgement
This work is financially supported by the UniSA start-up grant. The third author acknowledges the support of the COMET-K2 Center of the Linz Center of Mechatronics (LCM), funded by the Austrian federal government and the federal state of Upper Austria.
|
2305.04819
|
Local Optimization Achieves Global Optimality in Multi-Agent
Reinforcement Learning
|
Policy optimization methods with function approximation are widely used in
multi-agent reinforcement learning. However, it remains elusive how to design
such algorithms with statistical guarantees. Leveraging a multi-agent
performance difference lemma that characterizes the landscape of multi-agent
policy optimization, we find that the localized action value function serves as
an ideal descent direction for each local policy. Motivated by the observation,
we present a multi-agent PPO algorithm in which the local policy of each agent
is updated similarly to vanilla PPO. We prove that with standard regularity
conditions on the Markov game and problem-dependent quantities, our algorithm
converges to the globally optimal policy at a sublinear rate. We extend our
algorithm to the off-policy setting and introduce pessimism to policy
evaluation, which aligns with experiments. To our knowledge, this is the first
provably convergent multi-agent PPO algorithm in cooperative Markov games.
|
Yulai Zhao, Zhuoran Yang, Zhaoran Wang, Jason D. Lee
|
2023-05-08T16:20:03Z
|
http://arxiv.org/abs/2305.04819v1
|
# Local Optimization Achieves Global Optimality in Multi-Agent Reinforcement Learning
###### Abstract
Policy optimization methods with function approximation are widely used in multi-agent reinforcement learning. However, it remains elusive how to design such algorithms with statistical guarantees. Leveraging a multi-agent performance difference lemma that characterizes the landscape of multi-agent policy optimization, we find that the localized action value function serves as an ideal descent direction for each local policy. Motivated by the observation, we present a multi-agent PPO algorithm in which the local policy of each agent is updated similarly to vanilla PPO. We prove that with standard regularity conditions on the Markov game and problem-dependent quantities, our algorithm converges to the globally optimal policy at a sublinear rate. We extend our algorithm to the off-policy setting and introduce pessimism to policy evaluation, which aligns with experiments. To our knowledge, this is the first provably convergent multi-agent PPO algorithm in cooperative Markov games.
## 1 Introduction
Recently, multi-agent reinforcement learning (MARL) has demonstrated many empirical successes, e.g., popular strategy games such as Go (Silver et al., 2016), StarCraft II (Vinyals et al., 2019), and poker (Brown and Sandholm, 2018). In contrast to vanilla reinforcement learning (RL), which is only concerned with a single agent seeking to maximize the total reward, MARL studies how multiple agents interact with the shared environment and other agents.
Policy optimization methods are widely used in MARL. These algorithms often parameterize policies with a function class and compute the gradients of the cumulative reward using the policy gradient theorem (Sutton et al., 1999) or its variants (e.g., NPG Kakade (2001) and PPO (Schulman et al., 2017)) to update the policy parameters.
Despite the empirical successes, theoretical studies of policy optimization in MARL are very limited. Even for the cooperative setting where the agents share a common goal: maximizing the total reward function, numerous challenges arise (Zhang et al., 2021). (1) non-stationarity: each action taken by one agent affects the total reward and the transition of state. Consequently, each learning agent must learn to adapt to the changing environment caused by other agents. From the optimization perspective, the geometry of the multi-agent policy optimization problem becomes unclear. Direct application of traditional single-agent analysis becomes vague due to the lack of stationary Markovian property, which states that evolution in the future only depends on the previous state and individual action. (2) scalability: taking other agents into consideration, each individual agent would face the joint action space, whose dimension increases exponentially
with the number of agents. Thus, having numerous agents in the environment problematizes the theoretical analysis of MARL. (3) function approximation: closely related to the scalability issue, the state space and joint action space are often immense in MARL, promoting function approximation to become a necessary component in MARL at the ease of computation and statistical analysis.
In this paper, we aim to answer the following fundamental question:
_Can we design a provably convergent multi-agent policy optimization algorithm in the cooperative setting with function approximation?_
We answer the above question affirmatively. We propose a multi-agent PPO algorithm in which the local policy of each agent is updated **sequentially** in a similar fashion to the vanilla PPO algorithm (Schulman et al., 2017). In particular, we leverage a multi-agent performance difference lemma (cf. Lemma 4.1), assuming the joint policy is decomposed into conditionally dependent policies. Such a lemma characterizes the landscape of policy optimization, showing the superiority of using localized action value functions as the descent direction for each local policy. Such a factorized structure essentially bypasses the non-stationarity and scalability concerns. To address large state spaces, we parameterize each local policy using a log-linear parametrization and propose to update the policy parameters via KL divergence-regularized mirror descent, where the descent direction is estimated separately. Combining these results, we obtain our multi-agent PPO algorithm. We prove that the multi-agent PPO algorithm converges to the globally optimal policy at a sublinear rate. Furthermore, we extend multi-agent PPO to the off-policy setting, in which the policy is evaluated using samples collected according to a _data distribution_ \(\mu\). We prove similar theoretical guarantees under a coverage assumption on the sampling distribution.
We summarize our contributions below.
**Our contributions.** First, by focusing on the factorized policies, we prove a multi-agent version of the performance difference lemma showing that the action value functions are ideal descent directions for local policies. Such a geometric characterization functions as a remedy for the non-stationarity concern, motivating our multi-agent PPO algorithm.
Second, we adopt the log-linear function approximation for the policies. We prove that multi-agent PPO converges at a sublinear \(\mathcal{O}\left(\frac{N}{1-\gamma}\sqrt{\frac{\log|\mathcal{A}|}{K}}\right)\) rate up to some statistical errors incurred in evaluating/improving policies, where \(K\) is the number of iterations, \(N\) is the number of agents and \(|\mathcal{A}|\) is the action space of each individual agent. The sample complexity depends polynomially on \(N\), thus breaking the curse of scalability.
Third, we propose an off-policy variant of the multi-agent PPO algorithm and introduce pessimism into policy evaluation. The algorithm also converges sublinearly to the globally optimal policy up to the statistical error \(\widetilde{\mathcal{O}}(n^{-\frac{1}{3}})\). Here, \(n\) is the number of samples used to estimate the critics.\({}^{1}\) A key feature of the sample complexity bound is that it only requires single-policy concentrability.
Footnote 1: \(\widetilde{\mathcal{O}}\left(\cdot\right)\) hides logarithmic factors.
To our knowledge, this is the first provably convergent multi-agent PPO algorithm in cooperative Markov games with function approximation.
**Organization.** This paper is organized as follows. In Section 2, we review related literature. In Section 3, we formally describe the problem setup and introduce the necessary definitions. In Section 4, we state the main multi-agent PPO algorithm in detail. We further extend our results to the off-policy setting in Section 5. We conclude in Section 6 and defer the proofs to the Appendix.
Related Work
**Policy optimization.** Many empirical works have proven the validity and efficiency of policy optimization methods in games and other applications (Silver et al., 2016, 2017; Guo et al., 2016; Tian et al., 2019). These works usually update the policy parameter in its parametric space using the pioneering policy gradient (PG) theorem by Sutton et al. (1999), or many PG variants invented to improve the empirical performances of vanilla PG methods. In particular, Kakade (2001) introduced the natural policy gradient (NPG) algorithm which searched for the steepest descent direction within the parameter space based on the idea of KL divergence-regularization. Trust region learning-based algorithms are often regarded as advanced policy optimization methods in practice (Lillicrap et al., 2015; Duan et al., 2016), showing superior performances with stable updates. Specifically, TRPO (Schulman et al., 2015) and PPO (Schulman et al., 2017) could be seen as KL divergence-constrained variants of NPG. A benign feature of these algorithms is the monotonic improvement guarantees of the expected return.
Despite prosperous empirical findings, the lack of convexity often impedes the development of theories for policy optimization methods. Denote \(K\) and \(T\) as the number of iterations and samples. Agarwal et al. (2020) showed an iteration complexity of \(\mathcal{O}(K^{-\frac{1}{2}})\) and a sample complexity of \(\mathcal{O}(T^{-\frac{1}{4}})\) for online NPG with function approximation. Shani et al. (2020) considered a sample-based TRPO and proved a \(\tilde{\mathcal{O}}(T^{-\frac{1}{2}})\) rate converging to the global optimum, which could be improved to \(\tilde{\mathcal{O}}(1/r)\) when regularized. Making minor modifications to the vanilla PPO algorithm, Liu et al. (2019) presented a convergence rate of \(\mathcal{O}(K^{-\frac{1}{2}})\) to global optima when parameterizing both policy and \(Q\) functions with neural networks. The key to their analysis is the desirable one-point monotonicity in infinite-dimensional mirror descent that assists in characterizing the policy updates without convexity. We also make use of similar one-point properties in our multi-agent PPO algorithm analysis.
**MARL.** Markov Game (MG) is a commonly used model to characterize the multi-agent decision-making process (Shapley, 1953; Littman, 1994), which can be regarded as a multi-agent extension of the Markov Decision Process (MDP). Policy-based algorithms could generalize to large states through function approximation. There has been growing interest in developing provably efficient algorithms for Markov games (Daskalakis et al., 2020; Cen et al., 2021; Zhao et al., 2022; Ding et al., 2022; Cen et al., 2022). These works often studied competitive RL settings, e.g., zero-sum games. Their convergence rates usually depended on various notions of concentrability coefficient and may not scale tightly under the worst scenario.
**Policy optimization for MARL.** Applying policy optimization methods in the MARL setting is more complicated than in the single-agent setting because of the non-stationary environment faced by each agent (Zhang et al., 2021). A learning paradigm called centralized training with decentralized execution (CTDE) is often used in practice (Kraemer and Banerjee, 2016; Lowe et al., 2017; Foerster et al., 2018; Yang et al., 2018; Wen et al., 2019; Zhang et al., 2020). In CTDE, a joint centralized value function helps to address the non-stationarity issue caused by other agents. Each agent has access to the global state and actions of other agents during training, thus allowing them to adjust their policy parameters individually. For instance, Lowe et al. (2017) proposed a multi-agent policy gradient algorithm in which agents learned a centralized critic based on the observations and actions of all agents.
Trust region learning (Schulman et al., 2015) has recently been combined with the CTDE paradigm to ensure monotonic improvements. In particular, IPPO (de Witt et al.,
2020] and MAPPO [Yu et al., 2021] showed strong performances of PPO-based methods in the cooperative setting. The practical efficacy of these methods is usually restricted by the _homogeneity_ assumption, where the agents share a common action space and policy parameter. Theoretically, providing statistical guarantees for policy optimization algorithms in MARL is more complicated than single-agent scenario [Zhang et al., 2021]. In Markov games, the non-stationary environment faced by each agent precludes direct application of the single-agent convergence analysis. A recent attempt by Kuba et al. [2022] proposed the first set of trust region learning algorithms in MARL that enjoyed monotonic improvement guarantees assuming neither homogeneity of agents nor value function decomposition rule. The critical observation leading to their results is the multi-agent advantage function decomposition rule that builds the sequential policy update structure. However, they did not show rates of convergence. In this work, we design a new, provably convergent PPO algorithm for fully cooperative Markov games that converges to globally optimal at policy at sublinear rates by taking advantage of this conditional dependency structure.
**Pessimism-based RL methods.** Though able to account for large state/action spaces, function approximation also has its own drawbacks. A significant issue arising from the use of function approximators is the frequent occurrence of a positive bias in value function estimates [Thrun and Schwartz, 1993], so the learner may not receive an accurate assessment. Numerous empirical works leverage the principle of _pessimism_ to correct such overestimation [Fujimoto et al., 2018, Laskin et al., 2020, Lee et al., 2020, Moskovitz et al., 2021]. For example, to reduce the evaluation bias brought by function approximation, Fujimoto et al. [2018] constructed the Bellman target by choosing the minimum of two value estimates as an intuitive lower bound. Their approach took a pessimistic view of the value function.
On the theoretical side, a growing body of literature in offline reinforcement learning has also focused on pessimism to account for datasets lacking coverage [Liu et al., 2020, Jin et al., 2021, Uehara and Sun, 2021, Rashidinejad et al., 2021, Zhan et al., 2022]. Technically, these works aim at maximizing the worst-case rewards that a trained agent could obtain. Instead of relying on coverage assumptions on the dataset [Munos, 2003, Munos and Szepesvari, 2008, Chen and Jiang, 2019], these methods provide dataset-dependent performance bounds, thus yielding robust results for datasets lacking exploration, for which traditional methods do not apply. We focus on the off-policy setting in Section 5, where we leverage Bellman-consistent pessimism [Xie et al., 2021]. We show concrete bounds under linear function approximation by assuming a sampling oracle that provides reward and transition estimates used in approximating action value functions.
## 3 Preliminaries
In this section, we introduce necessary notations, problem setup, and some useful quantities that will be frequently used in this work.
### Setup and Notations
**Setup.** We consider a _fully-cooperative_ Markov game [Shapley, 1953, Littman, 1994], which is defined by a tuple \((\mathcal{N},\mathcal{S},\mathbf{\mathcal{A}},\mathcal{P},r,\gamma)\). Here, \(\mathcal{N}=\{1,\ldots,N\}\) denotes the set of agents, \(\mathcal{S}\) is the finite state space, \(\mathbf{\mathcal{A}}=\mathcal{A}^{N}\) is the product of the finite action spaces of all agents (i.e., the joint action space), \(\mathcal{P}:\mathcal{S}\times\mathbf{\mathcal{A}}\times\mathcal{S}\to[0,1]\) is the transition
kernel, \(r:\mathcal{S}\times\mathbf{\mathcal{A}}\rightarrow[0,1]\) is a reward function, and \(\gamma\in[0,1)\) is the discount factor.\({}^{2}\) The agents interact with the environment according to the following protocol: at time step \(t\), the agents are at state \(s_{t}\in\mathcal{S}\); every agent \(i\) takes an action \(a_{t}^{i}\in\mathcal{A}\), drawn from its policy \(\pi^{i}(\cdot|s_{t})\), which together with the actions of the other agents gives a joint action \(\mathbf{a}_{t}=(a_{t}^{1},\ldots,a_{t}^{N})\in\mathbf{\mathcal{A}}\), drawn from the joint policy \(\mathbf{\pi}(\cdot|s_{t})=\prod_{i=1}^{N}\pi^{i}(\cdot|s_{t})\); the agents receive a joint reward \(r_{t}=r(s_{t},\mathbf{a}_{t})\in\mathbb{R}\) and move to \(s_{t+1}\sim\mathcal{P}(\cdot|s_{t},\mathbf{a}_{t})\). Given the joint policy \(\mathbf{\pi}\), the transition probability function \(\mathcal{P}\), and the initial state distribution \(\rho\), we define the discounted occupancy state-action distribution as
Footnote 2: For clarity, we assume \(N\) agents share the same set of actions. It is straightforward to generalize our results to the setting where action sets are different. See Section 4.
\[d_{\mathbf{\pi}}(s,\mathbf{a})=(1-\gamma)\sum_{t=0}^{\infty}\gamma^{t}\Pr{}^{\mathbf{\pi}}(s_{t}=s,\mathbf{a}_{t}=\mathbf{a}\,|\,s_{0}\sim\rho).\]
The standard value function and action value function are defined as
\[V_{\mathbf{\pi}}(s) \triangleq\underset{\mathbf{a}_{0:\infty}\sim\mathbf{\pi},\,s_{1:\infty}\sim\mathcal{P}}{\mathbb{E}}\left[\sum_{t=0}^{\infty}\gamma^{t}r_{t}\Big{|}\;s_{0}=s\right],\] \[Q_{\mathbf{\pi}}(s,\mathbf{a}) \triangleq\underset{s_{1:\infty}\sim\mathcal{P},\,\mathbf{a}_{1:\infty}\sim\mathbf{\pi}}{\mathbb{E}}\left[\sum_{t=0}^{\infty}\gamma^{t}r_{t}\Big{|}\;s_{0}=s,\;\mathbf{a}_{0}=\mathbf{a}\right].\]
The standard advantage function considering all agents is written as \(A_{\mathbf{\pi}}(s,\mathbf{a})\triangleq Q_{\mathbf{\pi}}(s,\mathbf{a})-V_{\mathbf{\pi}}(s)\). Later, we shall introduce the agents-specific advantage functions.
Let \(\nu_{\mathbf{\pi}}(s)\) and \(\sigma_{\mathbf{\pi}}(s,\mathbf{a})=\mathbf{\pi}(\mathbf{a}|s)\cdot\nu_{\mathbf{\pi}}(s)\) denote the stationary state distribution and the stationary state-action distribution associated with a joint policy \(\mathbf{\pi}\), respectively. Define the underlying optimal policy as \(\mathbf{\pi}_{*}\). We use \(\nu_{*}\) and \(\sigma_{*}\) in this paper to indicate \(\nu_{\mathbf{\pi}_{*}}\) and \(\sigma_{\mathbf{\pi}_{*}}\) for simplicity.
Throughout this paper, we pay close attention to the contribution of different subsets of agents to the performance of the whole team. We introduce the following multi-agent notations before proceeding to multi-agent definitions.
**Notations.** In this work, we index the \(N\) agents with integers from \(1\) to \(N\) and use the set \(\mathcal{N}=\{i\,|\,i=1,\cdots,N\}\) to represent all agents. We use \(m\in\mathcal{N}\) to indicate the specific \(m\)-th agent. In particular, a set in the superscript of a term represents the quantities associated with the agents in that set. For example, \(\mathbf{a}^{\{1,2,3\}}\) represents the joint action of agents \(1,2\) and \(3\). We may write an index \(k\) in the superscript when we refer to the specific \(k\)-th agent. When bold symbols are used without any superscript (e.g., \(\mathbf{a}\)), they refer to all agents. For simplicity, let \((m:m^{\prime})\) be shorthand for the set \(\{i\,|\,m\leq i\leq m^{\prime},i\in\mathcal{N}\}\). An example is \(\mathbf{\pi}^{1:m}(\cdot|s)\), which represents the joint policy of agents \(1,2,\cdots,m\).
We now introduce the multi-agent action value functions and advantage functions that characterize contributions from specific sub-agents.
**Definition 3.1**.: Let \(P\) be a subset in \(\mathcal{N}\). The multi-agent action value function associated with agents in \(P\) is
\[Q_{\mathbf{\pi}}^{P}\left(s,\mathbf{a}^{P}\right)\triangleq\mathbb{E}_{\tilde{ \mathbf{a}}\sim\tilde{\mathbf{\pi}}}\left[Q_{\mathbf{\pi}}\left(s,\mathbf{a}^{P}, \tilde{\mathbf{a}}\right)\right],\]
here we use a tilde over symbols to refer to the complement agents, namely \(\tilde{\mathbf{a}}=\{a^{i}|i\not\in P,i\in\mathcal{N}\}\).
Let \(P,P^{\prime}\subseteq\mathcal{N}\) be two disjoint subsets of agents. The multi-agent advantage function is defined below. Essentially, it accounts for the improvement gained by additionally fixing the actions \(\mathbf{a}^{P^{\prime}}\) on top of \(\mathbf{a}^{P}\), while all other agents follow \(\mathbf{\pi}\).
\[A_{\mathbf{\pi}}^{P^{\prime}}\left(s,\mathbf{a}^{P},\mathbf{a}^{P^{\prime}}\right) \triangleq Q_{\mathbf{\pi}}^{P\cup P^{\prime}}\left(s,\mathbf{a}^{P},\mathbf{a}^{P^ {\prime}}\right)-Q_{\mathbf{\pi}}^{P}\left(s,\mathbf{a}^{P}\right).\]
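As a concrete illustration of Definition 3.1, the toy computation below (with an arbitrary \(Q\)-table at a fixed state, \(N=3\) agents, and two actions each) marginalizes out the complement agents under a product policy; the numbers are synthetic and only serve to show how \(Q^{P}_{\boldsymbol{\pi}}\) and the multi-agent advantage are assembled.

```python
import numpy as np

rng = np.random.default_rng(0)
Q = rng.random((2, 2, 2))                        # Q_pi(s, a1, a2, a3) at a fixed state s
pi = [np.array([0.6, 0.4]) for _ in range(3)]    # product policy pi^i(.|s)

# Q^{1}(s, a1): average out agents 2 and 3 under their policies.
Q1 = np.einsum('ijk,j,k->i', Q, pi[1], pi[2])
# Q^{1,2}(s, a1, a2): average out agent 3 only.
Q12 = np.einsum('ijk,k->ij', Q, pi[2])
# Multi-agent advantage A^{2}(s, a1, a2) = Q^{1,2}(s, a1, a2) - Q^{1}(s, a1).
A2 = Q12 - Q1[:, None]
print(A2)
```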
The multi-agent Bellman operators are defined by generalizing the classic versions.
**Definition 3.2**.: For \(m\in\mathcal{N}\) and any function \(f:\mathcal{S}\times\mathcal{A}^{m}\to\mathbb{R}\) we define _multi-agent Bellman operator_\(\mathcal{T}_{\mathbf{\pi}}^{1:m}:\mathbb{R}^{\mathcal{S}\times\mathcal{A}^{m}}\mapsto \mathbb{R}^{\mathcal{S}\times\mathcal{A}^{m}}\) as
\[\mathcal{T}_{\mathbf{\pi}}^{1:m}f(s,\mathbf{a}^{1:m})\coloneqq\mathop{\mathbb{E}}_{\hat{\mathbf{a}}\sim\hat{\mathbf{\pi}}}\left[r(s,\mathbf{a}^{1:m},\hat{\mathbf{a}})\right]+\gamma\mathop{\mathbb{E}}_{\hat{\mathbf{a}}\sim\hat{\mathbf{\pi}},\ s^{\prime}\sim\mathcal{P}(\cdot|s,\mathbf{a}^{1:m},\hat{\mathbf{a}}),\ \mathbf{a}^{\prime\,1:m}\sim\mathbf{\pi}^{1:m}(\cdot|s^{\prime})}\left[f(s^{\prime},\mathbf{a}^{\prime\,1:m})\right],\]
where \(\hat{\mathbf{a}}\) denotes the joint action of the complement agents \(\{m+1,\ldots,N\}\), drawn from their policies under \(\mathbf{\pi}\).
**Parametrization.** For the \(m\)-th agent (\(m\in\mathcal{N}\)), its conditional policy depends on all prior ordered agents through \(\mathbf{a}^{1:m-1}\). Given a coefficient vector \(\theta^{m}\in\Theta\), where \(\Theta=\{\theta\in\mathbb{R}^{d}\,:\,\|\theta\|\leq R\}\) is a convex, norm-constrained set, the probability of choosing action \(a^{m}\) under state \(s\) is
\[\pi_{\theta^{m}}(a^{m}|s,\mathbf{a}^{1:m-1})=\frac{\exp\left(\phi^{\top}(s, \mathbf{a}^{1:m-1},a^{m})\theta^{m}\right)}{\sum\limits_{a^{m}\in\mathcal{A}} \exp\left(\phi^{\top}(s,\mathbf{a}^{1:m-1},a^{m})\theta^{m}\right)} \tag{2}\]
where \(\phi\) is a set of feature vector representations. Without loss of generality, we impose a regularity condition such that every \(\|\phi\|_{2}\leq 1\). This parametrization has been widely used in RL literature (Branavan et al., 2009; Gimpel and Smith, 2010; Heess et al., 2013; Agarwal et al., 2020; Zhao et al., 2022).3
Footnote 3: We assume that all players share the same parameter set only for clarity. We only need minor modifications in the analysis to extend our results to the setting where \(N\) agents have different capabilities. Specifically, we only need to treat norm bounds of updates (\(R\)), regularity conditions on features, and \(\beta\) separately for each agent.
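A minimal sketch of the log-linear conditional policy in Eq. (2) is given below; `phi` is an assumed feature-map callable returning a \(d\)-dimensional vector, and the action set is a plain list, both introduced here only for illustration.

```python
import numpy as np

def conditional_policy(theta_m, phi, s, a_prev, actions):
    """pi_{theta^m}(a^m | s, a^{1:m-1}) from Eq. (2), computed as a numerically stable softmax."""
    logits = np.array([phi(s, a_prev, a) @ theta_m for a in actions])
    logits -= logits.max()                 # shift for numerical stability
    weights = np.exp(logits)
    return weights / weights.sum()
```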
### Policy Improvement and Evaluation
At the \(k\)-th iteration, we have the current policy \(\mathbf{\pi}_{\theta_{k}}\), and we need to: (1) perform **policy evaluation** to obtain the action value function estimates \(\hat{Q}_{\mathbf{\pi}_{\theta_{k}}}\) for determining the quality of \(\mathbf{\pi}_{\theta_{k}}\). (2) perform **policy improvement** to update policy to \(\mathbf{\pi}_{\theta_{k+1}}\).
For notational simplicity, we use \(\nu_{k}\) and \(\sigma_{k}\) to represent the stationary state distribution \(\nu_{\mathbf{\pi}_{\theta_{k}}}\) and the stationary state-action distribution \(\sigma_{\mathbf{\pi}_{\theta_{k}}}\) induced by \(\mathbf{\pi}_{\theta_{k}}\).
**Policy Improvement.** At the \(k\)-th iteration, we define \(\hat{\pi}_{k+1}^{m}\) as the ideal update based on \(\hat{Q}_{\mathbf{\pi}_{\theta_{k}}}^{1:m}\) (for agent \(m\in\mathcal{N}\)), an estimator of \(Q_{\mathbf{\pi}_{\theta_{k}}}^{1:m}\). The ideal update is obtained via the following optimization
\[\hat{\pi}_{k+1}^{m}\leftarrow\arg\max_{\pi^{m}}\hat{F}(\pi^{m}) \tag{3}\] \[\hat{F}(\pi^{m})=\mathop{\mathbb{E}}_{\sigma_{k}}\left[\langle \hat{Q}_{\mathbf{\pi}_{\theta_{k}}}^{1:m}(s,\mathbf{a}^{1:m-1},\cdot),\pi^{m}( \cdot|s,\mathbf{a}^{1:m-1})\rangle-\beta_{k}KL\left(\pi^{m}(\cdot|s,\mathbf{a} ^{1:m-1})\|\pi_{\theta_{k}^{m}}(\cdot|s,\mathbf{a}^{1:m-1})\right)\right]\]
where \(\theta_{k}^{m}\) is the parameter of the current conditional policy of the \(m\)-th agent. In the above equation, the expectation is taken over \((s,\mathbf{a}^{1:m-1})\sim\nu_{k}\mathbf{\pi}_{\theta_{k}}^{1:m-1}\), which we abbreviate as \(\sigma_{k}\). Under the log-linear parametrization \(\pi_{\theta_{k}^{m}}\propto\exp\{\phi^{\top}\theta_{k}^{m}\}\), the ideal policy update has the following closed form.
**Proposition 4.2**.: _Given an estimator \(\hat{Q}_{\mathbf{\pi}_{\theta_{k}}}^{1:m}\), the KL divergence-regularized update (3) has the following explicit solution_
\[\hat{\pi}_{k+1}^{m}(\cdot|s,\mathbf{a}^{1:m-1})\propto\exp\left\{\beta_{k}^{-1 }\hat{Q}_{\mathbf{\pi}_{\theta_{k}}}^{1:m}(s,\mathbf{a}^{1:m-1},\cdot)\ +\phi^{\top}(s,\mathbf{a}^{1:m-1},\cdot)\theta_{k}^{m}\right\}.\]
The proof is straightforward: add the normalization constraint \(\sum_{a^{m}\in\mathcal{A}}\pi^{m}(a^{m}|s,\mathbf{a}^{1:m-1})=1\) to \(\hat{F}(\pi^{m})\) via a Lagrange multiplier. See details in Appendix B.
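In words, the closed form in Proposition 4.2 is a softmax over the current log-linear logits shifted by the scaled \(Q\)-estimates. A minimal sketch, assuming the \(\hat{Q}\) values for the candidate actions are supplied as a vector:

```python
import numpy as np

def ideal_update(theta_k_m, phi_features, q_hat, beta_k):
    """hat{pi}^m_{k+1}(. | s, a^{1:m-1}) proportional to exp(q_hat / beta_k + phi^T theta_k^m).

    phi_features: (|A|, d) feature rows; q_hat: (|A|,) estimated action values.
    """
    logits = q_hat / beta_k + phi_features @ theta_k_m
    logits -= logits.max()                     # numerical stabilization
    probs = np.exp(logits)
    return probs / probs.sum()
```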
To approximate the ideal \(\hat{\pi}_{k+1}^{m}\) using a parameterized \(\pi_{\theta_{k+1}^{m}}\propto\exp\{\phi^{\top}\theta_{k+1}^{m}\}\), we minimize the following mean-squared error (MSE) as a sub-problem
\[\theta_{k+1}^{m}\leftarrow\arg\min_{\theta^{m}\in\Theta}L(\theta^{m}) \tag{4}\]
where \(L(\theta^{m})\) is defined as
\[L(\theta^{m})=\mathop{\mathbb{E}}_{\sigma_{k}}\left((\theta^{m}-\theta_{k}^{m} )^{\top}\phi(s,\mathbf{a}^{1:m-1},a^{m})-\frac{\hat{Q}_{\mathbf{\pi}_{\theta_{k}}}^{ 1:m}(s,\mathbf{a}^{1:m-1},a^{m})}{\beta_{k}}\right)^{2}\]
Intuitively, a small \(L(\theta^{m})\) indicates that \(\pi_{\theta^{m}}\) is close to the ideal update \(\hat{\pi}_{k+1}^{m}\). Moreover, if \(\hat{\pi}_{k+1}^{m}\) lies exactly in the log-linear function class, i.e., there exists a \(\vartheta\in\Theta\) such that \(\hat{\pi}_{k+1}^{m}\propto\exp\left\{\phi^{\top}\vartheta\right\}\), then \(L(\vartheta)=0\).
To solve the MSE minimization problem (4), we use classic projected SGD updates. With stepsize \(\eta\), at each step \(t=0,1,\cdots,T-1\) the parameter \(\theta\) is updated via
\[\theta(t+\frac{1}{2}) \leftarrow\theta(t)-2\eta\phi\left((\theta(t)-\theta_{k}^{m})^{ \top}\phi-\beta_{k}^{-1}\hat{Q}_{\mathbf{\pi}_{\theta_{k}}}^{1:m}\right)\] \[\theta(t+1) \leftarrow\Pi_{\Theta}\theta(t+\frac{1}{2})\]
where we omit \((s,\mathbf{a}^{1:m-1},a^{m})\) for simplicity, which is sampled from \(\sigma_{k}\). See Algorithm 3 for the detailed solver.
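A minimal sketch of this projected-SGD inner solver is given below; the sampling routine that draws \((s,\mathbf{a}^{1:m-1},a^{m})\sim\sigma_{k}\) and returns the corresponding feature vector and \(\hat{Q}\) value is left abstract, and the projection onto \(\Theta=\{\|\theta\|\leq R\}\) is the usual norm clipping.

```python
import numpy as np

def project_ball(theta, R):
    """Euclidean projection onto Theta = {theta : ||theta||_2 <= R}."""
    norm = np.linalg.norm(theta)
    return theta if norm <= R else theta * (R / norm)

def solve_mse_subproblem(theta_k_m, sample_fn, beta_k, eta, T, R):
    """Projected SGD on L(theta^m) from Eq. (4).

    sample_fn() -> (phi_vec, q_hat): a feature vector phi(s, a^{1:m-1}, a^m) and
    the estimate hat{Q}^{1:m}(s, a^{1:m-1}, a^m), jointly drawn from sigma_k.
    """
    theta = theta_k_m.copy()
    for _ in range(T):
        phi_vec, q_hat = sample_fn()
        residual = (theta - theta_k_m) @ phi_vec - q_hat / beta_k
        theta = project_ball(theta - 2.0 * eta * residual * phi_vec, R)
    return theta
```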
**Policy Evaluation.** In this step, we aim to examine the quality of the attained policy, for which a \(Q\)-function estimator is required. We make the following assumption.
**Assumption 4.3**.: Assume we can access an estimator of the \(Q\)-function that returns \(\hat{Q}\). The returned \(\hat{Q}\) satisfies the following condition for all \(m\in\mathcal{N}\) at the \(k\)-th iteration:
\[\left[\operatorname*{\mathbb{E}}_{\sigma_{k}}\left(\hat{Q}_{\mathbf{\pi}_{\theta_{k}}}^{1:m}(s,\mathbf{a}^{1:m-1},a^{m})-Q_{\mathbf{\pi}_{\theta_{k}}}^{1:m}(s,\mathbf{a}^{1:m-1},a^{m})\right)^{2}\right]^{1/2}\leq\xi_{k}^{m}.\]
We also have a regularity condition for the estimator: there exists a positive constant \(B\), such that for any \(m\in\mathcal{N}\) and \((s,\mathbf{a}^{1:m-1},a^{m})\in\mathcal{S}\times\mathcal{A}^{m-1}\times \mathcal{A}\),
\[\left|\hat{Q}_{\mathbf{\pi}_{\theta_{k}}}^{1:m}(s,\mathbf{a}^{1:m-1},a^{m})\right| \leq B.\]
In RL practice, such an estimator is often instantiated with deep neural networks (DNNs) (Mnih et al., 2015). While there has been recent interest in theoretical guarantees for DNNs as function approximators (Fan et al., 2020), we simply assume access to such an estimator to keep our algorithm general. Estimators such as an episodic sampling oracle that rolls out trajectories (Agarwal et al., 2020) or neural networks (Mnih et al., 2015; Liu et al., 2019) are all possible options here. As a generalization, we introduce a specific value function approximation setting in Section 5, in which we assume all \(Q\)-functions lie in a linear class \(\mathcal{F}\), and we further adopt the principle of pessimism for better exploration.
**Algorithm.** Equipped with the sub-problem solver for policy improvement and the \(Q\)-function estimator, we are prepared to present the provable multi-agent PPO algorithm. The pseudo-code is listed in Algorithm 1. The algorithm runs for \(K\) iterations. At the \(k\)-th iteration, we estimate the \(Q\)-function for each agent \(m\in\mathcal{N}\) via the estimator (cf. Assumption 4.3) to measure the quality of \(\mathbf{\pi}_{\theta_{k}}\). The estimates also serve as the ideal descent direction for policy improvement. Since we use a constrained parametric policy class, the ideal update is approximated with the best policy parameter \(\theta\in\Theta\) by minimizing the MSE problem (4), which runs SGD for \(T\) iterations (cf. Algorithm 3). Thanks to the geometric characterization (cf. Lemma 4.1), we are guaranteed to reach a globally improved total reward by updating each agent consecutively.
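Schematically, the outer loop of Algorithm 1 can be organized as below; `estimate_q`, `make_sampler`, and `inner_solver` are hypothetical stand-ins for the \(Q\)-estimator of Assumption 4.3, the \(\sigma_{k}\)-sampler, and the sub-problem solver (e.g. the projected-SGD sketch above), none of which is specified here beyond its interface.

```python
def multi_agent_ppo(thetas, K, estimate_q, make_sampler, inner_solver):
    """Schematic outer loop of Algorithm 1: K iterations of sequential updates.

    thetas:       list of N per-agent parameter vectors.
    estimate_q:   (thetas, m) -> hat{Q}^{1:m} for the current joint policy.
    make_sampler: (thetas, m, q_hat) -> sampler for the MSE sub-problem (4).
    inner_solver: (theta_m, sampler) -> improved theta_m (e.g. projected SGD).
    """
    for _ in range(K):                                    # policy iterations
        for m in range(len(thetas)):                      # sequential pass over agents
            q_hat_m = estimate_q(thetas, m)               # policy evaluation
            sampler = make_sampler(thetas, m, q_hat_m)
            thetas[m] = inner_solver(thetas[m], sampler)  # policy improvement
    return thetas
```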
### Theoretical Analysis
Our analysis relies on problem-dependent quantities. We denote the weighted \(L_{p}\)-norm of a function \(f\) on a space \(\mathcal{X}\) with respect to a distribution \(\rho\) as \(\|f\|_{p,\rho}=\left(\sum_{x\in\mathcal{X}}\rho(x)|f(x)|^{p}\right)^{\frac{1}{p}}\).
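For a finite space, this norm is a one-liner; the sketch below assumes \(f\) and \(\rho\) are given as aligned vectors.

```python
import numpy as np

def weighted_lp_norm(f_vals, rho, p=2):
    """||f||_{p, rho} = ( sum_x rho(x) |f(x)|^p )^(1/p) over a finite space."""
    return float(np.sum(rho * np.abs(f_vals) ** p) ** (1.0 / p))
```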
**Definition 4.4**.: At the \(k\)-th iteration, for \(m\in\mathcal{N}\) we define the following problem-dependent quantity using Radon-Nikodym derivatives
\[\phi_{k}^{m}=\left\|\frac{d\left(\nu_{\mathbf{\pi}_{*}}\mathbf{\pi}_{*}^{1:m}\right)}{d\left(\nu_{k}\mathbf{\pi}_{\theta_{k}}^{1:m}\right)}\right\|_{2,\sigma_{k}}\]
These quantities are the well-known concentrability coefficients (Munos, 2003; Farahmand et al., 2010; Chen and Jiang, 2019), specialized to the factorized policy. Still, our conditions are structurally simpler and weaker because they are only density ratios between stationary state-action distributions and do not require rolling out trajectories.
Now we are prepared to present the main theorem that characterizes the global convergence rate.
**Theorem 4.5**.: _Under Assumption 4.3, for the output policy \(\bar{\mathbf{\pi}}\) attained by Algorithm 1 in the fully cooperative Markov game, set \(\eta=\frac{R}{G\sqrt{T}}\) and_
\[\beta=\sqrt{\frac{NB^{2}/2}{N\log|\mathcal{A}|+\sum_{m=1}^{N}\sum_{k=0}^{K-1}( \Delta_{k}^{m}+\delta_{k}^{m})}}.\]
_After \(K\) iterations, we have \(J(\mathbf{\pi}_{*})-J(\bar{\mathbf{\pi}})\) upper bounded by_
\[\mathcal{O}\left(\frac{B\sqrt{N}}{1-\gamma}\sqrt{\frac{N\log|\mathcal{A}|+ \sum_{m=1}^{N}\sum_{k=0}^{K-1}(\Delta_{k}^{m}+\delta_{k}^{m})}{K}}\right)\]
_where \(\Delta_{k}^{m}=\sqrt{2}(\phi_{k}^{m}+\phi_{k}^{m-1})\cdot\left(\epsilon_{k}^{ m}+\frac{\xi_{k}^{m}}{\beta_{k}}\right)\) and \(\delta_{k}^{m}=2\phi_{k}^{m-1}\epsilon_{k}^{m}\). Here \(\epsilon_{k}^{m}\) is the statistical error of a PPO iteration: for agent \(m\in\mathcal{N}\),_
\[\mathbb{E}_{\sigma_{k}}\Big{(}(\theta_{k+1}^{m}-\theta_{k}^{m})^{\top}\phi- \beta_{k}^{-1}\hat{Q}_{\mathbf{\pi}_{\mathbf{\theta}_{k}}}^{1:m}\Big{)}^{2}\leq( \epsilon_{k}^{m})^{2}\]
_where we omit \((s,\mathbf{a}^{1:m-1},a^{m})\) for simplicity._
_Let \(\epsilon_{approx}\) be the approximation capability of the log-linear policy class we adopt, then \(\epsilon_{k}^{m}=\epsilon_{approx}+\mathcal{O}(T^{-\frac{1}{4}})\)._
Theorem 4.5 explicitly characterizes the performance of the output \(\bar{\mathbf{\pi}}\) in terms of the number of iterations and the iteration errors. When PPO updates are ideal, namely,
when \(\delta_{k}^{m}\) and \(\Delta_{k}^{m}\) are \(0\) for all \(m\in\mathcal{N}\) and \(k<K\), the rate simplifies to \(\mathcal{O}\left(\frac{NB}{1-\gamma}\sqrt{\frac{\log|\mathcal{A}|}{K}}\right)\). The dependency on the iteration count \(K\) is \(\mathcal{O}(K^{-\frac{1}{2}})\), matching the rate of the sample-based single-agent NPG analysis (Agarwal et al., 2020; Liu et al., 2019).
The proof of Theorem 4.5 further requires two ingredients: the mirror-descent update analysis used in (Liu et al., 2019) and Lemma 4.1, which builds the sequential dependency structure among the agents. The full proof is deferred to Appendix B.
### Compare with Independent Learning
In MARL, independent learning refers to a class of algorithms that train multiple agents independently. In these methods, each agent has its own policy function that maps the agent's observations to its actions. The policies are optimized using policy gradient methods in a decentralized manner, without explicit communication or coordination and without explicitly modeling the behavior of the other agents. Independent learning methods are widely used in MARL due to their strong performance and efficiency.
In this subsection, we provide detailed comparisons between our algorithm and previous results on independent learning (both experiments and theories). We also performed a simulation study to showcase the superiority of our sequential policy update structure over naive independent policy gradient updates.
**Experiments.** Some empirical attempts showed that independent policy gradient learning can achieve surprisingly strong performance in MARL, such as MAPPO (Yu et al., 2021), IPPO (de Witt et al., 2020), and (Papoudakis et al., 2021).
Despite the empirical success, these methods have several drawbacks. IPPO and MAPPO assume homogeneity (agents share the same action space and policy parameters), so parameter sharing is required. Even when parameter sharing is turned off, they still lack monotonic improvement guarantees, despite being called PPO-based algorithms. Recall that the main virtue of vanilla TRPO (Schulman et al., 2015) is monotonicity. Moreover, these methods do not come with any convergence guarantees, and the convergence problem becomes more severe when parameter sharing is switched off. A counterexample in (Kuba et al., 2022, Proposition 1) shows that parameter sharing can lead to an exponentially worse sub-optimal outcome.
Thanks to the sequential agents' structure and novel multi-agent mirror-descent analyses, we present the first MARL algorithm that converges at a sub-linear rate. Note that our results rely neither on the homogeneity of agents nor on a value function decomposition rule.
**Theories.** Several theoretical works have studied convergence guarantees of independent policy optimization algorithms to a Nash equilibrium (NE) policy in MARL (Daskalakis et al., 2020; Leonardos et al., 2022; Fox et al., 2022; Ding et al., 2022). Specifically, Daskalakis et al. (2020) studied competitive RL, and the others studied convergence to the NE policy in Markov potential games (an extension of fully-cooperative games). However, we argue that a NE policy is not necessarily optimal in terms of the value function.
In contrast to their work, we present the first provable multi-agent policy optimization algorithm that finds a policy with a near globally optimal value function equipped with a sub-linear convergence rate.
**Simulation.** To further validate the theoretical and experimental benefits of our algorithm, we conducted a numerical simulation comparing our sequential update structure with naive independent policy gradient updates. We consider von Neumann's ratio game, a simple stochastic game also used by Daskalakis et al. (2020). Simulation results show that, unlike our algorithm, the independent learning method has significant difficulty escaping the stationary point. Moreover, our algorithm consistently outperforms independent learning in maximizing the value function. See Section E for detailed settings and results.
## 5 Pessimistic MA-PPO with Linear Function Approximation
In this section, we study the off-policy setting, using samples from a data distribution \(\mu\) to evaluate \(Q_{\mathbf{\pi}}\). Experimentally, since function approximators often cause a positive bias in value function estimates (Thrun and Schwartz, 1993), many deep off-policy actor-critic algorithms introduce pessimism to reduce such overestimation (Fujimoto et al., 2018; Laskin et al., 2020). We also adopt pessimistic policy evaluation in this setting, in line with these experimental works.
We focus on the setting where value functions and policies are linearly parameterized. Our results extend to the general function approximation setting, which is presented in Appendix D.
**Definition 5.1** (Linear Function Approximation).: Let \(\phi\) be the set of conditionally built feature mappings, defined as in Section 4. Define the action value function class as \(\mathcal{F}^{m}=\{\phi^{\top}\omega:\omega\in\mathbb{R}^{d},\ \|\omega\|_{2}\leq L,\ \phi^{\top}\omega\in[0,\frac{1}{1-\gamma}]\}\). The policy class is still parameterized as log-linear: \(\Pi^{m}=\{\pi\propto\exp(\phi^{\top}\theta):\theta\in\mathbb{R}^{d},\|\theta\|_{2}\leq R\}\) (cf. Section 4).
_Remark 5.2_.: Under the definition, for any \(m\in\mathcal{N}\) and policy \(\mathbf{\pi}\), there must exist a parameter \(\omega\in\mathbb{R}^{d}\) that satisfies
\[Q_{\mathbf{\pi}}^{1:m}(s,\mathbf{a}^{1:m})=\phi(s,\mathbf{a}^{1:m})^{\top}\omega\]
In this section, we fix the initial state at a certain \(s_{0}\). Thus the expected reward we aim to maximize is defined as
\[J(\mathbf{\pi})\triangleq V_{\mathbf{\pi}}(s_{0}).\]
Note that, in single-agent offline RL, only one policy affects the action at a particular state, so we can gauge the quality of value function estimates using an offline dataset \(\mathcal{D}\) consisting of states, actions, rewards, and transitions. Intuitively, when the following \(L_{0}\) approaches \(0\), \(f\) is a good approximation of the \(Q\)-function (Xie et al., 2021).
\[L_{0}=\frac{1}{n}\sum_{(s,a,r,s^{\prime})\sim\mathcal{D}}\big{(}f(s,a)-r- \gamma f(s^{\prime},\pi)\big{)}^{2}\]
where \(f(s^{\prime},\pi)\) is shorthand for \(\sum_{a^{\prime}}f(s^{\prime},a^{\prime})\pi(a^{\prime}|s^{\prime})\), which will be used frequently in this section.
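A direct transcription of \(L_{0}\) for a finite dataset might look as follows; the value model `f`, the policy `pi`, and the action set are hypothetical callables/inputs used only for illustration.

```python
def l0_loss(f, pi, dataset, actions, gamma):
    """L_0 = (1/n) * sum_{(s,a,r,s')} ( f(s,a) - r - gamma * f(s', pi) )^2.

    f(s, a) is a scalar value model, pi(a, s) an action probability, and
    dataset a list of (s, a, r, s') tuples.
    """
    total = 0.0
    for s, a, r, s_next in dataset:
        f_next = sum(pi(a2, s_next) * f(s_next, a2) for a2 in actions)  # f(s', pi)
        total += (f(s, a) - r - gamma * f_next) ** 2
    return total / len(dataset)
```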
However, in the multi-agent environment, the complex dependency structure precludes the use of such an offline dataset. Specifically, for the \(m\)-th agent and policy \(\mathbf{\pi}\), estimating the multi-agent value function \(Q_{\mathbf{\pi}}^{1:m}\) demands that all agents not in \(\{1:m\}\) follow \(\mathbf{\pi}\) (cf. Definition 3.1), which cannot be guaranteed by an offline dataset.
Therefore, online interactions are unavoidable in the multi-agent setting we study. Below we clarify the sample-generating protocol.
We will collect state-action samples from a fixed _data distribution_\(\mu=\mu_{s}\mu_{a}\in\Delta(\mathcal{S}\times\mathbf{\mathcal{A}})\). In the benign case, a well-covered \(\mu\) guarantees adequate exploration over the whole state and action spaces. Assume we have access to the following standard RL oracle.
**Definition 5.3** (Sampling Oracle).: The oracle can start from \(s\sim\mu_{s}\), take any action \(\mathbf{a}\in\mathbf{\mathcal{A}}\), and obtain the next state \(s^{\prime}\sim\mathcal{P}(\cdot|s,\mathbf{a})\), and reward \(r(s,\mathbf{a})\).
Our query oracle aligns with the classic **online sampling oracle** for MDPs (Kakade and Langford, 2002; Du et al., 2019; Agarwal et al., 2020). The difference is that we transit for one step, while the classic online model usually terminates at the end of each episode. We also note that our oracle is weaker than the **generative model** (Kearns and Singh, 2002; Kakade, 2003; Sidford et al., 2018; Li et al., 2020), which assumes the agent can transit to **any** state and thus greatly weakens the need for explicit exploration, whereas our oracle starts from a fixed \(\mu_{s}\).4
Footnote 4: In MDPs, such oracle is called \(\mu\)-reset model (Kakade and Langford, 2002).
We take advantage of the sampler in the following steps to obtain action value functions that preserve a small error under the multi-agent Bellman operator (cf. Definition 5.1). For agent \(m\in\mathcal{N}\) and \(\mathbf{\pi}\), (1) obtain \(s\sim\mu_{s}\); (2) obtain \(\mathbf{a}\sim\mu_{a}\) and \(\mathbf{a}^{\prime}\sim\mathbf{\pi}^{m+1:N}(\cdot|s)\); (3) take \((\mathbf{a}^{1:m},\mathbf{a}^{\prime})\) as the joint action to query the oracle where \(\mathbf{a}^{1:m}\) represents the \(\{1:m\}\) subset of \(\mathbf{a}\). The oracle returns \((r,s^{\prime})\), which are guaranteed to satisfy:
\[r\sim\operatorname*{\mathbb{E}}_{\tilde{\mathbf{a}}\sim\mathbf{\pi}^{m+1:N}}R(s, \mathbf{a}^{1:m},\tilde{\mathbf{a}}),\quad s^{\prime}\sim\operatorname*{ \mathbb{E}}_{\tilde{\mathbf{a}}\sim\mathbf{\pi}^{m+1:N}}\mathcal{P}(\cdot|s, \mathbf{a}^{1:m},\tilde{\mathbf{a}}).\]
Repeat these steps for \(n\) times. Together this gives dataset \(\mathcal{D}^{m}=\{(s_{i},\mathbf{a}^{1:m}_{i},r_{i},s^{\prime}_{i})|i=1,2, \cdots n\}\). Define
\[L^{1:m}(f^{\prime},f,\mathbf{\pi})\coloneqq\frac{1}{n}\sum_{\mathcal{D}^{m}}\left( f^{\prime}(s,\mathbf{a}^{1:m})-r-\gamma f(s^{\prime},\mathbf{\pi}^{1:m})\right)^{2}\]
where \(f\in\mathcal{F}^{m}\) (cf. Definition 5.1) and the summation is taken over \(n\) quadruples of \((s,\mathbf{a}^{1:m},r,s^{\prime})\).
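The sampling protocol and the loss \(L^{1:m}\) translate almost line by line into the sketch below; the oracle of Definition 5.3 and the samplers for \(\mu_{s}\), \(\mu_{a}\), and \(\mathbf{\pi}^{m+1:N}\) are left abstract, and the helper that computes \(f(s^{\prime},\mathbf{\pi}^{1:m})\) is an assumed input.

```python
def collect_dataset_m(oracle, sample_mu_s, sample_mu_a, sample_tail, m, n):
    """Collect D^m = {(s, a^{1:m}, r, s')} with the sampling oracle (Definition 5.3)."""
    data = []
    for _ in range(n):
        s = sample_mu_s()                                    # (1) s ~ mu_s
        a = sample_mu_a(s)                                   # (2) a ~ mu_a, length-N tuple
        a_tail = sample_tail(s, m)                           #     a' ~ pi^{m+1:N}(.|s)
        r, s_next = oracle(s, tuple(a[:m]) + tuple(a_tail))  # (3) joint action query
        data.append((s, tuple(a[:m]), r, s_next))
    return data

def l_1m_loss(f_prime, f, f_next_under_pi, dataset_m, gamma):
    """L^{1:m}(f', f, pi) = (1/n) sum ( f'(s, a^{1:m}) - r - gamma * f(s', pi^{1:m}) )^2.

    f_next_under_pi(f, s) should return E_{a^{1:m} ~ pi^{1:m}(.|s)}[ f(s, a^{1:m}) ].
    """
    total = 0.0
    for s, a_1m, r, s_next in dataset_m:
        total += (f_prime(s, a_1m) - r - gamma * f_next_under_pi(f, s_next)) ** 2
    return total / len(dataset_m)
```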
We will need the following Bellman error to evaluate the quality of \(f\).
\[\mathcal{E}^{1:m}(f,\mathbf{\pi})=L^{1:m}(f,f,\mathbf{\pi})-\min_{f^{\prime}\in \mathcal{F}^{m}}L^{1:m}(f^{\prime},f,\mathbf{\pi}). \tag{5}\]
Intuitively, we consider \(f\) a good approximation of \(Q^{1:m}_{\mathbf{\pi}}(s,\mathbf{a}^{1:m})\) when this quantity is small. The formulation also works for general function approximation; see Appendix D for details.
We shall need a concentrability measure accounting for the distributional mismatch.
**Definition 5.4** (Concentrability).: The following condition characterizes the distribution shift from the \(d_{\mathbf{\pi}_{*}}\) to the sampling distribution.
\[\mathcal{C}^{d_{\mathbf{\pi}_{*}}}_{\mu}=\sup_{m\in\mathcal{N},f\in\mathcal{F}^{m },\mathbf{\pi}\in\Pi^{m}}\frac{\left\|f-\mathcal{T}^{1:m}_{\mathbf{\pi}}f\right\|_{2,d _{\mathbf{\pi}_{*}}}}{\left\|f-\mathcal{T}^{1:m}_{\mathbf{\pi}}f\right\|_{2,\mathcal{D }^{m}}}.\]
Recall that \(\|\cdot\|_{2,\rho}\) is the weighted \(L_{2}\)-norm. In the numerator, the sum is taken over \((s,\mathbf{a}^{1:m})\sim d_{\mathbf{\pi}_{*}}\), whereas in the denominator the sum is taken over \((s,\mathbf{a}^{1:m})\) from \(\mathcal{D}^{m}\), an empirical version of \(\mu\). This notion serves a similar role as concentrability coefficients in the literature (Munos, 2003; Agarwal et al., 2020): it measures the distributional mismatch between the underlying optimal distribution and the distribution of the samples we employ.
**Policy Evaluation.** At the \(k\)-th iteration, we have the current policy \(\mathbf{\pi}_{\theta_{k}}\). We perform pessimistic policy evaluation via regularization to reduce value bias when evaluating \(Q^{1:m}_{\mathbf{\pi}_{\theta_{k}}}\):
\[\omega^{m}_{k}\leftarrow\operatorname*{arg\,min}_{\omega}\left(f(s_{0},\mathbf{\pi}_{\theta_{k}}^{1:m})+\lambda\mathcal{E}^{1:m}(f,\mathbf{\pi}_{\theta_{k}})\right),\qquad f=\phi^{\top}\omega.\]
Here \(\mathcal{E}\) is the Bellman error defined in (5). We obtain \(f_{k}^{m}=\phi^{\top}\omega_{k}^{m}\) as the pessimistic estimate of \(Q^{1:m}_{\mathbf{\pi}_{\theta_{k}}}\). This update has a closed-form solution under linear function approximation (cf. Definition 5.1). Moreover, the inner minimization in the Bellman error can be solved efficiently because of its quadratic dependence on \(\omega\). See details in Appendix C.
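Under linear features, the inner minimization over \(f^{\prime}\) in the Bellman error reduces to least squares, so the pessimistic evaluation step can be sketched as below. This is only an illustrative sketch: the norm and range constraints of Definition 5.1 are omitted, the generic `scipy` minimizer stands in for the efficient solution discussed in Appendix C, and the inputs (feature matrices, rewards) are assumed to come from \(\mathcal{D}^{m}\).

```python
import numpy as np
from scipy.optimize import minimize

def bellman_error_linear(omega, Phi, rewards, Phi_next_pi, gamma):
    """E^{1:m}(f, pi) from Eq. (5) with f = phi^T omega under linear features.

    Phi:         (n, d) rows phi(s_i, a_i^{1:m}) from D^m.
    Phi_next_pi: (n, d) rows E_{a ~ pi^{1:m}(.|s'_i)}[ phi(s'_i, a) ].
    """
    targets = rewards + gamma * (Phi_next_pi @ omega)
    loss_self = np.mean((Phi @ omega - targets) ** 2)
    # the inner minimization over f' = phi^T omega' is ordinary least squares
    omega_star, *_ = np.linalg.lstsq(Phi, targets, rcond=None)
    loss_best = np.mean((Phi @ omega_star - targets) ** 2)
    return loss_self - loss_best

def pessimistic_evaluation(phi_s0_pi, Phi, rewards, Phi_next_pi, gamma, lam):
    """omega_k^m <- argmin_omega  f(s0, pi^{1:m}) + lam * E^{1:m}(f, pi)."""
    d = Phi.shape[1]
    objective = lambda w: phi_s0_pi @ w + lam * bellman_error_linear(
        w, Phi, rewards, Phi_next_pi, gamma)
    return minimize(objective, np.zeros(d)).x
```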
**Policy Improvement.** When both value functions and policies are linearly parameterized (cf. Definition 5.1), the mirror descent policy update, for any \((s,\mathbf{a}^{1:m})\in\mathcal{S}\times\mathcal{A}^{m}\),
\[\pi_{k+1}^{m}(a^{m}|s,\mathbf{a}^{1:m-1})\propto\pi_{k}^{m}(a^{m}|s,\mathbf{a }^{1:m-1})\cdot\exp(\eta f_{k}^{m}(s,\mathbf{a}^{1:m})) \tag{6}\]
could be further simplified to parameter updates in \(\mathbb{R}^{d}\)
\[\theta_{k+1}^{m}=\theta_{k}^{m}+\eta\omega_{k}^{m}.\]
This observation makes policy improvement in this setting significantly simpler than in Section 4: at the \(k\)-th iteration, for agent \(m\in\mathcal{N}\), we only need to add \(\eta\omega_{k}^{m}\) to the policy parameter \(\theta_{k}^{m}\).
**Algorithm.** With the pessimistic policy evaluation and the simple policy improvement step, our pessimistic variant of the multi-agent PPO algorithm is presented in Algorithm 2.
```
Input: Regularization coefficient \(\lambda\).
Output: Uniformly sample \(k\) from \(0,1,\cdots,K-1\), return \(\bar{\mathbf{\pi}}=\mathbf{\pi}_{\theta_{k}}\).
1: Initialize \(\theta_{0}^{m}=0\) for every \(m\in\mathcal{N}\).
2:for\(k=0,1,\ldots,K-1\)do
3:for\(m=1,2,\cdots,N\)do
4: Pessimistic policy evaluation: \(\omega_{k}^{m}\leftarrow\underset{\omega}{\arg\min}\left(f(s_{0},\mathbf{\pi}_{ \theta_{k}}^{1:m})+\lambda\mathcal{E}^{1:m}(f,\mathbf{\pi}_{\theta_{k}})\right)\).
5: Policy improvement: \(\theta_{k+1}^{m}=\theta_{k}^{m}+\eta\omega_{k}^{m}\).
6:endfor
7:endfor
```
**Algorithm 2** Pessimistic Multi-Agent PPO with Linear Function Approximation
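A compact sketch of the loop of Algorithm 2 is given below; `pessimistic_eval` is a hypothetical stand-in for line 4 (data collection plus the regularized minimization), and the uniform sampling of the output iteration follows the Output line.

```python
import numpy as np

def pessimistic_ma_ppo(num_agents, d, K, eta, pessimistic_eval, seed=0):
    """Schematic loop of Algorithm 2.

    pessimistic_eval(thetas, m) stands in for line 4: collect D^m with the
    sampling oracle and return omega_k^m; line 5 is the parameter update.
    """
    rng = np.random.default_rng(seed)
    thetas = [np.zeros(d) for _ in range(num_agents)]        # line 1: theta_0^m = 0
    history = []
    for _ in range(K):                                        # line 2
        history.append([t.copy() for t in thetas])
        for m in range(num_agents):                           # line 3
            omega = pessimistic_eval(thetas, m)               # line 4
            thetas[m] = thetas[m] + eta * omega               # line 5
    return history[rng.integers(K)]  # output: pi_{theta_k} for uniform k in {0,...,K-1}
```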
Now we are prepared to present the main theorem for this section.
**Theorem 5.5**.: _For the output policy \(\bar{\mathbf{\pi}}\) attained by Algorithm 2 in a fully cooperative Markov game, set \(\eta=(1-\gamma)\sqrt{\frac{\log|\mathcal{A}|}{2K}}\) and \(\lambda=(1-\gamma)^{-1}\left(\frac{d\log\frac{nLR}{\delta}}{n}\right)^{-2/3}\). After \(K\) iterations, with probability at least \(1-\delta\) we have \(J(\mathbf{\pi}_{*})-J(\bar{\mathbf{\pi}})\) upper bounded by_
\[\mathcal{O}\left(\frac{N}{(1-\gamma)^{2}}\sqrt{\frac{\log|\mathcal{A}|}{K}}+ \frac{\mathcal{C}_{\mu}^{d\mathbf{\pi}_{*}}}{(1-\gamma)^{2}}\sqrt{\frac{d\log \frac{nLR}{\delta}}{n}}\right)\]
To interpret this bound, the first term accounts for the optimization error accumulated by the mirror descent updates (6). It has an \((1-\gamma)^{-2}\) dependency on the discount factor, which may not be tight; we leave improving it to future work. The second term represents the estimation errors accumulated during training. We use state-action pairs from \(\mu\) and the sampling oracle to minimize \(\mathcal{E}^{1:m}(f,\mathbf{\pi})\), thereby introducing a _distribution mismatch_ expressed by \(\mathcal{C}_{\mu}^{d_{\mathbf{\pi}_{*}}}\). Note that this single-policy concentrability is already weaker than traditional concentrability coefficients (Munos, 2003; Farahmand et al., 2010; Perolat et al., 2015). Intuitively, a small concentrability value requires the data distribution \(\mu\) to be close to \(d_{\mathbf{\pi}_{*}}\), the unknown occupancy distribution of the optimal policy. On the other hand, if \(\mathcal{C}_{\mu}^{d_{\mathbf{\pi}_{*}}}\) is large, the bound becomes loose. We provide a similar result for general function approximation in the appendix (cf. Theorem D.7).
There is no explicit dependence on the state space \(\mathcal{S}\) in the theorem. Hence the online algorithm provides guarantees under function approximation even in the infinite-state setting.
To prove Theorem 5.5, the quantitative analysis of Bellman-consistent pessimism (Xie et al., 2021) is useful. We obtain statistical and convergence guarantees by exploiting the conditional dependency structure of cooperative Markov games. See Appendix C for details.
## 6 Conclusion
In this work, we present a new multi-agent PPO algorithm that converges to the globally optimal policy at a sublinear rate. The key to the algorithm is a multi-agent performance difference lemma which enables sequential local policy updates. As a generalization, we extend the algorithm to the off-policy setting and present similar convergence guarantees. To our knowledge, this is the first multi-agent PPO algorithm in cooperative Markov games that enjoys provable guarantees.
## Acknowledgements
JDL acknowledges support of the ARO under MURI Award W911NF-11-1-0304, the Sloan Research Fellowship, NSF CCF 2002272, NSF IIS 2107304, NSF CIF 2212262, ONR Young Investigator Award, and NSF CAREER Award 2144994.